GAO-18-406
Background Purpose and Structure of the Judicial Conference and AOUSC The Judicial Conference of the United States is the national policy-making body of the federal courts. The Chief Justice of the United States is the presiding officer of the Judicial Conference. The Conference operates through a network of 20 committees, including the Committee on Financial Disclosure. The Judicial Conference delegated authority to redact information from a financial disclosure report to the Committee on Financial Disclosure. Upon request from a judicial official, the committee, in consultation with the USMS, redacts the information when it decides that revealing such personal or sensitive information could endanger the judicial official or a member of his or her family. Responsibilities of the Committee on Financial Disclosure include reviewing reports filed, adjudicating requests for redactions of information from the report, approving and modifying reporting forms and instructions, and monitoring the release of reports to ensure compliance with statute and the committee’s guidance. The Judicial Conference of the United States is responsible for implementing the judiciary’s redaction authority in a manner that provides judicial officials with the intended security measures without compromising timely public access to judicial officials’ financial disclosure reports. AOUSC is the agency within the judicial branch that provides a broad range of legislative, legal, financial, technology, management, administrative, and program support services to federal courts. It is responsible for carrying out Judicial Conference policies, and one of its primary responsibilities is to provide staff support and counsel to the Judicial Conference and its committees, including the Committee on Financial Disclosure. The Director of AOUSC serves as the Secretary to the Judicial Conference and is an ex officio member of the Executive Committee. 
Legislative Basis for Filing Financial Disclosure Reports The Ethics in Government Act of 1978, as amended, requires specified judicial, legislative, and executive branch officials to file annual financial disclosure reports in the spring of each year. These reports include financial information for the previous calendar year. Financial disclosure reports are made up of nine parts—positions, agreements, non-investment income, reimbursements, gifts, liabilities, investments and trusts, explanatory comments, and certification and signature. (See appendix I for a copy of a blank annual financial disclosure report.) In addition to filing an annual report, covered judicial officials are required to file financial disclosure reports when nominated (nomination report); within 30 days of taking office (initial report); and within 30 days of leaving their position (final report)—see table 1. Federal law also requires that copies of judicial officials’ financial disclosure reports be made available, upon written request, to members of the public. Judicial officials may request that certain information be redacted before their financial disclosure reports are sent to the requesting individuals. Legislative Basis for Judicial Redaction Authority The judiciary’s authority to redact information from financial disclosure reports was established in 1998 and was initially authorized for a 3-year period. That legislation also instituted an annual congressional reporting requirement for the judiciary on the operation of the redaction authority. Over the past 20 years, the judiciary’s redaction authority and reporting requirement have been successively reauthorized for various periods of time, but have lapsed on occasion. The authority was most recently reauthorized on March 23, 2018, through the end of 2027. 
According to AOUSC officials, while the redaction authority lapsed, the Committee on Financial Disclosure did not grant any new redaction requests, but it did grant requests to continue redactions that were approved prior to December 31, 2017. The Judiciary’s Process for Adjudicating Redaction Requests and Responding to Requests for Copies of Financial Disclosure Reports The Judicial Conference, through its Committee on Financial Disclosure, has developed a multistep process for reviewing federal judges’ requests for redactions of information from their financial disclosure reports and requests for copies of these reports, as shown in figure 1. While the committee encourages judicial officials to request redactions at the time they file their financial disclosure reports, AOUSC officials stated that most redaction requests were made after judicial officials were notified that copies of their reports had been requested. A judicial official may request a redaction of information when his or her financial disclosure report is filed or after receiving a notification of a request for a copy of his or her financial disclosure report. When requesting a redaction, the judicial official must state specifically what information is sought to be redacted and the justification for the redaction. The Committee on Financial Disclosure will determine, in consultation with the USMS, if the information could endanger the judicial official or an immediate family member. For redaction requests involving information pertaining to the unsecured location of (1) a spouse’s employer, (2) a child’s school, or (3) a primary or secondary residence, a separate security consultation is not required based on an agreement AOUSC reached with the USMS memorialized in a 2004 letter that, in essence, serves as a security consultation. For all other types of information requested to be redacted, a further USMS security consultation is required. 
Taking into account the information provided by the judicial officials, as well as results from the USMS security consultations, members of the Subcommittee on Public Access and Security, a subcommittee under the Committee on Financial Disclosure, decide—by majority vote—to either grant (in whole or in part) or deny each redaction request. Such redactions remain in effect until the end of the calendar year in which they are granted. The Committee on Financial Disclosure notifies the judicial official whether the redaction request has been granted, granted in part, or denied. Judicial officials can appeal a redaction decision; however, according to AOUSC officials, there were no appeals from 2012 through 2016, the time period covered by our review. The Judicial Conference Has Developed Procedures to Ensure Judicial Officials File Financial Disclosure Reports, and More Than 4,000 Reports Are Filed Annually The Judicial Conference’s Committee on Financial Disclosure has developed an electronic report filing system, written guidance, and a compliance process to help ensure judicial officials file their financial disclosure reports. Specifically, in 2011, AOUSC switched from having judicial officials file financial disclosure reports in hard copy to electronic filing through an online electronic depository, the Financial Disclosure Online Filing System (FiDO). AOUSC also uses a separate internal electronic database (LEGO) to track compliance with financial disclosure report filings. LEGO contains the entire database of judicial filers, including what reports should be filed, the dates financial disclosure reports are due, and which are in process. The Committee on Financial Disclosure stated in September 2014 that FiDO had been upgraded, but committee members continued to experience limitations with the system. For example, according to AOUSC officials, FiDO does not keep track of which reports are in process or when they are due. 
Accordingly, the committee members authorized an assessment to look for an alternative system that would meet their needs and, by 2016, had selected software currently being used by the government to be customized for the judiciary. According to AOUSC officials, the plan is for the Judiciary Electronic Filing System (JEFS) to replace both FiDO and LEGO and be used for filing financial disclosure reports and tracking compliance with filing requirements beginning in 2019. The Committee on Financial Disclosure also provides guidance to judicial officials to ensure that financial disclosure reports are filed correctly. The types of guidance provided include the Guide to Judiciary Policy, Filing Instructions for Judicial Officers and Employees, and a Step by Step Guide for the Preparation and Electronic Filing of Financial Disclosure Reports. Additionally, members of the Committee on Financial Disclosure are to review each filed financial disclosure report to confirm that required items have been sufficiently reported and that the filer is in compliance with applicable laws and regulations. In addition, for some sections, members of the committee will compare information provided in a filed report with what was reported in a prior year’s report to ensure the information reported is accurate and consistent. The Committee on Financial Disclosure also provides guidance on the process to be followed if a judicial official fails to file a required financial disclosure report. Specifically, the Guide to Judiciary Policy states that a late filing fee of $200 will be assessed if a report is filed more than 30 days after the report is due. Further, the Chairman of the Committee on Financial Disclosure is to write a letter to any noncompliant filer. 
In addition to the guidance described above, in 2013, the Committee on Financial Disclosure reported that it would establish specific procedures for securing filer compliance with all reporting requirements and the late filing assessments. In 2014, the Committee reported on the successful implementation of these new policies. Part of this effort included developing templates for three successive communications that are to be provided to a noncompliant filer. The communications reflect a progressively increasing level of urgency in language and content, culminating in explicit warnings that if a noncompliant filer does not comply, the matter can be referred to the Attorney General. From calendar years 2012 through 2016, more than 4,000 financial disclosure reports were required to be filed each year by judicial officials, as shown in table 2. Most of the reports filed were annual reports. According to AOUSC officials, as of March 2018, all annual financial disclosure reports required to be filed from calendar years 2012 through 2016 were filed, except for one for calendar year 2015. Additionally, all nominee and initial financial disclosure reports required to be filed during this time period were filed, and all but one final financial disclosure report, for calendar year 2016, were filed. The AOUSC officials stated that the remaining final report is still pending and the compliance process is being followed to ensure the report will be filed. The Judiciary is Complying with Procedures for Responding to Requests for Financial Disclosure Reports and the Number of Reports Released Has Varied from 2012 through 2016 The judiciary is complying with the Judicial Conference’s Guide to Judiciary Policy (Volume 2, Part D, Chapters 3-4), which sets forth the process for releasing financial disclosure reports. First, members of the public may request financial disclosure reports by submitting Form AO 10A (see appendix II for a blank copy of the Form AO 10A). 
The Committee on Financial Disclosure notifies the judicial official that a Form AO 10A has been received and provides the official with a copy. At that time, the judicial official has up to 10 days to decide whether or not to request that information from the financial disclosure report be redacted. Once the members of the Subcommittee on Public Access and Security have reviewed any redaction requests and any accompanying USMS security consultation results, the members vote on whether or not to grant redactions and then forward the results to AOUSC staff for final processing. In March 2017, the Judicial Conference approved the release of financial disclosure reports by electronic storage device free of charge in order to expedite the release of requested reports. As a result, once AOUSC staff receive the redaction decisions from the Subcommittee, AOUSC staff are to ensure that approved redactions are made to the financial disclosure reports, and then download the reports to electronic storage devices to mail to the requesting parties. The AOUSC received, on average, about 70 requests for copies of judicial officials’ financial disclosure reports each year from calendar years 2012 through 2016 using the AO 10A request form. The form can include a request for the financial disclosure report of one judicial official, or for multiple judicial officials. Additionally, the form could include a request for multiple years of financial disclosure reports. Based on the AO Form 10As received from calendar years 2012 through 2016, AOUSC released approximately 16,000 financial disclosure reports. The number of financial disclosure reports released each year varied during this time period, as shown in table 3. According to AOUSC officials, the number of financial disclosure reports released each year varies based on the number of requests received and the time of year the requests are submitted. 
For example, a requester might submit a Form AO 10A late in the calendar year and the requested reports could be released the following calendar year based on how long it takes to process the request. AOUSC officials noted that there are two organizations that have requested copies of the financial disclosure reports for all federal judges every year. In 2016, AOUSC received the requests late in the year and, therefore, was not able to release the reports until 2017. Few Judicial Officials Requested Redactions and They Pertained Mostly to the Unsecured Location of Family Members, but the Judiciary Has Not Reported Redaction Results to Congress in a Timely Manner On Average, 3.2 Percent of Judicial Officials Requested Redactions from 2012 through 2016 The number of judicial officials who requested redactions represents a small percentage of the total number of financial disclosure reports filed in recent years. As shown in table 4, the number of redaction requests ranged from a low of 112 in 2014 to a high of 162 in 2012 and 2015. For calendar years 2012 through 2016, there were a total of 716 requests for redaction of information from judicial officials’ financial disclosure reports—711 from judges and 5 from judicial employees—with a yearly average of about 143 redaction requests. In particular, for calendar years 2012 through 2016, judicial officials’ redaction requests accounted for, on average, 3.2 percent of the total financial disclosure reports filed during this time period, as shown in table 5. When we segregated the results by judges and judicial employees, we found that, on average, 5.8 percent of judges requested redactions compared to 0.1 percent of judicial employees over the 5-year time period. Of the 3.2 percent of financial disclosure reports that included redaction requests made from 2012 through 2016, on average, about 85 percent were granted, 3 percent were partially granted, and 12 percent were denied, as seen in figure 2. 
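The averages above reduce to simple arithmetic. The following is an illustrative sketch using only the totals stated in the text (716 requests over 5 years and the 85/3/12 percent outcome split), not GAO's underlying data:

```python
# Illustrative check of the redaction-request figures stated in the text;
# values come from the report's stated totals, not raw AOUSC data.
total_requests = 716    # redaction requests, calendar years 2012-2016
years = 5
yearly_average = total_requests / years
print(round(yearly_average, 1))  # 143.2, "about 143" per the text

# Outcome shares reported for requests made over the period
granted, partial, denied = 85, 3, 12  # percent
print(granted + partial + denied)     # 100, so the shares are exhaustive
```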
Most Redaction Requests Pertained to the Unsecured Location of a Judicial Official or Immediate Family Member We analyzed AOUSC data on redaction requests made from calendar years 2012 through 2016 by type of information requested to be redacted and found that the majority (about 76 percent) of the requested redactions pertained to information related to the unsecured location of a judicial official or an immediate family member. The next largest category of information requested to be redacted was the “other” category, with 10.4 percent. Three categories—asset value, gifts, and reimbursement—each accounted for less than 1 percent of the redaction requests, as shown in figure 3. AOUSC Has Not Submitted Required Annual Redaction Reports to Congress in a Timely Manner We requested copies of the annual redaction reports submitted to Congress for calendar years 2012 through 2016 and determined that AOUSC had not submitted the annual redaction reports to congressional committees of jurisdiction in a timely manner. Specifically, we found that AOUSC submitted the annual report covering 2012 in May 2014 and submitted four annual reports (for calendar years 2013 through 2016) in February and August of 2017, as shown in table 6. For the 2013 and 2014 annual reports, AOUSC prepared and submitted them to the congressional committees of jurisdiction after we asked for them. AOUSC officials told us that they could not find evidence that they had submitted the annual reports for calendar years 2013 and 2014 to the committees of jurisdiction in a timely manner. However, AOUSC staff sent a 5-year report to congressional committees of jurisdiction in March 2017 that included information on redaction requests and results for calendar years 2012 through 2016. Thus, the congressional committees of jurisdiction had received no reports from AOUSC on redaction requests and results from May 2014 to February 2017. 
While the Ethics in Government Act of 1978, as amended, does not set a specific submission date, it requires that AOUSC submit an annual report (i.e., occurring once every year) to congressional committees of jurisdiction on the operation of the judiciary’s redaction authority. As shown in table 6 above, AOUSC did not submit an annual report every year, and there was an interval of almost three years (from May 2014 to February 2017) in which there is no record of AOUSC providing any annual redaction reports to Congress. AOUSC officials stated that although there are no reporting time frames specified in legislation for preparing and submitting the reports to the congressional committees of jurisdiction (other than annual submission), beginning in 2016, AOUSC staff began to work on preparing the redaction report for the previous year by February of the following year. The AOUSC officials acknowledged, though, that they have not implemented a formal process, with designated steps and time frames, to ensure they consistently produce the annual redaction reports in a timely manner. The AOUSC officials also stated that since 2013, the Financial Disclosure Office—the office responsible for preparing the reports—had experienced a series of changes in management, as well as staff turnover in key positions, which contributed to the inconsistent process for developing and completing the annual redaction reports in a timely manner. Given that AOUSC experienced staff turnover in the past, and could experience it in the future, it is important that AOUSC has the necessary controls in place to overcome staffing issues and ensure that it consistently prepares and submits the annual redaction reports to the committees in a timely manner. Standards for Internal Control in the Federal Government state that management should implement control activities by documenting responsibilities through policies for each unit. 
With guidance from management, each unit determines the policies necessary to achieve the desired objectives. Management should also define objectives in specific terms so they are understood at all levels. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. AOUSC officials stated that the annual reports cannot be compiled until after the close of the previous calendar year and after all data have been reviewed. While this is true, without a formal process for ensuring that staff complete the reports in a timely manner, there are no assurances that the process will consistently occur on a regular schedule, or at all. Implementing a more formal process, with specified steps and time frames, would ensure staff are fully informed of their responsibilities and allow AOUSC to be better positioned to provide the congressional committees of jurisdiction with timely redaction reports that can be used to conduct oversight of the federal judiciary’s use of its redaction authority. Conclusions The Ethics in Government Act of 1978, as amended, serves the public interest by providing access to selected information from financial disclosure reports filed by judicial officials that could represent conflicts of interest for these officials. At the same time, the law accounts for the security threats faced by judicial officials and grants the judiciary authority to redact personal and sensitive information from their financial disclosure reports if a finding is made that the release of the information could endanger these officials or members of their families. Thus, the Judicial Conference has a responsibility to balance the goals of safeguarding judicial officials’ information and providing timely public access. 
The Judicial Conference developed a compliance process to ensure judicial officials were filing financial disclosure reports that adhere to applicable laws and regulations, and also had procedures in place to ensure the public had access to copies of judicial officials’ financial disclosure reports when requested. While the Ethics in Government Act of 1978, as amended, provides the Judicial Conference with authority to redact information that could pose a security threat to judicial officials, this authority has been used sparingly. From 2012 through 2016, about 3.2 percent of financial disclosure reports included a redaction request and about 85 percent of those were approved. Nevertheless, the law requires AOUSC to submit an annual report to congressional committees of jurisdiction on the operation of the judiciary’s redaction authority, including information on the total number of reports with redactions and the types of information redacted. Our review of available guidance and documentation shows that AOUSC has not implemented a formal process for producing annual redaction reports and has not submitted these reports to Congress in a timely manner. Implementing a more formal process, with specified steps and timeframes, would allow AOUSC to be better positioned to provide congressional committees of jurisdiction with the required annual redaction reports that can be used to conduct oversight of the federal judiciary’s use of its redaction authority. This is particularly important given that Congress recently passed an extension to the judiciary’s redaction authority through the end of 2027. Recommendation for Executive Action The Director of AOUSC should develop and implement a formal process, with specified steps and associated time frames, to better ensure that required annual redaction reports are completed and submitted to Congress within the following year. 
Agency Comments and Our Evaluation In April 2018, we requested comments on a draft of this report from DOJ, USMS, and AOUSC. Neither DOJ nor USMS had any comments. AOUSC provided technical comments, which we have incorporated into the report, as appropriate. In particular, based on AOUSC comments, we amended the report title to provide greater clarity into the subject matter of the report and added additional text to the conclusions section to better address all aspects of the report’s findings. In addition to its technical comments, AOUSC provided an official letter for inclusion in the report, which can be seen in appendix III. In its letter, AOUSC stated it concurred with the recommendation and will determine how best to implement a more formalized process to better ensure it can submit annual redaction reports to Congress in a timely manner. We are sending copies of this report to the Administrative Office of the U.S. Courts, the Attorney General, the United States Marshals Service, selected congressional committees, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Administrative Office of the U.S. Courts Form AO 10: Blank Financial Disclosure Report for Calendar Year 2016 Appendix II: Administrative Office of the U.S. Courts Form AO 10A Used for Requesting Copies of Judicial Officials’ Financial Disclosure Reports Appendix III: Comments from the Administrative Office of the U.S. 
Courts Appendix IV: GAO Contact and Staff Acknowledgements GAO Contact Staff Acknowledgements In addition to the contact named above, Christopher Conrad (Assistant Director) and Valerie Kasindi (Analyst-in-Charge) managed this assignment. Kristiana Moore, Dominick Dale, Melissa Hargy, Eric Hauswirth, Amanda Miller, Jerry Sandau, and Janet Temko-Blinder made key contributions to this report.
Why GAO Did This Study Under the Ethics in Government Act of 1978, as amended, federal judges and certain judicial employees must file financial disclosure reports that can be made available to the public. Federal law accounts for the potential security risks of the judiciary and authorizes the redaction of information from judicial officials' reports if the Judicial Conference, in consultation with the United States Marshals Service (USMS), finds that revealing certain information could endanger judicial officials or members of their families. This report addresses the following for calendar years 2012 through 2016, the most recent years for which full data were available: (1) Actions taken by the Judicial Conference to ensure judicial officials file financial disclosure reports, and the number of reports filed; (2) The judiciary's compliance with procedures for responding to requests for financial disclosure reports and the number of reports released; and (3) The number of redaction requests made, the types of information requested to be redacted, and the judiciary's consistency in reporting results to Congress in a timely manner. GAO interviewed AOUSC and USMS officials, reviewed relevant laws and guidance, and analyzed data on redaction requests. What GAO Found The Judicial Conference, the federal judiciary's principal policy-making body, developed an electronic filing system, guidance, and a compliance process to help ensure judicial officials file financial disclosure reports that adhere to applicable laws and regulations, and data provided by the Administrative Office of the U.S. Courts (AOUSC) show that more than 4,000 reports were required to be filed annually from 2012 through 2016. According to AOUSC officials, as of March 2018, all financial disclosure reports required to be filed from 2012 through 2016 were filed, except for one in 2015 and one in 2016. AOUSC officials are working with the filers to ensure these reports will be filed. 
The Judicial Conference established procedures for responding to requests for copies of financial disclosure reports, and the number of reports released has varied. From 2012 through 2016, AOUSC annually received, on average, about 70 requests for copies of judicial officials' reports and released approximately 16,000 reports during this time. Each request can vary—from a request for a single judicial official's report to a request for multiple judicial officials' reports. From 2012 through 2016, a small percentage of judicial officials requested redactions from their financial disclosure reports. On average, 3.2 percent of financial disclosure reports filed included a redaction request and about 85 percent of those requests were granted. Of the information requested to be redacted, about 76 percent was related to the unsecured location of a judicial official's spouse, child, or residence. AOUSC is required by federal law to submit annual reports to Congress on use of the judicial redaction authority, such as the number of reports with redactions and types of information redacted, but AOUSC has not consistently submitted the reports on an annual basis in recent years. GAO found that AOUSC does not have a formal process for preparing and submitting the reports to Congress. Implementing a more formal process, with specified steps and timeframes, would better position AOUSC to provide Congress with more timely reports. What GAO Recommends GAO recommends that AOUSC develop and implement a formal process, with steps and timeframes, to better ensure that required annual reports are submitted to Congress within the following year. AOUSC concurred with the recommendation.
GAO-18-158
Background The U.S. strategic nuclear deterrent is spread among three legs, as depicted in figure 1. DOD has continued to reinforce the high priority of the Columbia class program to the nation’s long-term defense. SSBNs are designed to maximize stealth to remain undetected while on patrol at sea. This survivability gives the United States a credible ability to retaliate if faced with an attack targeting other legs of the triad, and explains DOD’s decision to ultimately deploy up to 70 percent of the nation’s nuclear warheads on SSBNs. As stated in its April 2010 Nuclear Posture Review Report, DOD determined that ensuring a survivable U.S. deterrent requires continuous at-sea deployments of SSBNs in both the Atlantic and Pacific oceans, as well as the ability to surge additional submarines in crisis. Currently, 14 Ohio class SSBNs provide the sea-based strategic deterrent. The Navy commissioned the lead ship of this fleet in 1981. The first Ohio class SSBN to retire—SSBN 730—will leave service in 2027, and the Navy plans to retire one per year thereafter. When these submarines retire, they will have been in service over 40 years, longer than any previous submarines. Navy officials have stated that the legacy Ohio fleet cannot be life-extended any longer than what is planned due to aging issues. The U.S. Strategic Command (STRATCOM) retains operational control of the strategic triad and determines how many SSBNs are needed to patrol on a day-to-day basis. STRATCOM and the Navy have determined that 10 operationally available SSBNs are needed to meet mission requirements. As a result, the lead Columbia class submarine must be available for its first deterrent patrol in the first quarter of fiscal year 2031 to coincide with the planned 2031 retirement of SSBN 734, or the Navy will not have 10 operationally available SSBNs, thereby requiring DOD to identify other steps to ensure it can meet current deterrent requirements. 
The Navy expects that it can meet mission requirements with 12 Columbia class submarines carrying 16 missile tubes (equating to a total of 192 available tubes) in lieu of 14 Ohio class submarines carrying 24 tubes (336 total available tubes). Currently, it takes 14 Ohio class submarines to provide 10 operationally available SSBNs due to maintenance needs that can take up to 4 submarines out of the patrol rotation at any given time. The Navy plans to reduce the number and duration of required maintenance periods for the Columbia class, allowing just 12 Columbia class submarines to provide the required 10 operational submarines at all times. Between fiscal years 2031 and 2040, the Navy plans to have a mix of 10 operationally available Columbia and Ohio class submarines. In fiscal year 2041, with the retirement of the final Ohio class submarine, this number is to increase to 11, and finally to 12 operationally available Columbia class submarines by fiscal year 2042. Columbia Class Technology Efforts The Columbia class program comprises several major lines of effort—the hull and supporting systems, the strategic weapons system, and the nuclear reactor-based propulsion plant—which are managed by different program offices, as depicted in figure 2. The Navy is introducing new technologies to improve capabilities where required while leveraging systems from existing submarine programs—the Virginia and Seawolf attack submarines and the Ohio class SSBNs—in order to ensure commonality with the submarine fleet and reduce development needs for the Columbia class to limit technical risk. For example, the program is re-using over 19,000 Virginia class standard parts including fittings, valves, and switches and leveraging the Navy’s Submarine Warfare Federated Tactical System program, which integrates more than 40 independent electronics systems into a common combat system for use by multiple program offices. 
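The fleet and missile-tube figures in this passage reduce to simple arithmetic; the sketch below restates them using exactly the values given in the text:

```python
# Missile-tube totals for the two classes, as stated in the text.
ohio = {"boats": 14, "tubes_each": 24}
columbia = {"boats": 12, "tubes_each": 16}

ohio_tubes = ohio["boats"] * ohio["tubes_each"]              # 336 tubes
columbia_tubes = columbia["boats"] * columbia["tubes_each"]  # 192 tubes
print(ohio_tubes, columbia_tubes)

# Availability: maintenance can take up to 4 Ohio class boats out of the
# patrol rotation at once, so 14 boats are needed to keep 10 available.
print(ohio["boats"] - 4)  # 10 operationally available SSBNs
```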
The Navy has identified several key technical efforts for the Columbia class program: (1) the Common Missile Compartment (CMC), (2) the Integrated Power System, (3) the Stern Area System, and (4) the propulsor. Other systems that we consider key technical efforts include the nuclear reactor and the coordinated stern, a system-of-systems that includes the propulsor and submarine maneuvering components. These areas are depicted in figure 3 and defined below.

Common Missile Compartment (CMC)

Since 2008, the United States and the United Kingdom (U.K.) have been jointly developing a common system to house the tubes that will carry submarine launched ballistic missiles. Columbia class SSBNs and U.K. SSBNs will carry the Trident II D-5 missile for the first portion of their respective operational lives; the U.S. missiles are armed with nuclear warheads, which are maintained by the Department of Energy (DOE). Figure 4 shows a notional example of the CMC. In addition to the missile tubes, the CMC also provides systems to support the missiles and the launch equipment, including power, cooling, gas venting, and launch hardware and software. The Navy's Strategic Systems Program is responsible for CMC development efforts.

Integrated Power System (IPS) and Nuclear Reactor

The IPS includes an electric drive system to propel the submarine through the water, unlike other current U.S. submarines, which use a mechanical drive system. IPS is powered by the nuclear reactor, which is a separate system. As shown in figure 5, with a nuclear electric drive system, steam from the nuclear reactor turns a turbine, creating electricity, which is then directly used to power electric motors. This is in contrast with a nuclear mechanical propulsion system, where steam from the nuclear reactor turns a turbine creating high-speed rotation; a reduction gear then slows this rotation to a speed suitable for use by the propulsor.
To provide power to the electric drive, the Columbia class nuclear propulsion plant relies on a life-of-the-ship reactor core—called S1B—that is planned to remain in service without refueling almost 10 years longer than current U.S. Navy nuclear reactors. The Virginia class also uses a life-of-the-ship reactor core, but the Columbia class reactor needs to be more powerful to drive the larger submarine, and needs to last longer to allow for the 42.5-year Columbia class service life versus 33 years for the Virginia class. By using a life-of-the-ship reactor, the Columbia class will not require a mid-life refueling. This will reduce the mid-life maintenance period from 27 months for the Ohio class to 16 months for the Columbia class. This reactor is being developed by the Naval Nuclear Propulsion Program (also known as Naval Reactors) and the Naval Nuclear Laboratory (operated by Bechtel Marine Propulsion Corporation).

Stern Area System (SAS)

SAS is a technical feature of the stern that comprises three subcomponents, the details of which are classified.

Propulsor/Coordinated Stern

The Columbia class will use a propulsor instead of a propeller to drive the submarine through the water. The design of the propulsor relies on several other technical features that form a system-of-systems, sometimes referred to as the coordinated stern. The coordinated stern is where the rudder and other control surfaces are mounted; these control surfaces are used for submarine maneuvering and are critical to submarine performance. The coordinated stern consists of interrelated technology elements, including the propulsor and advanced propulsor bearing, the stern control surface configuration, and the propulsor shaft and bearing. The propulsor shaft and bearing connect the propulsion system to the propulsor, transferring energy from the propulsion system to the propulsor to drive the submarine through the water.
The Navy plans to use a new design "X-stern" configuration instead of the cruciform stern used in other submarines. Figure 6 depicts the major components of the coordinated stern, omitting a depiction of the classified Stern Area System.

Acquisition Strategy for the Columbia Class

The Navy expects to require over $267 billion (then-year dollars) in total life-cycle costs for the Columbia class program. Figure 7 shows the breakdown of this amount between operations and support costs and acquisition costs, as well as the elements comprising the $128 billion in acquisition costs. The approximately $128 billion total acquisition cost includes funding the Navy expects it will need to research, develop, and build its Columbia class SSBN. Due to their size and complexity, submarines require funding for design, long-lead materials (such as nuclear propulsion plant components), and construction over many years. To accomplish these activities, the Navy awards contracts over several phases of design and construction. Figure 8 outlines major acquisition plans for the Columbia class. In 2014, Congress created a National Sea-based Deterrence Fund to provide DOD with greater discretion to fund the design, construction, and purchase of the Columbia class. Since then, Congress has provided the Navy with enhanced acquisition authorities to buy and construct submarines and certain key components early, in bulk, and continuously. The Columbia class program entered the Technology Development phase of the defense acquisition process in January 2011. The schedule to acquire the Columbia class was shifted in 2011 when the Navy decided to delay the start of construction of the lead submarine by 2 years—from 2019 to 2021—due to budget constraints. The first patrol date for the lead ship was also shifted from fiscal year 2029 to fiscal year 2031.
In January 2017, the Columbia class program achieved Milestone B—considered the official start of a DOD acquisition program—and moved into the Engineering and Manufacturing Development phase of the acquisition process. The program does not envision holding a Milestone C, which typically denotes a program's approval to enter the production and deployment phase as shown in figure 9, but does plan to have an OSD-level review prior to authorizing the construction of the lead ship. Shipbuilding programs have slightly different decision points than other DOD weapon systems, partly because of the timing of the Milestone B decision for ships. Milestone B for ship programs usually occurs after development of ship specifications and system diagrams is well under way. As part of the Columbia class Milestone B decision, OSD approved a Low Rate Initial Production quantity of 12 submarines, the total quantity expected for the class. According to the Navy, the program awarded a $5.1 billion detail design contract to Electric Boat in September 2017 for work including design completion, component and technology development, and prototyping efforts. Detail design is typically funded with Shipbuilding and Conversion, Navy funds (the Navy's procurement fund for buying ships) and represents a further refinement of the design of the ship and, ultimately, generation of the work instructions needed by the shipyard in advance of lead ship construction. The program was granted approval to begin early detail design work in January 2017. In shipbuilding, the design phase generally encompasses three activities: basic design, functional design, and detail design. These steps occur after the Navy sets the technical requirements for the ship.
At a high level:

- basic design serves to outline the steel structure of the ship;
- functional design routes distributive systems—such as electrical or piping systems—throughout the ship, and a three-dimensional (3D) computer-aided design model is often generated; and
- detail design completes the design work for even the lowest-level items and ultimately furnishes the work instructions for the shipyard workers to use in constructing the ship. During this phase, all aspects of the ship are defined, and two-dimensional paper or 3D electronic drawings (also called work instructions) are generated.

For the Columbia class program, the Navy defines design in two phases: arrangements, which program officials describe as a combination of basic and functional design; and disclosures, which they describe as a combination of detail design and generation of work instructions. Figure 10 shows the phases of design for the program as compared with typical surface ship terminology. Two shipbuilders—General Dynamics Electric Boat and Huntington Ingalls Industries Newport News—are responsible for designing and building nuclear submarines. For the Columbia class program, Electric Boat is the prime contractor for design and construction, with Newport News as a subcontractor. Similar to the Virginia class program, each shipyard will build modules of the submarine, but Electric Boat will be responsible for final delivery of the submarine to the Navy.

Technology Readiness Assessment

For more than a decade, our work on major acquisitions has shown that part of an effective management process is assessing how far a technology has matured and how it has been demonstrated, which indicates the technology's readiness to be integrated into a system and the degree of program risk. DOD acquisition instruction requires that programs complete a technology readiness assessment (TRA) at Milestone B.
A TRA is a systematic, evidence-based process that evaluates the maturity of hardware and software technologies critical to the performance of a larger system or the fulfillment of the key objectives of an acquisition program. TRAs do not eliminate technology risk but, when done well, can illuminate concerns and serve as the basis for realistic discussions on how to mitigate potential risks as programs move from the early stages of technology development, where resource requirements are relatively modest, to system development and beyond, where resource requirements are often substantial. In addition, TRAs help legislators, government officials, and the public hold government program managers accountable for achieving their technology performance goals. A main element of a TRA is the identification of critical technology elements (CTE) and assessment of the appropriate Technology Readiness Level (TRL), which is used to measure the readiness of technologies to be incorporated into a weapon or other type of system. TRLs range from 1 (least mature) to 9 (most mature), as shown in table 1. Current DOD guidance assigns the program manager responsibility for identifying CTEs. The program manager identifies possible technologies and then, in consultation with officials from the Assistant Secretary of Defense for Research and Engineering—ASD(R&E)—and with program executive office and component acquisition executive approval, identifies the subject matter experts needed to perform the TRA. For the Columbia class TRA, the expert team was composed of Navy program management and technical personnel. ASD(R&E) reviews the list of critical technologies provided by the program manager and recommends technologies to add or delete.
Ultimately, the program submits the TRA report to ASD(R&E), who independently assesses the maturity of the technologies. The ASD(R&E) prepares a memorandum based on the assessment that is transmitted to the milestone decision authority, along with the TRA report. The TRA is also an element of the Milestone B approval process. Section 2366b of title 10, U.S. Code, states that a major defense acquisition program may not receive Milestone B approval until the milestone decision authority has, among other things, certified that the CTEs have been demonstrated at TRL 6. A program may request a waiver from OSD if the maturity provision cannot be met. The statute requires that:

- Every waiver determination must be submitted in writing to the congressional defense committees within 30 days after the waiver request by the program is authorized.
- The milestone decision authority reviews the program not less often than annually until the milestone decision authority determines that the program satisfies all certification and determination components.

In addition, in 2015 Congress required program acquisition strategies to include a comprehensive approach to risk management, including the consideration of techniques such as technology demonstrations and decision points for disciplined transition of planned technologies into programs or the selection of alternative technologies. Recognizing the importance of the TRA to risk management, in 2016 GAO developed a Technology Readiness Assessment Guide. This guide has two purposes: (1) to describe generally accepted best practices for conducting effective evaluations of technology developed for systems or acquisition programs; and (2) to provide program managers, technology developers, and governance bodies with the tools they need to more effectively mature technology, determine its readiness, and manage and mitigate risk.
As noted above, we developed the guide by drawing heavily from DOD, DOE, and NASA best practices, terminology, examples, and credible resources, materials, and tools developed and applied by experts and organizations in order to capture the current thinking on technology readiness and maturity. In our guide, we identify criteria for a CTE, namely that it is a technology that is "new or novel, and needed for a system to meet its anticipated operational performance requirements; or that poses major cost, schedule, or performance risk during design or demonstration." According to our guide, re-used existing technologies can also become critical if they are being used in a different form, fit, or function—as is the case with the propulsor and coordinated stern.

Major Funding Commitments Planned, but Reporting on the Progress of Several Key Immature Technologies Is Not Required

Several key technical efforts remain immature as the Columbia class program moves into its design phase—contrary to best practices we have previously identified. These efforts include the integrated power system, nuclear reactor, propulsor/coordinated stern, stern area system, and common missile compartment. While the Navy made progress in some areas—such as prototyping efforts for the missile compartment and nuclear reactor—all of these systems continue to require development and testing to mature them to TRL 7, the point at which GAO's technology readiness guide considers a technology mature. Any challenges in development could put the program at risk of costing more, taking longer to develop, or jeopardizing the program's ability to meet its expected performance requirements. However, the Navy identified only two of the submarine's technologies as "critical" in the program's 2015 TRA, thereby underrepresenting the technology risk in the program. Underreporting technical risks can hinder Congress' and other decision makers' full understanding of the program's progress.
This is especially important because the Navy has already requested $1.6 billion for advanced procurement and recently awarded the detail design contract. Moreover, there is no requirement that the Navy report to Congress on its progress in developing and testing the technologies until after the program completes its production readiness review in May 2020—after the Navy requests another $8.7 billion in funding for the construction of the lead submarine.

Several Technologies Remain Immature as Detail Design Begins

Demonstrating Technology Maturity

Based on our work on best practices in weapon system acquisitions, we have previously recommended that programs fully mature technologies to TRL 7—versus TRL 6 as required by DOD—prior to passing Milestone B and entering the engineering and manufacturing development phase. TRL 7 represents a major step up from TRL 6, requiring demonstration of an actual system prototype in an operational environment, such as in an aircraft, in a vehicle, or in space. We have previously identified that demonstrating technologies in an operational environment provides a higher level of technology understanding and reduces risk prior to starting product development. DOD has historically disagreed with this recommended practice and has added that modeling and simulation should be considered appropriate in some cases in lieu of actual prototype testing. While the Navy has made progress in reducing technical risks in many areas, such as starting construction of the first CMC, the program (according to the Navy) awarded a detail design contract in September 2017 with several key technologies not yet at TRL 7. The nuclear reactor, IPS, propulsor and coordinated stern, and SAS all have potentially significant effects on design and construction of the Columbia class because they encompass much of the design and physical structure of the submarine.
Based on our analysis, we found that the IPS, SAS, and the propulsor and coordinated stern are not yet at TRL 7, as depicted in figure 11. The nuclear reactor and CMC are further along in prototyping work but still require testing in an operational environment to achieve TRL 7. If any of these systems do not develop as planned, the Navy and the shipyards could be required to complete some redesign, or, if risks manifest later, they may force costly workarounds or construction rework. In addition, these systems also enable many performance attributes, ranging from weapon launch to speed and maneuverability, so performance could be negatively affected. The status of these technologies is discussed in detail below.

Integrated Power System

According to officials from Naval Reactors, the permanent magnet motor-based electric drive system—a key component of IPS for the Columbia class—is at TRL 6, below the TRL 7 recommended by our work on best practices. Naval Reactors has yet to develop an IPS prototype that is near or at the planned operational system configuration (integrated and full-size) and has been tested in an operational environment. The Navy has experimented with electric drive technology on submarines in the past with two now-decommissioned nuclear-powered attack submarines, but these submarines used different motor technology than what is planned for the Columbia class, and thus are not representative. The T-AKE 1 Lewis and Clark class of dry-cargo ammunition ships and the DDG 1000 Zumwalt class destroyer are current U.S. Navy electric drive ships in operation, but these two systems are somewhat different than what is planned for the Columbia class, and neither is powered by a nuclear reactor. The Navy is currently developing the IPS and producing a number of pre-production prototypes.
Naval Reactors officials told us that they are confident that the IPS will meet requirements based on 20 years of development and testing of the underlying permanent magnet motor technology. They also noted that this technology is proven based on testing of the smaller-scale prototype motor to validate the main propulsion motor design. However, Naval Reactors is still developing and producing the system's major components. Testing of a full-scale prototype under full power, which we would consider evidence that the technology is mature, is not scheduled to occur until fiscal years 2018-2020. In a land-based test facility, the Navy plans to integrate all the IPS systems in a ship-representative layout. Successful completion of this testing is an important step in mitigating risk. In contrast, the DDG 1000 program tested its electric drive system at the land-based test facility at only one-half of the ship's power generation and electric propulsion system configuration, and as a result performance problems were not discovered until well after installation, when system testing on the ship was run at full power. Thus, the Navy's planned full-scale prototype testing for the Columbia class should prevent a similar experience, since it will test a full-sized and full-power system rather than a partial system.

Nuclear Reactor

According to officials from Naval Reactors, as a result of its statutory mandate, Naval Reactors follows a different development process than typical DOD programs and does not use documents typical of other Navy programs, such as an Integrated Master Schedule or a Test and Evaluation Master Plan. Instead, officials told us that Naval Reactors uses a rigorous process to assess, manage, and control technical risk during development and testing to manage its day-to-day technical efforts.
Based on descriptions provided by Naval Reactors officials, the Navy has been operating a Columbia-like experimental reactor in a land-based environment for many years to demonstrate some Columbia class submarine systems. Naval Reactors officials said that this experience gives them confidence that the Columbia class reactor will be delivered to the shipyard on time and will meet all requirements. Naval Reactors has design and development work remaining before it awards the contract for reactor core production in fiscal year 2019. Naval Reactors budget documentation shows that reactor design work is planned to be 65 percent complete in fiscal year 2018. While we recognize that it would not be realistic to expect Naval Reactors to test the reactor in a submarine to achieve TRL 7, a completed design would still be required to produce a final configuration to demonstrate technology maturity.

Propulsor/Coordinated Stern

Neither the propulsor nor other related components of the coordinated stern have been demonstrated through testing in a near or planned operational system configuration, a key element for achieving TRL 7. Navy officials told us that the propulsor effort is based on prior experience with propulsors and that it will resemble the Virginia class propulsor design. However, according to Navy documentation, the propulsor will be different in form, fit, and function than prior propulsors, and the final configuration has yet to be selected or tested. Specifically, the following components require additional design work and testing prior to demonstrating a representative prototype:

- Propulsor: The Navy is working with various partners to refine two different high-level propulsor designs. The program also faced a year delay in completing the first phase of design work, which subsequently delayed large-scale vehicle testing.
Further, the Navy still has to complete large-scale prototype testing of the different propulsor designs that are being evaluated for an eventual down-select to one vendor for production.

- Propulsor shaft: The system that connects the propulsor to the motors—which the Navy states is similar to shafting systems used on previous submarine classes but with different materials, size, and weight—is still in the concept and preliminary design phases. Main shaft design development and testing is being performed to select materials and inform design efforts.
- Advanced propulsor bearing: The Navy has yet to complete the preliminary design of the advanced propulsor bearing, with prototype testing in a full-scale configuration planned to begin in fiscal year 2019. Navy officials told us that they believe that the final design and material selections will exceed the reserved weight and size margins of the shafting or bearing system.
- X-stern: The final X-stern configuration has yet to be tested with a final design propulsor.

Our assessment of the propulsor and coordinated stern system design indicates that it is not yet mature enough to provide the basis for a prototype in final form, fit, and function—key elements of achieving TRL 7.

Stern Area System

The Navy identified the SAS as a TRL 4 at Milestone B. The preliminary design review for the SAS is planned for March 2018. This review establishes the baseline (hardware, software, human/support systems) and underlying architectures to ensure that the system has a reasonable expectation of satisfying requirements within the current budget and schedule. The critical design review—a technical review that ensures that a system can proceed into fabrication and demonstration and can meet stated performance requirements within cost, schedule, and risk—is not planned until March 2020. A TRL 4 represents a relatively low level of maturity compared to the eventual system.
At this low level of maturity, there are no assurances that the SAS will work as planned, which would likely result in the Columbia class not meeting certain requirements or in cost and schedule increases. The Navy plans to hold a critical design review for SAS in fiscal year 2019. The Navy has identified existing fleet technologies as backups for two SAS components, but officials noted that if these are used the submarine will not meet current requirements. According to the program office, there is no backup technology for one other SAS component, and, if that element—currently at TRL 4—does not develop as planned, it will be omitted, meaning that the program will lack that capability. Specific details of SAS are classified and cannot be included in this report.

Common Missile Compartment

The shipbuilders and the Navy have described the CMC as complex to build. The Navy and the two shipyards—with consultation from the United Kingdom, which will also leverage the CMC design on its new SSBN—have conducted risk-reducing prototyping work and are building a representative CMC to demonstrate production processes. In fact, Columbia class representative missile tubes will first be installed on a United Kingdom submarine, with installation scheduled for mid-2020. The Navy has plans for a robust land-based test procedure for both the missile tubes and the CMC as a system that will provide an operationally similar environment to a submarine; however, this testing has yet to start and will not conclude for several years.

The Navy Has Not Appropriately Identified Technologies as Critical, Which Underrepresents the Program's Technical Risk

While the Navy conducted the 2015 Columbia class TRA in accordance with a DOD-approved plan, it did not follow our identified best practices for identifying all critical technology elements (CTE), resulting in an underrepresentation of the technical risk facing the program. Specifically, the TRA identified only 2 CTEs: the SAS and a carbon dioxide removal system.
CTEs are required to be at TRL 6 at Milestone B (the official start of a program). For the Columbia class program, OSD approved Milestone B in January 2017. The Navy received a waiver at Milestone B for the SAS because the system was still immature, as discussed above. The carbon dioxide removal system has matured since the TRA, following demonstration on an operational submarine, and no longer requires active risk mitigation efforts. We compared the Navy's 2015 Columbia class TRA to criteria documented in GAO's TRA Guide and DOD's own guidance. In doing so, we found that 4 additional key technical efforts—the IPS, nuclear reactor, propulsor/coordinated stern, and CMC—meet the criteria for a CTE. Since the Navy did not identify these technologies in the TRA, it also did not assign them a TRL. Their exclusion is significant because the 2015 TRA represents a key independent review and technical risk assessment used by DOD to certify to Congress that the Columbia class program's technologies had been demonstrated in a relevant environment (TRL 6) at Milestone B. Because not all of the CTEs were identified, DOD and Congress lack an important oversight tool for assessing technology maturity and evaluating program risk. Further, this certification is the only required reporting on technology development prior to the Navy requesting authorization for construction of the lead ship. Some of the concerns that we identified are discussed in detail below.

Conflicting Criteria for Identifying Critical Technologies

The team responsible for preparing the 2015 Columbia class TRA did not identify all appropriate CTEs because it used a more restrictive definition of a CTE than that recommended in our best practices guide and DOD's 2011 TRA guide. Table 2 compares the criteria in the three sources.
As reflected in table 2, not only does the Navy's TRA definition require a technology to meet a number of criteria to be considered a CTE, it also has to be considered a technology development effort. According to the Columbia class program office, the TRA team based this definition on a 2011 OSD AT&L memorandum issued contemporaneously with the 2011 TRA guidance that states: "TRAs should focus only on technology maturity, as opposed to engineering and integration risk." However, our analysis of this memo found that it also directs programs to use DOD's TRA guidance and CTE definition, which are broader and more consistent with our definition of a CTE. The 2015 Columbia class TRA does not further define what constitutes a technology development effort; the Navy applied it as a criterion without defining what it actually meant. Moreover, the TRA does not provide any definition or criteria for what it considers engineering and integration risk. We determined that the Navy under-identified program technical risks because the Navy's criteria were more restrictive than GAO's CTE definition.

Several Critical Technologies Not Identified

We further assessed the specific technologies in the Columbia class program against our technology readiness criteria for a CTE, as shown in table 3. As shown in table 3, by applying the additional "technology development effort" criterion in the 2015 Columbia class TRA, the TRA team eliminated several systems from CTE consideration without criteria or a definition of what constitutes a technology development effort. Some of these systems were previously identified as CTEs in other recent Navy documentation. The TRA team did not identify the nuclear reactor as a CTE because this system is under the cognizance of Naval Reactors and not the Columbia class program office.
Officials from Naval Reactors told us that they do not conduct TRAs but rather follow a different and more iterative process to manage their technology development efforts. While the Navy did not identify all of the program's CTEs as compared with the TRA criteria in our guide, it is tracking these efforts to manage technology risks. For example, 3 of the 4 CTEs we identified are also identified in Navy documents as "key technical efforts" with active risk mitigation plans. We will continue to track the progress of these efforts in our future work.

Required Report to Congress on Technology Efforts Will Not Occur Until after Lead Ship Authorization

As the Columbia class program moves into its detail design and construction phase, it will be more than 2 years before the next requirement for a formal DOD report to Congress on the progress of the technology efforts. This will occur at some point after the program's Production Readiness Review is completed in May 2020. In the meantime, the Navy plans to request another $8.7 billion (in addition to the $1.6 billion already requested) for lead ship construction. If a typical budget schedule is followed, this request will come before Congress in February 2020. The Navy plans to begin construction of the lead submarine starting in fiscal year 2021. Congress will be asked to approve lead ship construction absent key information on the maturity of the critical technologies that, at present, are not at the maturity levels that would provide assurance they will work as intended. Without additional updates on the progress of technology maturity between now and 2020, we believe Congress will not have the information it needs to evaluate technical risk in advance of the Navy's requests for considerable increases in program funding. As previously discussed, there is currently no DOD requirement to submit such reports to congressional oversight committees.
The Navy Plans to Leverage Completed Design to Mitigate Aggressive Schedule, but Ongoing Technology Development Is Likely to Undermine This Goal

The Navy is prioritizing design completion before starting construction, a good practice in accordance with our work on best practices because it helps reduce cost and schedule challenges in construction. However, since some of the key technologies are not fully matured, detail design work is proceeding with notional or placeholder data representing these key systems. As a result, the design will likely remain immature once construction starts in fiscal year 2021. We have previously reported that concurrency of technology development and design increases the risk of design rework—that is, having to modify design drawings to accommodate changes as technologies change size, shape, or weight as they mature—and can result in negative cost and schedule impacts. Further, the Navy faces an aggressive production schedule in order to deliver the lead submarine by fiscal year 2031, which will be required to prevent a gap in U.S. nuclear deterrent capabilities. According to our analysis of previous submarine program schedules, the Columbia class program's schedule is aggressive in its expected short duration to build the lead submarine. The program office intends to mitigate this schedule challenge, in part, by starting construction of portions of the submarine earlier than initially planned. If this early construction occurs and the Navy does not alter design plans, construction of some parts of the lead submarine could outpace the finalized design of other components, which increases the risk of rework during construction and could further delay completion.
Consistent with Best Practices, Program Has Prioritized Design Completion, but Immature Technologies May Compromise Design Maturity The Columbia class program is prioritizing a high level of design completion prior to the start of construction of the lead submarine of the class. The program plans to complete 100 percent of design arrangements and 83 percent of design disclosures prior to the start of construction of the lead submarine. In our 2009 report on best practices in shipbuilding, we identified design maturity as an important step in reducing cost and schedule risk. As such, we recommended that the design be stabilized through completion of basic and functional design and 3D product modeling prior to the start of construction for a new ship. Because, as mentioned previously, the Navy defined design arrangements on the Columbia class program as being equivalent to basic and functional design, having 100 percent of the arrangements completed prior to the start of Columbia class construction would meet the intent of our prior recommendation. Further, our analysis found that the Columbia class program's planned level of design completion prior to starting construction is much higher than that of most recent Navy shipbuilding programs. For example, the Virginia class attack submarine program started construction with only 43 percent of the design complete compared with a planned 83 percent completion for the Columbia class. The Columbia class program also plans to have a 52-week buffer between the completion of design for an area of the submarine and the start of construction on that area, which is intended to allow time to address any challenges that may arise and thus minimize schedule delays. Additionally, the Navy plans to have all components fully developed 8 months before they are required in the shipyard, which will provide some additional schedule buffer to address challenges before the components are actually needed for construction.
To facilitate design completion, the Navy made a commitment at the start of the program to set realistic and reasonable requirements and to keep those requirements stable throughout the program. This approach is also in keeping with our previously identified best practices, which highlight the importance of demonstrating balance among program requirements, technology demands, and cost considerations. The Columbia class program has not had any significant requirements changes since DOD's Joint Requirements Oversight Council validated the Capability Development Document in 2015. Setting realistic and reasonable requirements also permitted the Navy and shipyards to reuse some design elements for components of the submarine that are similar in design and function to the Virginia class instead of requiring new design work. Similarly, the program has worked to keep stable ship specifications to minimize design disruptions. The technical specifications for the ship have been set since 2014, and the program manager maintains personal visibility and accountability over any proposed deviations or changes to the specifications. According to the program manager, to date there have been minimal changes made to the technical baseline. These steps help to minimize design rework that can be caused by changing requirements, as was seen on the Littoral Combat Ship program, and that can lead to cost increases or schedule delays. The program has also conducted some prototyping efforts— including building representative portions of the submarine to demonstrate that its design tool can send the correct information to the shop floor to build the ship—and has plans for more. However, based on our analysis of the program's current technology development plan and status, it is unlikely that the Navy's planned 83 percent of design disclosures will be finalized at the time construction begins for the lead ship in 2021.
Similar to many shipbuilding programs, the Columbia class program plans to continue to mature technologies into their final form while detail design is underway. As we have previously reported, to offset this risk, shipbuilding programs, including the Columbia class, often include design "reservations" for space, weight, power, cooling, and other key attributes to reserve a footprint for components. As contractors or government employees develop and refine technologies or systems, they provide vendor furnished information (VFI) or government furnished information (GFI) to the shipyards to update the design. Completion of the detail design of the submarine—and subsequent achievement of design stability to support a properly sequenced construction phase—requires shipbuilders to have final information on the form and fit of each system that will be installed on the ship, including the system's weight and its demand for power, cooling, and other supporting elements. As development proceeds on a new technology, initial assumptions about size, shape, weight, and power and cooling requirements can change, potentially significantly. These changes in VFI or GFI—if not resolved early in the design phase—can introduce considerable volatility to the design process for a lead ship. As such, in our May 2009 report, we recommended that, to attain the level of knowledge needed to retire design risk and reduce construction disruptions, complete—versus notional—VFI or GFI must be incorporated for the design to be truly stable. DOD concurred with this recommendation. We have previously reported that other Navy programs have run into difficulties, including out-of-sequence or more costly construction work, when space, weight, power, and cooling reservations are based on immature or ill-defined technologies or components that have changed in size, weight, or other attributes when they are finalized. Ramifications from such changes can ripple through much of the ship design.
For example, we reported in 2009 that during construction of the Seawolf-class attack submarine, the AN/BSY-2 combat system did not fit into the space and weight reservations that the Navy had allocated within the submarine's design. As a result, a portion of the submarine had to be redesigned at additional cost. However, the Navy has entered the detail design phase for the Columbia class with incomplete technical data on several key components that are either significant in size relative to the submarine or spread throughout a number of spaces of the submarine. These components include IPS, the nuclear reactor, the propulsor and coordinated stern, and SAS. This situation is problematic because even if the Columbia class design is 83 percent complete, a design that contains many reservations for systems that are not fully developed will remain immature and subject to change. Thus, the 83 percent completion metric may be somewhat meaningless since elements of the design are uncertain and could change because of the incomplete technology development efforts. As shown in figure 12, the Columbia class program has entered the detail design phase with a number of technologies still in development or design finalization, which means that the VFI/GFI for these systems are not yet final. This figure also depicts our recommended knowledge points for shipbuilding programs, which align with contract award for detail design and the start of lead ship construction. The Navy plans to continue technology development while executing detail design; this concurrency could be further exacerbated, and potentially extend through construction, if the Navy pursues its plans to start construction of some components early.
For example, the Navy and the shipyards are currently designing the stern of the submarine—with 95 percent of stern arrangements planned to be complete by December 2017—but the final configuration of the propulsor has yet to be determined. As currently planned, the Navy will not complete prototype testing until the third quarter of fiscal year 2020, and development and design of the SAS is planned to continue until the end of fiscal year 2021—almost a year after the start of lead ship construction. The Navy believes it is managing this stern risk by controlling the interfaces through an Interface Control Document that identifies set design constraints. According to Navy officials, all aspects of the propulsor design that could impact the overall ship design, such as size, weight, and arrangements of major sub-assemblies of the propulsor, are already finalized, and the systems are currently tracking to the reservation allowances. However, until a final representative prototype is tested as a system, the possibility of design changes and broader design impacts remains. Although the Navy plans to have arrangements for the stern 100 percent complete at construction start, the VFI or GFI for these important systems will not be finalized until after these systems finish development. Additionally, the electric drive of IPS has already experienced manufacturing problems that could compromise its ability to meet its schedule if further challenges arise. According to Naval Reactors officials, a manufacturing defect was identified in February 2017 that affected the assembly of the first production-representative propulsion motor intended for installation in the land-based test facility to prove out the integration of all the electric drive components. The officials explained that the vendor responsible for the motor is in the process of repairing the defect—a process that will take up to 9 months to complete.
As a result, Naval Reactors is now executing a schedule recovery plan to regain some schedule margin. Part of this plan involves using a smaller scale prototype motor in initial land-based test facility testing to prove out system integration. This plan means that initial full-scale system testing will be conducted with a different motor, albeit one with the same electromagnetic properties. Further, this delay will leave less margin to account for any unexpected challenges encountered in developmental testing. Aggressive Construction Schedule for Lead Submarine Unprecedented The Columbia class program has an aggressive schedule to deliver the lead submarine in time to begin patrols in fiscal year 2031. The Navy plans for 84 months, or 7 years, to build the lead submarine. While imperatives associated with our nation's nuclear deterrent are driving this planned schedule, our analysis shows that it is significantly shorter than what the Navy has achieved on any recent lead submarine construction effort—including during high levels of Cold War submarine production. The Navy expects that the Columbia class will be built in the same timeframe as was planned for the lead Virginia class submarine—a submarine roughly two-thirds the size of the Columbia class that requires fewer estimated construction man-hours. Figure 13 shows the estimated and actual timeframes for constructing prior lead submarines as compared with the 84-month estimate for the Columbia class lead submarine. Further, there are industrial base implications to this aggressive schedule. The Navy and the two shipyards will be trying to attain this level of unprecedented schedule performance with the lead submarine while the shipbuilders are also starting work on the first few Virginia class submarines built in a new Block V configuration. Virginia class program officials told us that the ramp-up to building two attack submarines per year has resulted in recent cost and schedule growth at the shipyards.
The addition of Block V and the Columbia class will likely create additional schedule pressures, given the increase in workload required to build those submarines compared with non-Block V submarines. In an effort to mitigate the risks associated with its aggressive delivery schedule, the Navy is planning to start construction of a number of parts of the structure of the lead submarine years earlier than the date of lead ship authorization in fiscal year 2021. This plan, called advanced construction, would use expanded acquisition authorities provided by Congress in the National Sea-Based Deterrence Fund. The Navy and its shipbuilders intend to start construction as early as 2019 on numerous areas of the submarine's structure. Specifically, the Navy and shipyards plan to start building the stern, bow, and missile command and control module as early as 6 months before fiscal year 2021, citing the disruptive effects of delays to these three "super-modules," which are also critical to ensuring an on-time delivery. These super-modules also comprise vital areas of the submarine, including the CMC, IPS, and the coordinated stern. The shipyards have proposed moving 500,000-600,000 labor hours of construction work to before ship authorization. Figure 14 shows the super-modules of the submarine that the Navy plans to start early. However, the Navy has yet to finalize or fund the approach for this type of early work. Starting construction early for the lead and follow submarines provides schedule relief to the Navy and shipbuilders, but these plans may further exacerbate the existing overlap of technology development, design, and construction discussed above. Moving construction earlier could also challenge the Navy's goal to have all components developed 7 months before they are required in the shipyard.
Further, the shipbuilders acknowledge that early construction plans will result in increased overlap between various stages of design activities in certain areas, including the bow and stern. If Congress funds the Navy's planned advanced construction work, this incomplete VFI/GFI situation will likely be worsened and could disrupt the optimal build strategy. We have previously reported that starting construction of the lead ship of a class without a mature, stable design has been a major source of cost growth and schedule delays in Navy programs. We have also reported that when a schedule is set that cannot accommodate program scope, delivering an initial capability is often delayed and higher costs are incurred because problems typically occur that cannot be resolved within compressed, optimistic schedules. The Navy's Columbia class plans put the program at risk of cost and schedule growth. However, its options for reducing concurrency are, at this point, limited due to the schedule imperatives driven by the lead ship patrol deadline. Columbia Class Is Not Funded Adequately to Address Program Risks Our analysis determined that it is more likely than not that the Columbia class program will exceed the Navy's $128 billion (then-year dollars) estimate of total acquisition cost, the level to which the program will be funded. Specifically, the program's 2017 Milestone B cost estimates are optimistic because they do not account for a sufficient amount of program risk due to ongoing technology development, as well as the likely costs to design and construct the submarines. In addition, the Navy has budgeted the program to a confidence level lower than what experts recommend, with a particularly optimistic estimate for the lead ship. While there may be situations when this would be appropriate, this is not the case for the Columbia class program due to the technical and design risks that we identified above.
As a result, program costs will more likely than not exceed requested funding, particularly for lead ship construction. Due to the significant level of funding required for this program, even a small percentage of cost growth could have far-reaching consequences on the Navy's long-range plans to fund construction of its future fleet. For this review we conducted an initial analysis of the Navy's cost estimate but did not assess whether it was conducted in accordance with all of the best practices identified in our cost estimating guide. We plan to more fully assess the Navy's life-cycle cost estimate for the entire Columbia class, including the program's risk analyses, in future work. Confidence Levels and the Navy's Estimate A confidence level is stated as a percentage depicting the probability that the program's cost will actually be at some value or lower, calculated after conducting a risk analysis to identify and quantify program risks and determine the effects of these risks on its point estimates. From early on, the Navy recognized the need to control costs for the Columbia class. In fact, the program's cost estimates have decreased significantly since the program's inception due to Navy decisions early in the program to trade off some capabilities and the incorporation of updated actual cost data from the continued procurement of Virginia class submarines. At Milestone B, OSD determined that Columbia class procurement costs had fallen almost 40 percent since the program's original estimate. However, while the Navy did conduct a risk analysis for its recent Columbia class cost estimates, the confidence level of the Navy's estimate at Milestone B for acquisition of the entire class is 45 percent. This means that it is more likely than not that actual costs to research, develop, and buy the submarines will exceed the Navy's $128 billion estimate.
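The confidence-level concept defined above can be illustrated with a simple Monte Carlo cost-risk simulation. The sketch below is illustrative only: the cost elements, their values, and the triangular distributions are entirely hypothetical and are not the Navy's actual risk model. It shows only how a budget set at a given percentile of a simulated cost distribution corresponds to a confidence level, and why a budget set at 45 percent confidence is more likely than not to be exceeded.

```python
import random

random.seed(1)

# Hypothetical cost-risk model: total cost is the sum of independent, skewed
# cost elements. Triangular (low, most-likely, high) distributions are a common
# simplification in cost-risk analysis; values below are illustrative, in $B.
elements = [
    (10.0, 12.0, 18.0),  # e.g., design and engineering
    (40.0, 46.0, 60.0),  # e.g., construction labor
    (30.0, 34.0, 45.0),  # e.g., materials and government-furnished equipment
    (25.0, 28.0, 40.0),  # e.g., remaining program costs
]

def simulate_total():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in elements)

trials = sorted(simulate_total() for _ in range(20000))

def cost_at_confidence(pct):
    """Cost such that pct percent of simulated outcomes fall at or below it."""
    return trials[int(pct / 100 * len(trials)) - 1]

def confidence_of(point_estimate):
    """Probability (percent) that simulated cost is at or below the estimate."""
    return 100 * sum(1 for t in trials if t <= point_estimate) / len(trials)

# Budgeting to a higher confidence level always requires a higher point
# estimate, which is why a 45 percent confidence budget is more likely
# than not to be overrun.
assert cost_at_confidence(45) < cost_at_confidence(50) < cost_at_confidence(90)
```

Raising the budget from the 45th to a higher percentile of the simulated distribution buys down overrun risk, which is the rationale behind budgeting to the risk-adjusted mean.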
This situation is particularly apparent at this point with regard to costs to design the class and build the lead submarine. Any difficulties in ongoing technology development efforts would likely worsen the picture. At Milestone B, the Navy's point estimate to develop the technologies, design the class, and build the lead Columbia was at a 43 percent confidence level. Experts agree that programs should be budgeted to at least the 50 percent confidence level, but budgeting to a higher level (e.g., 70 to 80 percent, or the mean) is a common practice to cover increased costs resulting from unexpected design complexity and technology uncertainty, among other things. Navy cost guidance recommends using the "risk adjusted mean" for the cost of the program, which usually lies between 50 and 60 percent. If the Navy budgeted to an estimate at a higher confidence level like the risk adjusted mean, its Milestone B point estimates—meaning the selected estimates of cost—would be higher, reducing the probability of overruns occurring. According to Navy cost analysts, the program's total acquisition cost, which the Navy estimated at Milestone B at $128 billion (then-year dollars), would exceed $131 billion (then-year dollars) at 50 percent confidence, which is the bottom of the risk adjusted mean confidence range. Cost Growth Potential Based on the Navy's Estimate Even if the Navy budgeted to the 90 percent confidence level—a "worst-case" scenario where significant programmatic challenges are realized and the probability of cost overruns is low—Columbia class lead ship costs would not be dissimilar to cost outcomes on other lead ship programs. We have observed in prior work that cost growth for recent lead ships across the Navy's shipbuilding portfolio is 28 percent on average.
For example, the Navy’s lead Virginia class submarines (SSN 774 and SSN 775)—the most similar class to Columbia in terms of technology and component development as well as aspects of its design and build plans—experienced 15 and 24 percent budget growth respectively, with average cost growth of 28 percent for the three most recent lead submarines (see figure 15). The 28 percent cost growth we have observed is slightly more than the 22 percent cost increase between the Navy’s point estimate and the 90 percent confidence level, meaning that even if the Navy budgeted the program to the 90 percent confidence level there would still be historical shipbuilding precedence for further cost growth. In particular, if costs to build the lead Columbia class submarine grow similar to the lead Seawolf and Virginia class submarines, the cost to construct the submarine would exceed the Navy’s Milestone B estimate by more than $2.5 billion. This would represent a total approaching $12 billion (then-year dollars) versus the current estimate of $9.2 billion for the lead submarine. Due to the magnitude of the Columbia class program’s expected cost, any cost growth, including for design and construction of the lead ship could impact the availability of funds for other Navy priorities. The Congressional Budget Office (CBO) and CAPE also analyzed Columbia class program costs. CBO predicted higher costs than the Navy estimate. In its 2017 assessment of the Navy’s long-term shipbuilding plans, CBO concluded that the Navy underestimated the cost of the total Columbia class procurement by $8 billion (2017 dollars). CAPE estimated a lower cost, but also identified areas where reliable cost data were unavailable. The independent cost estimate prepared by CAPE in support of the program’s Milestone B reflects a 3 percent lower total program life-cycle cost (2017 dollars) than the Navy estimate. 
In setting the program baseline in January 2017, DOD pragmatically opted to use the Navy's higher estimate ($7.3 billion) instead of CAPE's $7 billion estimate for the average unit cost to procure a Columbia class submarine (calendar year 2017 dollars). According to CAPE officials, this difference in estimates is largely due to CAPE incorporating more recent Virginia class actual cost data into its estimate than the Navy. However, CAPE also identified a lack of reliable cost data on some contractor-furnished materials and government furnished equipment (GFE) for the Columbia class program, which limited the quality of the estimate. GFE comprises critical areas of the Columbia class submarine, including the strategic weapon system managed by Strategic Systems Program and the IPS developed by Naval Reactors. Conclusions The Columbia class submarine will be a significant DOD acquisition for the next several decades due to its cost and its importance in guaranteeing the nation's strategic deterrence. Failure to meet the aggressive patrol dates required of the program could challenge the Navy in effectively meeting strategic patrol requirements, and not delivering the required level of performance could compromise the Navy's plan to operate this class through 2080. Given the risks facing the program and the significance of potential delays or cost growth, we believe this program warrants increased attention to and scrutiny of what we consider to be its critical technologies (inclusive of the program's stated technology development efforts), several of which remain immature. Specifically, technologies such as IPS and the propulsor and coordinated stern demand more specific congressional visibility to ensure they stay on track. These areas also warrant specific assurances from the Navy that they will be delivered on time and will perform as required.
This assurance could augment the Milestone B certifications, which were predicated on a TRA that was not representative of the technical risk facing the program. Further, such information would help bolster confidence for Congress that the program's technologies will be matured in time to support construction, which is especially important as the Navy pursues plans to start construction of the lead ship early. Without a requirement for the Navy to provide these assurances on a periodic basis, Congress will not have the information until after the Navy has asked for another $8.7 billion in funding for lead ship construction. It is also important for Congress to be informed of the impact on performance requirements if technologies are delayed or fail to mature as planned. The Columbia class program is also facing risks from its aggressive and concurrent schedule as a result of the continued and pressing need for it to meet the Navy's nuclear deterrent requirements, as the legacy submarine fleet cannot be life-extended any longer. Typically, addressing risks of such concurrency is accomplished by, among other things, delaying milestones until more knowledge is obtained. Doing so helps reduce concurrency and bring more stability to the design before construction activities begin. Recognizing the mission imperatives that are driving the Columbia class's aggressive and concurrent schedule, it is unlikely that the Navy will have the ability to slow the pace of the program in order to reduce cost and schedule risk. Therefore, additional reporting to decisionmakers on the status of key technologies could help ensure they fully understand the risks of such an approach and account for such risks when making programmatic decisions.
Matter for Congressional Consideration In our draft report we had suggested a matter for congressional consideration related to additional Navy reporting on the Columbia class technologies, but we have since removed it because the recently passed National Defense Authorization Act (NDAA) for 2018 includes Navy reporting requirements for the Columbia class program that achieve the intent of our matter. Agency Comments and Our Evaluation We provided a draft of this product to DOD for comment. The Navy provided technical comments earlier in the review process, which we incorporated where appropriate. In its written comments, reproduced in appendix III, DOD's position was that there is not a need for additional congressional reporting on the Columbia class program because there are new reporting requirements in the conference report accompanying the NDAA for fiscal year 2018. We agree that the reporting requirements in section 231 of the NDAA for Fiscal Year 2018 meet the intent of our matter for congressional consideration. These new reporting requirements for the Navy became law on December 12, 2017, after we sent the report to DOD and appropriate congressional committees. Accordingly, we have removed our matter from this report. In addition, DOD also disagreed with our characterization of technical risks facing the Columbia class program and its TRA. Specifically, DOD stated that the program is meeting statutory and DOD maturity standards and met or exceeded DOD technology maturity requirements.
DOD also stated that the program’s TRA was conducted in accordance with a 2011 DOD policy memo that directed TRA’s should focus only on “technology maturity, as opposed to engineering and integration risk.” However, neither this policy memo nor the Columbia class TRA define what constitutes engineering and integration risk and it is unclear what criteria the Navy used in making these determinations. Our report acknowledges that DOD followed statutory and DOD requirements for the two technologies that the Navy identified as critical technologies in the program’s TRA. However, our report also identifies several other technologies that we believe should have also been subject to these requirements had the Navy conducted a TRA in accordance with our identified best practices. By applying our identified best practices, we believe these efforts would have been considered critical technologies and would have been subject to an evaluation of technology maturity levels, additional reporting requirements and, potentially, identification of additional risk mitigation efforts. DOD also disagreed with our criteria for identifying a critical technology and assessing maturity. DOD asserted that applying our criteria would result in nearly every system on a submarine becoming a critical technology. We disagree. Our criteria are consistent with DOD’s own criteria for identifying critical technologies, and only focus on those that are most significant to a program. Given the program’s cost and schedule risks and operational imperatives, we believe that appropriately identifying the critical technologies is an important step in acknowledging and mitigating program risk. DOD also stated that achieving a TRL 7 by milestone B would be unrealistic because of the difficulties in testing some systems in an operational environment prior to launching the submarine. 
We agree that in some cases testing at sea is not practical and testing in a relevant environment may be sufficient to demonstrate maturity. However, achieving a TRL 7 is not only based on the test environment; it is also based on demonstrating a prototype near or at the planned operational system configuration, which requires a design resembling the final configuration. The Columbia class program has yet to complete this type of prototype for the key systems we identified. As we stated in the report, some systems, like the propulsor, do not yet have a final design. While we do not expect the Navy to test every critical technology on a submarine at sea to demonstrate maturity, we would expect testing of a prototype near or at the planned operational system configuration in a relevant environment. For example, prototype testing of the electric drive at a land-based test facility would demonstrate maturity, but such testing is not planned for several years—well after the submarine's design, and potentially its construction, are underway. While such concurrency introduces cost, schedule, and technical risk, we have previously reported that programs may choose to move forward with these risks, but should acknowledge the risks and appropriately resource the program to address them should they materialize. As we stated in the report, this is not the case for the Columbia class program: some risks have not been properly identified and the cost estimate does not fully account for the technical and schedule risks facing the program. DOD also provided a table of Columbia class practices, reprinted with our comments in appendix III. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Navy; and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov.
If you or your staff members have any questions regarding this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) the status of key Columbia class technologies and congressional reporting requirements on this status, (2) risks, if any, with the Navy's planned approach for design and construction, and (3) whether expected funding levels for the Columbia class will be adequate moving forward. To assess the status of key Columbia class technologies, we reviewed the Navy's technology development plan, the planned technical approach, and the status of key prototyping efforts for all of the systems that comprise the program, focusing on the technology readiness level of the major components that are key to enabling program success and that are key cost and schedule drivers. We also compared technology development efforts with program requirements and with GAO's identified best practices for shipbuilding programs. We also evaluated the program's Technology Readiness Assessment, which included applying the GAO-developed criteria documented in GAO's Technology Assessment Guide. GAO's guide draws heavily from Department of Defense (DOD), Department of Energy (DOE), and National Aeronautics and Space Administration (NASA) best practices, and establishes a methodology based on those best practices that can be used across the federal government for evaluating technology maturity, particularly as it relates to determining a program or project's readiness to move past key decision points that typically coincide with major commitments of resources.
We also interviewed relevant officials from the Navy's Columbia class submarine program office; the Office of the Chief of Naval Operations-Undersea Warfare; Naval Sea Systems Command Naval Nuclear Propulsion Program; Navy Strategic Systems Program; Naval Undersea Warfare Center Newport; Naval Surface Warfare Center Carderock Division; Office of the Secretary of Defense (OSD) Director, Operational Test and Evaluation; OSD Acquisition, Technology, and Logistics (AT&L); OSD Cost Analysis and Program Evaluation (CAPE); and the prime contractor shipyard General Dynamics Electric Boat and its subcontractor Huntington Ingalls Industries Newport News Shipbuilding. To determine the congressional reporting requirements on this status, we reviewed relevant DOD acquisition instructions and statute. To assess the risks, if any, with the Navy's planned approach for design and construction, we compared the status of design maturity with Navy and shipyard plans to identify any delays, and compared planned design maturity and schedule projections with those of prior U.S. submarine efforts (the Virginia, Seawolf, and Ohio classes) to assess the realism of Columbia class estimates. We also interviewed officials from, and analyzed available documentation provided by, Naval Reactors (NAVSEA 08) related to nuclear reactor and Integrated Power System status. We also interviewed relevant officials from the Navy's Columbia class submarine program office; Naval Sea Systems Command Naval Nuclear Propulsion Program; Naval Surface Warfare Center Carderock Division; and the prime contractor shipyard General Dynamics Electric Boat and its subcontractor Huntington Ingalls Industries Newport News Shipbuilding. We also assessed the Navy's acquisition strategy and the Integrated Enterprise Plan that tracks shipyard workload across the Columbia and Virginia class submarines and the Ford class aircraft carrier to identify any factors related to potential schedule challenges.
To assess whether expected funding levels for the Columbia class will be adequate moving forward, we compared program cost estimates prepared at Milestone B with historical data on lead ship and submarine estimates and actuals to assess the realism of these requirements. We also analyzed program documentation to identify risk factors, if any, related to cost projections, including the program’s Independent Cost Estimate created by the OSD Cost Analysis and Program Evaluation, and the Navy’s Service Cost Position and Program Life Cycle Cost Estimate. This evaluation leverages, among other things, prior GAO work on cost estimating and the Navy’s acquisition of lead ships. We also interviewed relevant officials from the Navy’s Columbia class submarine program office; the Office of the Chief of Naval Operations-Undersea Warfare; Naval Sea Systems Command Naval Nuclear Propulsion Program; Naval Undersea Warfare Center; Naval Surface Warfare Center Carderock Division; OSD Director Operational Test and Evaluation; OSD AT&L; CAPE; and the prime contractor shipyard General Dynamics Electric Boat and its subcontractor Huntington Ingalls Industries Newport News Shipbuilding.
We conducted this performance audit from May 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Department of Defense Technology Readiness Levels
TRL 1: Lowest level of technology readiness. Scientific research begins to be translated into applied research and development (R&D). Examples might include paper studies of a technology’s basic properties.
TRL 2: Invention begins. Once basic principles are observed, practical applications can be invented. Applications are speculative, and there may be no proof or detailed analysis to support the assumptions. Examples are limited to analytic studies.
TRL 3: Active R&D is initiated. This includes analytical studies and laboratory studies to physically validate the analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.
TRL 4: Basic technological components are integrated to establish that they will work together. This is relatively “low fidelity” compared with the eventual system. Examples include integration of “ad hoc” hardware in the laboratory.
TRL 5: Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so they can be tested in a simulated environment. Examples include “high-fidelity” laboratory integration of components.
TRL 6: A representative model or prototype system, well beyond that of TRL 5, is tested in a relevant environment. Represents a major step up in a technology’s demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory environment or in a simulated operational environment.
TRL 7: Prototype near or at planned operational system. Represents a major step up from TRL 6 by requiring the demonstration of an actual system prototype in an operational environment (e.g., in an aircraft, in a vehicle, or in space).
TRL 8: Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine whether it meets design specifications.
TRL 9: Actual system proven through successful mission operations. Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. Examples include using the system under operational conditions.
Appendix III: Comments from the Department of Defense
GAO Comments
DOD also provided the above table of Columbia class practices. These practices align with GAO’s identified best practices in shipbuilding: stable requirements, design maturity at construction start, and manufacturing readiness. However, we have several observations on DOD’s statements:
Stable Operational and Technical Requirements: We have previously identified maintaining stable requirements as a best practice; in this report we note that the Navy has provided a stable basis for the Columbia class program by adhering to this practice.
High Design Maturity at Construction Start: While we give the program credit for striving for a high level of design maturity at construction start for the Columbia class, we identify in this report concerns about the Navy’s ability to stabilize design drawings while technology development continues. As we point out in this report, we are concerned with the maturity of the Columbia class design because of the unknowns with key technologies. In this table, the Department identifies that the program is leveraging proven Virginia class technology for the propulsor, which it identifies as TRL 9. Although this technology is indeed mature in the context of Virginia class submarines (i.e., not new or novel), it is nevertheless novel in the context of Columbia class submarines and should thus be considered a critical technology element (CTE) to be evaluated and risk managed.
As such, we dispute the Navy’s assertion that the Virginia class propulsor is TRL 9 in the context of the Columbia class program, since the Navy has yet to complete a design for the propulsor or to test a production-representative prototype, which would achieve a TRL 6 or 7 (depending on the test environment).
Manufacturing and Construction Readiness: We have not conducted adequate work in this area to comment on DOD’s statements of manufacturing and construction readiness; we plan to address this in future work.
Aggressive Action to Reduce Costs: While the Navy has made significant progress in reducing potential costs for the Columbia class program, we believe that the risks identified in this report, coupled with the optimistic cost estimate and aggressive schedule, could result in cost growth that reduces the actual savings identified by the program.
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, the following staff members made key contributions to this report: Diana Moldafsky, Assistant Director; C. James Madar; Jacob Leon Beier; Brian Bothwell; Herb Bowsher; Kurt Gurka; Stephanie Gustafson; Tim Persons; and Robin Wilson.
Why GAO Did This Study
The Navy's Columbia class ballistic missile submarines will replace the 14 Ohio class submarines that currently provide the sea-based leg of the U.S. nuclear triad and are slated to begin retiring in 2027. The first Columbia must begin patrols in 2031 to prevent a gap in deterrent capabilities; the class will ultimately carry up to 70 percent of the nation's strategic nuclear capability. The program is a top Navy priority with an expected cost of $267 billion over its life cycle, including $128 billion to research, develop, and buy 12 submarines. House Report 114-102 included a provision for GAO to examine the Columbia class program. Among other things, this review examines (1) the status of key Columbia class technologies and (2) potential risks with the Navy's planned approach for design and construction. GAO reviewed the Navy's technology readiness assessment, technology development plan, and the status of key prototyping efforts, and compared these efforts with GAO's identified best practices for shipbuilding programs and technology readiness assessments. GAO also assessed the status of design maturity and the Navy's acquisition strategy, and interviewed relevant officials.
What GAO Found
Additional development and testing are required to demonstrate the maturity of several Columbia class submarine technologies that are critical to performance, including the Integrated Power System, nuclear reactor, common missile compartment, and propulsor and related coordinated stern technologies (see figure). As a result, it is unknown at this point whether they will work as expected, be delayed, or cost more than planned. Any unexpected delays could postpone the deployment of the lead submarine past the 2031 deadline. Further, the Navy underrepresented the program's technology risks in its 2015 Technology Readiness Assessment (TRA) when it did not identify these technologies as critical.
Development of these technologies is key to meeting cost, schedule, and performance requirements. A reliable TRA serves as the basis for realistic discussions on how to mitigate risks as programs move forward from the early stages of technology development. Not identifying these technologies as critical means Congress may not have had the full picture of the technology risks and their potential effect on cost, schedule, and performance goals as increasing financial commitments were made. The Navy is not required to provide Congress with an update on the program's progress, including its technology development efforts, until fiscal year 2020, when $8.7 billion for lead ship construction will already have been authorized. Periodic reporting on technology development efforts in the interim could provide decision makers assurances about the remaining technical risks as the Navy asks for increasing levels of funding. Consistent with GAO's identified best practices, the Navy intends to complete much of the submarine's overall design prior to starting construction to reduce the risk of cost and schedule growth. However, the Navy recently awarded a contract for detail design while critical technologies remain unproven, a practice not in line with best practices that has led to cost growth and schedule delays on other programs. Proceeding into detail design and construction with immature technologies can lead to design instability and cause construction delays. The Navy plans to accelerate construction of the lead submarine to compensate for an aggressive schedule, which may lead to future delays if the technologies are not fully mature before construction starts, planned for 2021.
What GAO Recommends
GAO had suggested a matter for congressional consideration related to additional reporting on the Columbia class technologies, but removed it because of recent legislation that implements this requirement.
Department of Defense comments on the draft were incorporated as appropriate in this report.
Background
VA comprises a Veterans Affairs Central Office (VACO) and over 1,000 facilities and offices throughout the nation, as well as the U.S. territories and the Philippines. As shown in figure 1, VA’s three major administrations are the Veterans Health Administration (VHA), Veterans Benefits Administration (VBA), and National Cemetery Administration (NCA). The largest of the administrations, in terms of workforce, is VHA and its associated Veterans Integrated Service Networks (VISN). VHA is estimated to employ about 316,800 employees in 2017, followed by the VBA and NCA with about 22,700 and 1,850 employees, respectively. The remaining 15,000 employees are in various staff offices. VA’s budget request for fiscal year 2018 of $186.4 billion includes $82.1 billion in discretionary resources and $104.3 billion in mandatory funding. The following offices are involved in addressing misconduct at VA.
Office of Human Resource Management (OHRM): OHRM develops policies with regard to performance management and assesses the effectiveness of department-wide human-resource programs and policies.
Office of Accountability Review (OAR): OAR was established in 2014 within VA’s Office of General Counsel and was intended to ensure leadership accountability for improprieties related to patient scheduling and access to care, whistle-blower retaliation, and related disciplinary matters that affect public trust in VA.
Office of Inspector General (OIG): The VA OIG provides oversight through independent audits, inspections, and investigations to prevent and detect criminal activity, waste, abuse, and mismanagement in VA programs and operations.
Office of Accountability and Whistleblower Protection (OAWP): As required by the Department of Veterans Affairs Accountability and Whistleblower Protection Act of 2017, OAWP will take on the responsibility of, among other things, receiving whistle-blower complaints.
Corporate Senior Executive Management Office (CSEMO): CSEMO supports the entire life-cycle management of VA’s senior executives by developing policy and providing corporate-level personnel services, such as training and coaching, to VA’s senior executive workforce.
Client Services Response Team (CSRT): CSRT serves to centralize and streamline internal processes to improve VHA’s overall responsiveness to concerns of veterans, employees, and other internal and external stakeholders. This office works closely with VA and VHA program offices and facilities to review, research, and respond to inquiries sent to the Office of the Under Secretary for Health and the Office of the Secretary, as well as other concerns received via program offices within VACO, which lack a formalized response process.
National Cemetery Administration (NCA): NCA honors veterans and their families with final resting places in national shrines that commemorate their service. NCA’s Office of Management oversees and administers all human-resource management, including activities associated with labor and employee relations.
Office of the Medical Inspector (OMI): OMI is responsible for assessing the quality of VA health care through investigations of VA facilities nationwide, which include employee whistle-blower allegations referred to VA by the OSC; veteran complaints referred by the OIG, Congress, or other stakeholders; and site-specific internal reviews directed by the Office of the Under Secretary for Health.
Office of Research Oversight (ORO): ORO promotes the responsible conduct of research, serves as the primary VHA office in advising the Office of the Under Secretary for Health on matters of research compliance, and is to provide oversight of compliance with VA and other federal requirements related to research misconduct.
Office of Resolution Management (ORM): ORM provides Equal Employment Opportunity (EEO) discrimination complaint processing services to VA employees, applicants for employment, and former employees, which include counseling, investigation, and final agency procedural decisions.
Office of Security and Law Enforcement (OS&LE): OS&LE develops policies, procedures, and standards that govern VA’s infrastructure law-enforcement program. The Law Enforcement Oversight and Criminal Investigations Division is responsible for conducting investigations of serious incidents of misconduct.
Veterans Benefits Administration (VBA): VBA provides benefits and services to veterans, their families, and survivors. VBA’s Office of Management directs and oversees nationwide human-resources activities and supports ORM in processing EEO complaints filed by employees and applicants who allege employment discrimination.
The process for addressing employee misconduct involves various components within VA that are responsible for investigating and adjudicating allegations, as shown in figure 2.
Receipt of Allegation: The OIG receives allegations of criminal activity and employee misconduct from VA employees, the OSC, members of Congress, the public, or other stakeholders. The allegations received by the OIG are initially routed to the OIG Hotline Division. The OIG also receives other types of allegations outside the scope of this review, such as issues pertaining to VA employee benefits and contracts. In addition to reporting allegations of employee misconduct to the OIG, VA employees may also report allegations of misconduct directly to their supervisors.
Review and Referral of Allegation: Due to the substantial number of allegations received through the OIG Hotline Division, the OIG exercises a “right of first refusal” on misconduct cases, which allows it to take no further action, refer the case to program offices within VA for review and response, or open an investigation.
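The OIG's three-way triage can be thought of as a simple routing rule. The sketch below is purely illustrative: the function name and category labels are invented shorthand for the outcomes the report describes, not VA or OIG terminology or an actual system.

```python
def route_allegation(in_jurisdiction, specific_enough, category):
    """Illustrative sketch of the OIG's "right of first refusal" triage.

    Returns one of three outcomes described in the report: take no
    further action, open an OIG case, or refer the matter to a VA
    facility or program office. Category labels are invented shorthand.
    """
    if not in_jurisdiction or not specific_enough:
        # Matters outside OIG jurisdiction or too vague to warrant review.
        return "no further action"
    if category in ("senior official", "quality of care", "criminal"):
        # Serious allegations the OIG typically opens itself.
        return "open OIG case"
    # Allegations that warrant some action but not an internal OIG case.
    return "refer to VA facility or program office"
```

A rule like this only makes the decision points explicit; the report's point is that the actual judgment (jurisdiction, specificity, severity) is exercised by the Hotline Division case by case.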
For example, the OIG can decide to (1) take no further action on matters not within the OIG’s jurisdiction or too vague to warrant further review; (2) refer allegations that warrant some action to the OMI, OAR, or VA facilities or program offices within each administration to conduct an independent review of the allegations; or (3) open cases for further review of serious allegations of criminal activity, fraud, waste, abuse, and mismanagement. Cases opened by the OIG typically involve misconduct by senior officials or matters relating to the quality of care provided by licensed professionals. In contrast, the OIG typically refers allegations to VA facility or program offices for matters where the OIG does not have sufficient resources to open an internal case. The OIG generally does not review matters that are addressed in other legal or administrative forums, such as allegations of discrimination or whistle-blower retaliation.
Notice to Employee Once Allegations Are Substantiated: The type of appointment an employee holds determines whether the employee is to be provided advance notice of planned disciplinary action once misconduct is substantiated at the conclusion of an investigation. Employees holding a permanent appointment are entitled to receive a notice of proposed action that states the specific charges on which the proposed disciplinary action is based and informs the employee of his or her right to review the material that is relied upon to support the reasons for the action. Employees in the competitive service serving in a permanent appointment (who have completed their probationary period) are treated differently than those who are still in their probationary period or serving under temporary appointments.
An employee serving a probationary period or under a temporary appointment does not receive a notice of proposed action and may be immediately terminated because his or her work performance or conduct fails to demonstrate fitness or qualifications for continued employment. Temporary employees are terminated by notifying them in writing as to why they are being separated and the effective date of the action.
Disciplinary Action: VA Handbook 5021, Employee/Management Relations, governs policy for disciplinary and grievance procedures for all employees. Supervisory staff or appropriate higher-level officials use the results from investigations to help determine whether any disciplinary actions are warranted and, if so, the type and severity of each action. Other VA staff, such as human-resources and general-counsel staff, may also provide guidance to management in determining appropriate disciplinary actions. After determining the facts in a case, VA may employ either disciplinary or adverse action. Adverse action involves a more-severe type of discipline (e.g., removal, suspension of more than 14 days, or reduction in grade), as described in table 1.
Employee Misconduct and Disciplinary-Action Data Are Hampered by Completeness and Data-Reliability Issues
VA Collects Misconduct and Disciplinary-Action Data Using Fragmented Systems
As a federal agency, VA is required to report department-wide information on certain disciplinary personnel actions to the Office of Personnel Management (OPM). OPM’s Enterprise Human Resources Integration (EHRI) system currently collects, integrates, and publishes data for executive-branch employees on a biweekly basis. This system provides federal workforce data to other government systems and the public. To adhere to this reporting requirement, VA provides information on certain disciplinary actions, such as terminations and removals, to OPM.
The Department of Veterans Affairs Accountability and Whistleblower Protection Act of 2017 also requires the Secretary to provide a report on the disciplinary procedures and actions of the department to Congress. To understand the depth and breadth of misconduct and related issues in a large entity such as VA, comprehensive and reliable information is needed. Standards for Internal Control in the Federal Government states that an information system represents the life cycle of information used for the entity’s operational processes that enables the entity to obtain, store, and process quality information. Therefore, management should design the entity’s information system to obtain and process information to meet each operational process’s information requirements and to respond to the entity’s objectives and risks, such as the ability to systematically analyze misconduct department-wide to identify trends and make management decisions regarding misconduct. A deficiency exists when (1) a control necessary to meet an objective is missing or (2) an existing control is not properly designed, so that even if the control operates as designed, the objective would not be met. We identified 12 fragmented information systems that VA has used, or continues to use, to collect data on employee misconduct and disciplinary actions. Although VA has made efforts to develop repositories to collect information pertaining to misconduct and disciplinary actions, none of the 12 information systems contains complete information. Six of these systems collect partial misconduct and disciplinary-action information and contain fields that could potentially be shared with other systems to obtain additional information, while the other six systems are intended for internal office use only, each containing its own unique fields and values tailored to the needs of that particular office, which are not shared.
Therefore, the number of eligible fields for each information system was also limited to those not specifically designated for internal use. On the basis of our review, the 12 information systems are not currently able to communicate, or interoperate, with one another to provide a complete picture of misconduct and disciplinary actions across VA. Table 2 provides an overview of VA’s six information systems and associated data files that collect partial misconduct and disciplinary-action data that could potentially be shared with other systems. According to OHRM officials, VA’s information system for recording adverse disciplinary actions, the Personnel and Accounting Integrated Data (PAID) system, was not designed to track all misconduct cases. In addition, OHRM stated that the 53-year-old PAID system was developed primarily to track payroll actions for all employees and is the system of record that holds the department-wide personnel information reported to OPM’s EHRI system. It contains information about adverse disciplinary actions that affect employee leave or salary, or result in a Notification of Personnel Action Form (Standard Form 50). However, PAID does not track comprehensive information on instances of misconduct, such as the offense or the date of occurrence, and it does not include instances of other types of disciplinary actions, such as admonishments or reprimands, that would not affect leave or salary or result in a Standard Form 50. OHRM officials stated that VA implemented a system called HR Smart in June 2016 that is intended to replace PAID, but the agency does not plan to upgrade the functionality of the new system to enable reliable collection of misconduct information. According to OHRM, HR Smart includes the same personnel-processing functions as PAID but will allow for tracking data changes and transaction history over time.
However, as with the PAID system, adverse disciplinary actions involving leave and salary will be tracked, but other actions, such as reprimands and admonishments, will not. It also will not track information related to the offense that prompted the disciplinary action. While the HR Smart system has the capability to include modules to enhance performance features, such as the ability to track misconduct, according to OHRM officials VA does not currently have plans to implement these modules. As a result, the HR Smart system will not have the capability to track all employee misconduct department-wide and will not improve VA management’s visibility over the depth and breadth of misconduct so that it can systematically understand misconduct department-wide. VA has five additional information systems for tracking complaints or allegations of misconduct and disciplinary actions, but, similar to the agency’s PAID information system, each of these information systems contains a subset of the information that would be needed to understand all misconduct department-wide.
Data-Reliability Issues Impair VA’s Ability to Systematically Analyze Data to Evaluate Department-Wide Employee Misconduct
According to Standards for Internal Control in the Federal Government, systems should include relevant data from reliable internal sources that are reasonably free from error and faithfully represent what they purport to represent. Additionally, management is advised to process data into quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. Management should also evaluate processed information, make revisions when necessary so that the information is quality information, and use the information to make informed decisions.
Additionally, according to Standards for Internal Control in the Federal Government, management should design the entity’s information system and related control activities to achieve objectives and respond to risks. The standards add that the information system design should consider defined information requirements for each of the entity’s operational processes. Defined information requirements allow management to obtain relevant data from reliable internal and external sources. In order to achieve complete and accurate data, internal controls are needed, among other things, to ensure that fields are not left blank, data elements are clearly defined and standardized, and common data elements are included across data systems to allow for interoperability and aggregation. Our analysis of VA’s 14 data files identified three categories of issues that reduced the reliability of the data: missing data, lack of data standardization, and lack of identifiers.
Missing Data
We found that 7 of the data files we reviewed contained a majority of data within fields, but 5 data files were missing a significant amount of data within certain fields. We were unable to analyze the remaining two files due to a number of data-quality issues. Among the fields that were missing data were several that would be useful for analyzing misconduct, including complainant name, proposed action, and person of interest, as shown in table 4. For example, we found that in the OAR Legacy Referral Tracking List, 97 percent of the entries for the Proposed Action field (1,210 of 1,245) and 96 percent for the Disciplinary Action field (1,190 of 1,245) were blank. If available, comparison of the proposed-action and disciplinary-action-taken fields would allow VA to assess whether actions are consistently implemented department-wide. However, the high percentages of blank values in multiple fields impair VA’s ability to conduct a comprehensive analysis of misconduct to identify and address trends.
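A blank-rate check of this kind is straightforward to automate. The sketch below is illustrative only: the records are invented, and the function is a generic tally rather than any tool GAO or VA actually used, but the arithmetic mirrors the report's Proposed Action example, in which 1,210 of 1,245 entries were blank.

```python
def blank_rate(records, field):
    """Return (blanks, total, percent) for one field, treating None and
    empty or whitespace-only strings as blank."""
    blanks = sum(
        1 for r in records
        if r.get(field) is None or str(r[field]).strip() == ""
    )
    total = len(records)
    pct = round(100 * blanks / total) if total else 0
    return blanks, total, pct

# Invented records shaped like a tracking-list extract.
sample = [
    {"Proposed Action": "Removal"},
    {"Proposed Action": ""},
    {"Proposed Action": None},
    {"Proposed Action": "Reprimand"},
]
```

Run against the invented sample above, `blank_rate(sample, "Proposed Action")` reports 2 blanks out of 4 entries; run against a full extract, the same tally would surface the 96-97 percent blank rates the report describes.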
See appendix II for a further listing of the five data files and the corresponding fields that were missing data. In addition, we found that several data files had options such as “not applicable” or “no” for certain fields so that the field would not be left blank, but these options were not consistently used. For example, the Complainant Name field within the Legacy Referral Tracking List was blank for some entries and not applicable (N/A) for others. Accordingly, we could not determine whether data were intentionally omitted or not entered by mistake. In addition, the Offense Sustained field within the VA-Wide Adverse Employment Action and Performance Improvement Plan Database was blank for some entries and either a yes or a no for others.
Lack of Data Definition Standardization
Eight of the 14 data files we reviewed did not have key data elements that were defined consistently within and across information systems. In other words, the data files contained entries that described similar information in different ways. For example: The Employment field in the Complaints Automated Tracking System (CATS) contained data that were not mutually exclusive, or independent of one another. This field includes two distinct categories of information: employment status, such as full time or part time, and hiring authority, such as Title 5 or Title 38. This method of storing information resulted in undercounting each of the separate values due to the system’s inability to account for expected overlap. For instance, an employee could be both a full-time and a Title 5 employee, but the field tracks only one or the other. ORM officials stated that this field has since been modified to capture more options to account for the overlap. The NCA data file’s Action Proposed/Decided/Taken data were tracked in a single field and updated with the most-recent action, rather than capturing proposed actions, decided actions, and actions taken in separate fields.
We also identified standardization issues with the newly updated VA-Wide Adverse Employment Action and Performance Improvement Plan Database. For example, we found 15 different variations of Registered Nurse position names, such as “Registered Nurse,” “Staff RN,” and “RN.” In addition, we found 28 alternate values that identified Diagnostic Radiologic Technologists (e.g., Diagnostic Radiologic Technologist, Diagnostic Radiological Technologist, and Radiologic Technologist). See appendix III for a description of the fields that lacked standardization within the eight data files.
Lack of Identifiers
We determined that 5 of the 14 data files did not have identifiers that would allow comparisons of information across systems. Identifiers are important because they reference one unique individual or case, which makes it possible to analyze historical data pertaining to all records with the same identifier and to analyze trends in employee misconduct over time. For example: The OAR VA-Wide Adverse Employment Action Database and the VBA data files had a combined total of 4,487 closed cases of misconduct that received adverse corrective action during the combined period of November 2013 through December 2016. We found that these two data files did not contain unique identifiers for complainants or accused individuals for a given case. OAR officials stated that this system does not contain Personal Identifying Information, since its purpose is to track proposed and taken adverse actions. If more-specific information is needed, OAR staff coordinate with the human-resource point of contact. Although OAR may obtain additional information, this information is not entered into the OAR VA-Wide Adverse Employment Action Database, where it would assist with conducting analysis.
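A normalization table is one common control for the position-name problem: free-text variants are collapsed onto a single canonical value before analysis. The variants below come from the report's Registered Nurse and Diagnostic Radiologic Technologist examples, but the mapping itself is a hypothetical sketch; a durable fix would instead enforce a controlled vocabulary at data entry.

```python
import re

# Hypothetical canonicalization table built from observed variants.
CANONICAL = {
    "registered nurse": "Registered Nurse",
    "staff rn": "Registered Nurse",
    "rn": "Registered Nurse",
    "diagnostic radiologic technologist": "Diagnostic Radiologic Technologist",
    "diagnostic radiological technologist": "Diagnostic Radiologic Technologist",
    "radiologic technologist": "Diagnostic Radiologic Technologist",
}

def normalize_position(raw):
    """Collapse case and whitespace, then map known variants to a
    canonical title; unknown titles pass through unchanged."""
    key = re.sub(r"\s+", " ", raw.strip().lower())
    return CANONICAL.get(key, raw.strip())
```

With a table like this, the 15 Registered Nurse variants would count as one position rather than 15, though any variant missing from the table still slips through, which is why entry-time validation is the stronger control.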
VAPS tracks misconduct in two separate subsystems: one tracks traffic violations and other administrative offenses, and the other tracks more-egregious offenses, such as criminal violations. These subsystems also do not have unique identifiers that would allow data matching between the two, which could impede the analysis of this information. Even with both files, there is no ready way to determine the complete number of individuals with misconduct in both files due to the lack of a shared identifier. The newly updated VA-Wide Adverse Employment Action and Performance Improvement Plan Database does not contain unique identifiers, such as employee identification number, name of the complainant or accused, or other linking variables, to allow for the analysis of historical trends or comparison of information among other information systems. The data-quality issues described above are due in part to most VA information systems not having data dictionaries, field definitions, or other documented guidance and procedures on data entry, or automated edit checks to control for erroneous entries and blank fields. Absent such guidance and procedures, VA lacks assurance that employees will enter complete and accurate information in the various data systems. Further, the lack of unique identifiers, such as employee identification number, case number, or other linking variables, for each of the records does not allow for analysis of historical trends or comparison of information among different information systems. Consequently, this precludes VA from determining the frequency and nature of allegations by specified category or identifying trends, thus impeding senior officials’ ability to analyze misconduct department-wide and develop corrective actions.
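The value of a shared identifier can be seen in a small example: when two subsystems carry a common key, their records can be combined and de-duplicated directly, whereas without one there is no reliable way to count distinct individuals across files. The employee IDs and offense records below are invented purely for illustration.

```python
# Invented records standing in for two subsystems, one for
# administrative offenses and one for more-egregious offenses.
# The shared employee_id key is the hypothetical linking variable.
admin = [
    {"employee_id": "E100", "offense": "traffic violation"},
    {"employee_id": "E101", "offense": "administrative offense"},
]
egregious = [
    {"employee_id": "E100", "offense": "criminal violation"},
]

def distinct_subjects(*record_sets):
    """Count unique individuals across any number of record sets,
    relying on the shared employee_id key to de-duplicate."""
    return len({rec["employee_id"] for rs in record_sets for rec in rs})
```

Here E100 appears in both files but is counted once; without the shared key, the two files could only be matched on error-prone free-text fields such as names.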
VA Does Not Consistently Adhere to Policies for File Retention and Adjudication Documentation Pertaining to Employee Misconduct Allegations

VA Directive 5021, Employee/Management Relations, governs policy for disciplinary procedures for all employees and outlines the provisions for the adjudication of each disciplinary action and associated file documentation requirements. Specifically, files must be established before a notice of proposed adverse action is issued to the employee to document that the adjudication procedures were followed. The file must contain all available evidence upon which the notice of proposed action is based and that supports the reasons in that notice. In addition, each file should contain specific documentation related to the adjudication of employee misconduct.

VA Handbook 5021 states that disciplinary actions and associated adjudication procedures for all VA employees appointed under Title 5 are governed by three basic principles: (1) an employee shall be informed in writing honestly and specifically why the action is being brought against him or her; (2) an employee shall be given a reasonable opportunity to present his or her side of the case; and (3) the employee and representative shall have assurance of freedom from restraint, interference, coercion, discrimination, or reprisal in discussing, preparing, and presenting a defense.

Our review of a generalizable sample of 544 misconduct case files (from a universe of 23,622 files) associated with disciplinary actions that affect pay from October 2009 through May 2015 revealed that VA officials did not consistently adhere to VA's policy for retaining files containing evidence of misconduct. Specifically, VA was unable to provide 10 percent (55 of 544) of the files we requested. We determined that administrations and program offices within VA have various record-retention schedules.
Offices that have not established a record-retention schedule refer to the general records schedule developed by the National Archives and Records Administration (NARA). However, we found that some offices are misinterpreting OPM and NARA guidance and specify the record-retention period for adverse action files as a range of 4 to 7 years rather than selecting a specific number of years in their record-retention schedules. All of the requested files were within VA's specified record-retention range at the time of our review. The files that were unaccounted for were dispersed throughout most of the VISNs, but one VISN alone was unable to account for 19 of the missing files in our sample. On the basis of our weighted analysis of the generalizable sample, we estimate that VA would not be able to account for approximately 1,800 files in the full population that were within the specified record-retention period.

In addition, VA officials did not consistently adhere to VA's policy for documenting that procedures were followed in the adjudication of misconduct cases. Based on our generalizable sample, we identified 22 out of 36 file requirements for which VA was not able to consistently demonstrate compliance with VA policy because of documentation missing from the files. Specific to both Title 5 and Title 38 permanent-employee misconduct case files, table 5 shows the estimated number and percentage that deviated from file documentation requirements. A list of the 22 identified requirements and the percentage of files not in compliance can be found in appendix IV. As table 5 indicates, Title 5 and Title 38 permanent-employee files did not always contain documentation that employees were informed of the reason the action was brought against them.
For example, on the basis of our generalizable sample, we estimate that the advance notice of proposed action, which includes a statement of the specific alleged misconduct upon which the proposed action is based, was not included in 16 percent of the files department-wide. A final decision letter, which contains a statement of the decision official's determination regarding which charges, if any, in the advance notice were sustained, was not included for an estimated 15 percent of all files. Further, an estimated 35 percent of all files did not include a written acknowledgement from the employees that they received the final decision letter in person, and an estimated 23 percent of all files did not include the required return receipt for certified mail indicating that the decision letter was mailed to the employee.

In addition, Title 5 and Title 38 permanent-employee files did not always contain documentation that employees were provided a reasonable opportunity to present their side of the case. Our review found that permanent-employee disciplinary files did not adhere to basic principles outlined in VA Handbook 5021 and lacked evidence to demonstrate that employees were adequately informed regarding their rights during the adjudication procedure. Specifically, our generalizable sample found that an estimated 21 percent of all files did not include statements regarding the employee's rights to due process, such as his or her entitlement to be represented by an attorney or other representative. In addition, an estimated 8 percent of all files did not mention that more information regarding appeal rights could be obtained by consulting Human Resources Management offices. For files where the employee provided an oral reply in response to proposed disciplinary action, an estimated 29 percent of files did not include the required written summary, which is to be signed by the official hearing the oral reply.
Where a written reply was submitted, an estimated 11 percent of files did not include a copy of the employee's written reply. Further, VA officials did not consistently adhere to VA best practices specific to Title 5 permanent employees. We found that in a majority of these files (an estimated 72 percent) the proposal letters did not include a statement assuring the employee that he or she had freedom from restraint, discrimination, or reprisal in discussing, preparing, and presenting a defense. VA Handbook 5021, Employee/Management Relations, also states that Title 5 employees should provide their written responses through supervisory channels to the decision official. We estimate that in 6,819 (47 percent) of Title 5 permanent-employee files, the employee's written reply was not provided through supervisory channels to the decision official.

Although OHRM is responsible for assessing the effectiveness of department-wide human resource programs and policies under VA Handbook 5021, according to OHRM officials, each facility is responsible for overseeing the implementation of policies and guidelines pertaining to how disciplinary actions are processed. We found no evidence that OHRM has assessed whether documentation exists demonstrating adherence to policy governing cases involving disciplinary actions, provided oversight of VA's implementation of record-retention requirements, or verified that human-resource personnel adhere to the basic principles outlined in policy to ensure employees are informed of their rights during the adjudication process. This lack of oversight of HR policies increases the risk that employees will not be adequately informed of their rights during the adjudication process. Accordingly, employees may not (1) be provided with information on why an action is being brought against them, (2) be provided with a reasonable opportunity to present their case, and (3) be adequately protected from potential reprisal in preparing their defense.
Regarding retention of records, according to NARA, disciplinary and adverse action case files should be destroyed no sooner than 4 years but no later than 7 years after the case is closed. According to OPM, to implement this authority, each agency must select one fixed retention period between 4 and 7 years and publish the retention in the agency's records disposition manual. We determined that some offices are misinterpreting OPM and NARA guidance by not selecting a specific number of years in their record-retention schedules. For example, three of the six policy record-retention schedules we reviewed did not establish a specific number of years for record retention. Specifically, record-retention policies for the Office of Information and Technology, VACO staff offices, and VBA specified the record-retention period for adverse action files as a range of 4 to 7 years rather than selecting a fixed retention period. We also found that the Records Control Schedule pertinent to VACO was dated June 30, 1967, with no references to new or revised items since 1969.

Our results are consistent with an October 2016 inspection conducted by NARA. The inspection report contained 16 findings and 19 recommendations for improvement of the records-management program at VA. Among the findings and recommendations were the following.

Finding: The VA records management program has not ensured that the VACO maintains a current Records Management Handbook and a current Records Control Schedule, which together establish program objectives, responsibilities, and authorities for the creation, maintenance, and disposition of agency records.
Recommendation: The Department Records Office must update and maintain the VACO handbook and the Records Control Schedule for Central Office Staff Offices and the Offices of the Assistant Secretaries to include specific Records Management roles and responsibilities for all VACO staff and to include mandates for implementation of records management policies and procedures in accordance with Federal statutes and regulations.

Finding: The VA Departmental Records Management program does not conduct regular records management evaluations within VACO and the Offices of the Secretary and Assistant Secretaries or monitor the oversight activities of the Administrations.

Recommendation: The VA Departmental Records Management program, working with the Administrations, VACO, and Enterprise Risk Management, must establish effective Records Management evaluation programs to monitor VA compliance with Federal regulations.

Recommendation: The VA Departmental Records Management program, working with the Senior Agency Official for Records Management, must establish effective Records Management evaluation programs to monitor the records management practices within the Office of the Secretary and Assistant Secretaries to ensure compliance with Federal regulations.

Finding: VACO Staff Offices and the Offices of the Assistant Secretaries are not routinely conducting records inventories.

Recommendation: VACO Staff Offices and the Offices of the Assistant Secretaries, with support from the Department Records Management program, must conduct inventories of existing electronic and non-electronic records to identify scheduled, unscheduled, and vital records.

In response to NARA's findings, VA is to submit a plan of corrective action that specifies how the agency will address each inspection report recommendation, including a timeline for completion and proposed progress-reporting dates.
VA does not have a method in place to evaluate the implementation of records-management practices outside of those being conducted by VHA and VBA. Accordingly, VA has not been conducting records-management oversight with any uniformity department-wide. Further, VA's use of multiple retention periods for adverse action files, and in some cases the lack of adherence to OPM and NARA guidance in defining a specific retention period for these files, results in inconsistent retention of these files across VA.

Investigative Standards Were Not Consistently Followed to Ensure That Senior Officials Were Held Accountable

VA Facility and Program-Office Responses to Allegations of Misconduct Did Not Consistently Follow OIG Policy

The OIG receives allegations of employee misconduct from VA employees, the OSC, members of Congress, the public, and other stakeholders. When the OIG receives allegations, it can take no further action, open an investigation, or refer the case to facility or program offices within VA for review and response. For cases referred to facility or program offices, the OIG has developed a policy for VA facilities and program offices to use when investigating allegations of misconduct. This policy includes six elements that VA facility and program officials are to incorporate in their investigations, as shown in table 6. According to OIG officials, if the reviewing employees have concerns about the adequacy of the response provided, the OIG can either ask for additional information to supplement the response or open an internal case. Departmental heads (Under Secretaries for Health, Benefits, and Memorial Affairs, Assistant Secretaries, and other key officials) are responsible for ensuring that referrals are properly reviewed, documented, and answered within specified time frames.
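One way this kind of review could be made less manual is a checklist-style completeness check applied to each response before a case is closed. The element names below are inferred from the report's subsequent discussion of the six elements and are hypothetical labels, as is the example response; they do not represent the OIG's actual data fields:

```python
# Hypothetical sketch of a completeness check for referral responses.
# The six element keys are inferred labels, not the OIG's actual fields.
REQUIRED_ELEMENTS = [
    "independent_higher_grade_review",
    "all_allegations_addressed",
    "substantiation_stated",
    "corrective_actions_described",
    "supporting_documentation_attached",
    "point_of_contact_listed",
]

def missing_elements(response):
    """Return the required elements a response fails to satisfy."""
    return [e for e in REQUIRED_ELEMENTS if not response.get(e)]

example_response = {
    "independent_higher_grade_review": True,
    "all_allegations_addressed": True,
    "substantiation_stated": False,
    "corrective_actions_described": True,
    "supporting_documentation_attached": False,
    "point_of_contact_listed": True,
}
```

A check of this shape would surface an incomplete response, such as one lacking a clear statement of substantiation or supporting documentation, before the case is closed rather than after.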
Our review of the 23 OIG cases of alleged misconduct between calendar years 2011 and 2014 involving senior officials found that VA facility and program offices did not consistently follow policies and procedures established by the OIG for investigating such allegations. In several instances, VA facility and program offices did not include one or more of the six required elements in their investigative response to allegations of misconduct. In addition, our review of the 23 cases found instances in which VA facility and program offices did not include sufficient documentation for their findings or provide a timely response to the OIG. The OIG was not able to produce the documentation provided by the facility or program office that was used to close 2 of the 23 cases in our review. All of the requested files were within the OIG's 7-year record-retention period at the time of our review.

As shown in table 7, we identified four cases that did not contain evidence of an independent review by an official separate from, and at a higher pay grade than, the accused. In three of the four cases that were not reviewed by an independent official at a higher grade, the review was performed by the medical center director, who was one of the accused named in the allegation. For example, in one case involving alleged time-and-attendance abuse by a physician, the medical center director, who was also named in the allegation as having received a similar complaint against the physician 2 years earlier, reviewed the allegations made against the physician and himself. The documentation provided showed that the medical center director conducted the investigation, found the allegations were not substantiated, and implemented no corrective actions. In all four cases, both the independence and the higher-grade criteria were not met because the accused senior officials investigated allegations against themselves.
As shown in table 8, we generally found that VA facility and program offices reviewed each allegation contained in the original referrals, although in one case the reviewer did not respond directly to all allegations.

As shown in table 9, VA facility and program offices clearly indicated their findings for each allegation in 14 of the 21 cases of misconduct involving senior officials for which files could be located, as well as their assessment of whether the allegations were substantiated or unsubstantiated. However, we identified seven cases in which VA discussed its findings but did not clearly indicate whether all allegations were substantiated or unsubstantiated. Responses lacking a clear statement of substantiation may be more difficult for subsequent reviewers, including OIG and OAR investigators, to track and follow up on where necessary. For example, in one case involving 11 allegations, no statement of substantiation was provided, but VA's response included seven recommendations, three of which involved disciplinary action. We did not find evidence in the case file that OIG personnel followed up to clarify this discrepancy, and the case was closed.

As shown in table 10, most allegations involving senior officials (16 of 21 cases for which files could be found) were not formally substantiated and did not require a recommendation for corrective action based on OIG case-referral criteria. Specifically, the criteria require a description of corrective actions taken or proposed as a result of substantiated allegations, but make no mention of unsubstantiated allegations as part of VA's response. For one substantiated allegation, however, we found no evidence of a recommendation for corrective action.

Table 11 shows that in 17 cases VA facility and program offices did not provide the supporting documentation they used to reach their conclusions about the OIG case referrals.
In 17 cases, including one case reviewed by an AIB panel, VA referenced documents reviewed but did not attach any of the supporting evidence. OIG case-referral criteria state that VA facility or program offices must provide the supporting documentation used in their review, such as copies of pertinent documents. However, the criteria do not specify whether copies of all documentation reviewed must be included in the file. The supporting documentation required under OIG policy will vary depending on the circumstances of the case, but the documents used to support the findings and recommendations should be included. For example, we reviewed one case where pertinent documents were referenced to support the allegation, but the documents supporting the findings and recommendation were not included. The case contained allegations involving false patient wait-time documentation and abuse of authority. Specifically, a medical center director instructed staff to review patient wait times between follow-up appointments in order to meet VA's 14-day timeliness metric. The investigation revealed that VA staff had changed several hundred veterans' appointment wait times. The investigation concluded that the false-documentation allegation was substantiated, but attributed the cause to staff not understanding how to enter a follow-up appointment date into the system. However, there was no documentation in the files to support (1) the finding that the medical center director had not abused his authority by instructing staff to review wait times greater than 14 days to determine how they could be reduced, and (2) the conclusion that the original wait times were entered in error. Absent supporting documentation, it is difficult for the OIG to determine whether enough evidence was gathered before closing alleged cases of misconduct that were found to be unsubstantiated or closing substantiated cases of misconduct that required further action.
VA Directive 0701 states that copies of voluminous transcripts of interviews, the entire claims folder, and medical charts are not necessary. However, VA Directive 0701 further states that such materials should be available if the OIG subsequently requests them within the record-retention period. Case examples of allegations reviewed, and subsequently closed, by the VA OIG based on its evaluation of evidence provided by facility and program offices in response to allegations of misconduct can be found in appendix V.

As shown in table 12, VA facilities' or program offices' response letters to the OIG included a point of contact for further questions in 15 of the 21 OIG case referrals involving senior officials, including the individual's name and a means of contact (phone or e-mail). In 2 of the 15 cases where a point of contact was provided, the contact was also one of the accused in the allegation. Although that is not technically a violation of OIG criteria, it likely presents a conflict of interest with respect to independent reviewers obtaining objective case information. In six other cases, no contact was listed, although the letter was signed by the reviewer. If a specific point of contact, including position title, is not identified, employees involved in or following up on the case may erroneously assume that the default contact is the reviewer, who may not be the appropriate point of contact and may not be able to provide objective case information.

OIG guidelines state that VA facilities and program offices assigned Hotline case referrals are responsible for reporting written findings to the Hotline Division within 60 days, unless an extension is requested. Our review of the 21 cases found instances in which VA facility or program offices did not provide a timely response to the OIG.
Table 13 shows five instances in which VA facility or program offices submitted a response after the deadline requested by the OIG. One of these responses was late even after the OIG had granted an extension. Specifically, in one of the five cases, involving allegations of abuse of authority by a VA medical center director, the reviewer requested an extension, which is permitted by OIG policy, but still missed the revised deadline. The five case files did not contain any information regarding follow-up actions taken in response to the delays.

According to OIG officials, when a case has been referred to a program office for investigation, the OIG reviews the program office's response for completeness and sufficiency before closing the case. However, there is no requirement for the OIG to ensure that responses contain the six elements listed in VA Directive 0701 and confirm that the case referral's allegations have been addressed. Consequently, the lack of verification could have contributed to responses containing insufficient evidence that did not meet the requirements outlined by the OIG. Additionally, VA facility and program offices have not consistently adhered to VA Directive 0701 and do not always provide supporting documentation for their findings and recommendations or a timely response when reporting findings to the OIG's Hotline Division. Inconsistent adherence to the reporting standards the OIG provides to VA facilities and program offices for investigating and resolving misconduct case referrals from the OIG Hotline impedes VA's ability to ensure that misconduct cases are being handled appropriately.

According to OIG officials, the OIG has taken steps to enhance the review of case responses. Specifically, OIG officials stated that in April 2018 the OIG implemented a new Enterprise Management System to reduce reliance on certain manual processes.
According to OIG officials, Hotline analysts will now have more time to review their work and perform other quality-assurance activities. In implementing this new system, it will be important for the OIG to consider how the system can assist in ensuring that requirements are met and that responses are received on time.

OAR Data Indicate That Senior Officials Involved in Substantiated Cases of Misconduct May Not Always Be Held Accountable

Our review of VA's information systems that track misconduct involving senior officials department-wide indicates that they may not always be held accountable for misconduct. Specifically, (1) misconduct was sometimes substantiated, but the proposed disciplinary action was not taken; (2) misconduct was sometimes substantiated, but no disciplinary action was recommended; (3) previous penalties did not have a corrective effect on officials found to have engaged in repeated acts of misconduct, who remained in VA management positions; and (4) senior officials violated separation-of-duty policy when taking disciplinary action. VA Handbook 5021 allows the deciding official to determine the appropriate disciplinary action if one or more allegations are substantiated. However, the disciplinary action may not be more severe than what had been proposed.

Misconduct Substantiated, but Proposed Action Not Always Taken

In several cases, misconduct was substantiated, but the proposed action was not taken. Our review of the OAR Legacy Referral Tracking List identified 17 officials between calendar year 2011 and May 2015 with substantiated misconduct for whom action was proposed. However, in some of these cases the officials were given a lesser penalty than the one proposed, while in other cases there is no evidence that action was taken. As shown in figure 3, we found that for 12 of the 17 officials with substantiated misconduct, an adverse disciplinary action (removal) was proposed.
Of those 12 officials, 3 were removed, 2 received a suspension, 4 received a reprimand or admonishment, 2 were allowed to resign or retire before receiving disciplinary action, and we found no evidence of disciplinary action for the remaining individual. For the other 5 officials, actions such as counseling, admonishment, suspension, or reprimand were proposed. Of these 5 officials, 2 received the actions that were proposed, 1 received a lesser penalty than what had been proposed, 1 was allowed to retire before receiving action, and we found no evidence of the proposed action for the remaining individual.

For the two officials for whom there was no evidence that disciplinary action was taken, we found nothing within the PAID information system or the personnel files to show that these officials received the action proposed in the OAR Legacy Referral Tracking List. Counseling was proposed for one official, and removal from the position for the other. OAR did not provide us with evidence that either official had received the proposed action.

We also reviewed an additional 15 cases that involved a fact-finding or an AIB. Our review of these cases found that 11 out of 23 officials were associated with instances of substantiated misconduct for which action was recommended. For 4 of the 23 officials whose proposed-action field was populated, the information within the OAR Legacy Referral Tracking List reflected the action recommended. The applicable data fields for the remaining 19 officials within the OAR Legacy Referral Tracking List either did not agree with the action recommended or were blank. This review also identified two officials with substantiated misconduct for whom OAR did not provide evidence that the proposed disciplinary action was taken: the two officials were involved in a case concerning alleged whistle-blower retaliation at the Phoenix VA Health Care System. The investigative report documented that the allegations were sustained.
The retaliation included allegations of involuntarily reassigning the whistle-blower to another position, placing the whistle-blower on administrative leave, and lowering performance pay ratings following disclosures regarding poor patient care and nursing triage errors. Appropriate administrative action for persons identified as having engaged in retaliation was recommended. We did not find any evidence in the PAID system that these two officials involved in retaliation received disciplinary action. OAR provided documentation to show that no action was taken against one official, and was unable to provide documentation showing that disciplinary action had been taken against the other. The official who received no action received approximately $11,500 in performance pay during the 2-year period following the allegations.

Misconduct Substantiated, but No Action Recommended

OAR's quality-review process for investigative reports does not ensure that reports with findings of substantiated misconduct include recommendations for action. Our review of OAR's Legacy Referral Tracking List identified 70 out of 1,245 closed cases involving officials where misconduct was either substantiated or partially substantiated, but no disciplinary action was recommended. One case involved three allegations of poor dental care provided to patients by three different senior officials: one physician cut underneath a patient's tongue with the bur of a hand-piece drill (substantiated), another administered medication the patient was allergic to (partially substantiated), and the third extracted the wrong tooth (substantiated). We did not find any evidence in the PAID system that these senior officials received disciplinary action. Further, OAR did not provide documentation to show that any disciplinary action had been proposed or taken.
The physician who cut underneath a patient's tongue received performance pay totaling $15,000 approximately 6 days after the investigation had concluded that misconduct was substantiated. As of March 2018, two of these senior officials had received performance pay and appeared to still be employed at VA. While an investigation was conducted that substantiated (or partially substantiated) the allegations, there is an increased risk that some substantiated misconduct will go unaddressed if there is no recommendation for corrective action.

Senior Officials with Misconduct Remain in Management Positions

Our review of OAR's Legacy Referral Tracking List indicated that some officials who had been disciplined for misconduct remained in positions where they were responsible for proposing or deciding disciplinary action for other employees. We identified 15 officials in the OAR VA-Wide Adverse Employment Action Database who received disciplinary action between 14 days and 1 year prior to proposing disciplinary action for another employee. Most of the 15 officials (12) had received a suspension. We also found that five officials in the OAR Legacy Referral Tracking List had received prior disciplinary actions for offenses unrelated to the new OAR allegations. A prior history of disciplinary actions indicates that some officials may be repeat offenders for whom the previous penalties did not have the desired corrective effect. For example, 4 of the 5 officials were suspended for a different offense prior to being the subject of a new allegation. One of the four was suspended less than 2 months prior to being the subject of a new allegation, while another received a suspension before, and again approximately 7 months following, the OAR allegation. According to VA Handbook 5021, the deciding official must use the "Douglas" factors, which include the employee's past disciplinary record, to determine a reasonable penalty.
One of the five VA officials was eventually removed approximately 6 months after the new OAR allegation. In analyzing cases involving senior management, we noted that the OAR Legacy Referral Tracking List often did not accurately reflect the disciplinary action that was decided based on the results of the investigation. In numerous instances, the applicable data fields in the OAR Legacy Referral Tracking List indicating the proposed and final disciplinary action were blank. In the cases where the disciplinary fields were populated, the data usually did not agree. Specifically, for 32 out of the 40 records we reviewed where misconduct was substantiated, the final disciplinary action taken did not match the information within the OAR Legacy Referral Tracking List. When disciplinary actions are taken in response to findings of misconduct but are not entered within an appropriate information system, or are inaccurately recorded, it is more difficult to (1) monitor whether disciplinary actions have been implemented and (2) ensure that information relevant to management decision making is available. Further, without a prior record of misconduct or disciplinary action, senior officials who are repeat offenders may not receive the appropriate penalty.

Pursuant to the Department of Veterans Affairs Accountability and Whistleblower Protection Act of 2017, OAWP will now be responsible for receiving, reviewing, and investigating allegations of misconduct, retaliation, or poor performance involving senior officials. According to OAWP officials, their office investigates allegations of misconduct at the senior level only. OAWP officials also stated that misconduct issues that occur below the senior level will be referred to each of the three major VA administrations for investigation and resolution.
In addition to the VA-Wide Adverse Employment Action and Performance Improvement Plan Database, OAWP officials stated that their office has implemented two additional information systems that are used concurrently to capture case information. OAWP officials stated that they are currently working with VA Information Technology to assess options for other case-management systems that could consolidate these three information systems into one comprehensive system. VA Officials Violated Separation-of-Duty Policy When Taking Disciplinary Action against VA Employees VA Handbook 5021, Employee/Management Relations, states that the decision on a proposed major adverse action will be made by an official who is in a higher position than the official who proposed the action, unless the action is proposed by the Secretary. Standards for Internal Control in the Federal Government states that management should divide or segregate duties among different people. Our review of the OAR VA-Wide Adverse Employment Action Database, OAWP VA-Wide Adverse Employment Action and Performance Improvement Plan Database, and VBA data file identified examples in which VA officials did not follow separation-of-duty requirements. As shown in table 14, 73 (out of an estimated 7,886) VA officials acted as both the proposing and deciding official in cases involving removals for employees found to have engaged in misconduct. Fourteen VA officials acted as both the proposing and deciding official in two or more cases. One of these 14 officials acted as both the proposing and deciding official for seven different removal cases. Further, our review of 29 VA officials found that none received disciplinary action for violating separation-of-duty policy. The systemic lack of adherence to VA's separation-of-duty policy reflects a lack of controls that allows such activity to occur.
Implementing such controls would help VA decrease the risk of abuse when officials act as both proposing and deciding officials. VA Has Procedures for Investigating Whistle-Blower Allegations of Misconduct, but Investigations Can Lead to Potential Conflicts of Interest VA has procedures in place to ensure that allegations of misconduct are investigated, but these procedures allow VA program offices or facilities where a whistle-blower has reported misconduct to conduct the investigation. According to VA officials, investigations that are deemed necessary are occasionally ordered directly by the head of the facility or by VA leadership, which then takes the lead on the investigation into the allegation. Alternatively, an OIG official stated that if allegations of misconduct are received by the OIG, the OIG has the option of investigating the allegation or exercising a "right of first refusal" whereby it refers allegations of misconduct to the VA facilities or program offices where the allegation originated to complete an independent review and provide a response to the OIG. As shown in figure 4, the majority of contacts the OIG received (127,265 out of 133,435) from calendar years 2010 through 2014 were not investigated for several reasons, such as insufficient evidence or lack of jurisdiction. Of those contacts that were investigated, the majority (4,208 of 6,170 investigated contacts) were not investigated by the OIG but rather were referred to facility or program offices for investigation. Whistle-blowers also have the option of reporting alleged misconduct outside VA by filing a disclosure with the OSC, and may do so if they believe there has not been a resolution to their complaint internally. If the OSC determines that there is substantial likelihood of wrongdoing, it may refer the disclosure back to the Secretary of Veterans Affairs for further investigation.
According to OSC officials, as a general policy, the OSC will not refer a disclosure to the Secretary if the OIG is already conducting an investigation of that particular complaint and defers to the OIG to finalize the investigation. According to OIG officials, the OIG may, in turn, exercise its "right of first refusal" when cases are referred from the OSC. Consequently, this process can result in a disclosure that was originally made to the OSC being referred back to the facility or program office where the allegation originated. As shown in figure 4, the OSC referred 172 of 942 disclosures (18 percent) filed by VA employees back to the Secretary of Veterans Affairs for further investigation from calendar years 2010 through 2014. Of the 172 disclosures referred, VA referred 53 back to the facility or program offices where the complaint originated and 119 to the OIG. The independence of officials conducting investigations or reviewing their results is paramount to the integrity of the process, both in deed and in appearance. According to VA Directive 0700, the decision whether to conduct an investigation should not be made by an official who may be a subject of the investigation, or who appears to have a personal stake or bias in the matter to be investigated. Moreover, according to OIG policy, investigations referred to VA offices must be reviewed by an official independent of and at least one level above the individual involved in the allegation. VA does not have oversight measures to ensure that all referred allegations of misconduct are investigated by an entity outside the control of the facility or program office involved in the misconduct, which would help ensure independence. VA OIG officials acknowledged that there have been concerns about referring cases back to the chain of command because the OIG is unsure where cases go once they are referred.
The investigation of allegations of misconduct by the program office or facility where the complaint originated may present the appearance of a conflict of interest because managers and staff at facilities may investigate themselves or allegations in which they may have a personal stake or bias. Consequently, there may be an increased risk that the results of the investigation are minimized, not handled adequately, or questioned by the OSC or the individual who made the original allegation. Disclosures Investigated by VA Facility and Program Offices According to VA Directive 0700, significant incidents occurring, and issues arising, within VA facilities or offices shall be reported and investigated as necessary to meet the informational and decision-making needs of VA. Primary responsibility in this regard rests with the chief executives of the facility or staff office involved, and with their supervisors in VA and its administrations. According to an OIG official, VA (the Secretary or a delegate) sends disclosures received from the OSC to the OIG, which may then refer them to VA facility or program offices for further review and investigation. According to OSC officials, for cases that are referred to a program office, the OSC requires that the Secretary or delegate provide a report that outlines its conclusions and findings. This reporting is not required for disclosures where an ongoing OIG investigation is already under way. According to OSC officials, for each disclosure, the OSC is to review the report for statutory sufficiency and determine whether the findings of the agency head appear reasonable. The OSC is to send its final determination, report, and any comments made by the whistle-blower to the President and responsible congressional oversight committees. The OSC has raised concerns in its reports to the President about investigations conducted by VA program offices and facilities.
Of the 172 whistle-blower disclosures referred by the OSC from calendar years 2010 through 2014, the Secretary of Veterans Affairs referred 53 to facility and program offices. Our review of these 53 OSC reports found that the OSC had concerns about the conclusions VA reached in 21 (40 percent) of the 53 disclosure cases. For example, the OSC found that the conclusions in some VA reports were unreasonable because VA reached its conclusion without interviewing the witness, provided shifting explanations that strained credibility and did not provide evidence of an unbiased investigation, ignored whistle-blower concerns by refusing to investigate allegations, and refused to acknowledge the impact on the health and safety of veterans seeking care after confirming problems in these areas. For disclosure cases that were referred from the OIG to facility and program offices during the 2010–2014 time frame of our review, the OIG acknowledged that these concerns arose because of a lack of communication between the department and the OIG regarding the scope of the review. At the time of our review, VA did not have a procedure in place to ensure that the conclusions reached for investigations involving OSC disclosure cases are reasonable and meet the informational and decision-making needs of VA such that all allegations are addressed. More recently, the OIG has started to communicate the scope of its reviews that involve matters referred by the OSC to the Office of the Secretary. In implementing this new process, it will be important for the Office of the Secretary to ensure that any allegations outside the purview of the OIG's investigation are fully addressed by a departmental entity in accordance with OSC requirements. Disclosures Investigated by the VA OIG As shown in figure 4, of the 172 disclosure cases referred to VA by the OSC, a total of 119 cases were referred to the OIG.
The OIG had conducted, or was already conducting, an investigation of the particular allegations for all 119 disclosures. Since these 119 disclosure cases were already under investigation by the OIG, the OSC deferred to the OIG’s investigation for these cases. A total of 37 of these 119 disclosure cases that were referred to the VA OIG were submitted to the OSC anonymously. Therefore, we were unable to conduct a review of these investigations because there was no information available to identify the individuals involved. According to Standards for Internal Control in the Federal Government, management’s ability to make informed decisions is affected by the quality of information. Accordingly, the information should be appropriate, timely, current, accurate, and accessible. The oversight body oversees management’s design, implementation, and operation of the entity’s organizational structure so that the processes necessary to enable the oversight body to fulfill its responsibilities exist and are operating effectively. Our review of the remaining 82 disclosure cases determined that the OIG does not have procedures in place to track cases that were referred from the OSC for further investigation. According to OIG officials, the OIG’s information system did not have a method in place to ensure that OSC case numbers are linked to the OIG investigative case number and final report. Consequently, the OIG was unable to produce the investigative documentation for these 82 disclosures. According to OIG officials, OSC case numbers and associated Hotline numbers are currently tracked in a spreadsheet until the implementation of a new system. The inability to locate investigative documentation prevents a third party from verifying whether the OIG examined the disclosures, whether any recommendations were addressed, or whether appropriate disciplinary action was taken for these cases. 
In addition, because the OSC defers to the OIG for investigations that the OIG had already completed or was conducting, the OSC and the individuals who made the allegations do not have documentation to demonstrate that the allegations were addressed. This lack of information directly affects management's ability to make sound decisions relating to investigative matters. Pursuant to the Department of Veterans Affairs Accountability and Whistleblower Protection Act of 2017, OAWP will be responsible for recording, tracking, reviewing, and confirming implementation of recommendations from audits and investigations involving whistle-blower disclosures, including the imposition of disciplinary actions and other corrective actions contained in such recommendations. According to OAWP officials, the whistle-blower disclosure process will be similar to the current process when cases are referred to facility and program offices for investigation. OAWP will follow up on any open points with the level of leadership that is most appropriate in each case, such as the medical center or VISN director. Case details will be tracked through the three active databases that are being used concurrently. OAWP is currently working to develop an internal process that will bring the investigative communities together. For instance, OAWP would like to monitor cases that are referred to VA facility and program offices, but it does not currently have documented criteria to guide the process. According to OAWP officials, OAWP is finalizing new policies in the form of a policy manual and handbook. However, these officials were unable to provide a time frame for completion of the published guidance.
VA Data and Whistle-Blower Testimony Indicate That Retaliation May Be Occurring Individuals Who Reported Wrongdoing Are More Likely to Receive Disciplinary Action and Leave the Agency Than Their Peers Our analysis of VA data shows that individuals who filed a disclosure of misconduct with the OSC received disciplinary action, and left the agency, at a higher rate than the peer average for the rest of VA. We identified 135 disclosure cases alleging misconduct that were received by the OSC between calendar years 2010 and 2014. Of the 135 disclosures, 130 were made nonanonymously by 129 employees. We compared the 129 employees who made nonanonymous disclosures to the PAID information system using the complainants' information. As shown in table 15, on average approximately 1 percent of all employees in the VA roster received an adverse action in any given fiscal year. For the 129 nonanonymous whistle-blowers, we found that approximately 2 percent received an adverse action in the fiscal year prior to their disclosure, 10 percent received an adverse action in the fiscal year of their disclosure, and 8 percent received an adverse action in the year subsequent to their disclosure. While the fact that nonanonymous whistle-blowers faced higher rates of adverse action subsequent to their disclosure than the VA population as a whole is consistent with a pattern of retaliation, it is only an indication that retaliation could be occurring. Our analysis also showed that among employees who could be matched to the PAID end-of-year roster, attrition rates were higher for those individuals who filed a nonanonymous disclosure with the OSC. On average, approximately 9 percent of all VA employees on the end-of-year roster in one fiscal year were not on the subsequent year's roster. In contrast, 66 percent of the 129 nonanonymous whistle-blowers did not appear in the subsequent year's roster.
Attrition rates were higher among employees who had filed a disclosure than among their peers who had not filed disclosures, for all fiscal years in our review (see table 16). Our analysis did not confirm the reasons for disciplinary action or attrition involving any of the 129 employees who made nonanonymous disclosures to the OSC. According to VA officials, employees who have a history of poor performance or conduct may be more likely to file a disclosure with the OSC or allege misconduct, which could explain some of the disparities between whistle-blowers and other employees. However, we also could not rule out instances where retaliation by senior officials may have occurred after misconduct was disclosed. Testimony of Whistle-Blowers Describes Retaliation and Lack of Understanding of the Disclosure Process The Civil Service Reform Act of 1978, as amended, states, among other things, that federal personnel management should be free from prohibited personnel practices (PPP). The law also authorizes the OSC to investigate allegations involving PPP that include reprisals against employees for the lawful disclosure of certain information pertaining to individuals who engage in such conduct or other wrongdoing. According to Standards for Internal Control in the Federal Government, laws and regulations may require entities to establish separate lines of communication, such as whistle-blower and ethics hotlines, for communicating confidential information. Management informs employees of these separate reporting lines, how they operate, how they are used, and how the information will remain confidential. Reporting lines are defined at all levels of the organization and provide methods of communication that can flow down, across, up, and around the structure. Our interviews with six VA whistle-blowers who claim to have been retaliated against provided anecdotal evidence that retaliation may be occurring.
Whistle-blowers we spoke to alleged that managers in their chain of command took a number of actions, which were not traceable, to retaliate against the whistle-blowers after they reported misconduct. These alleged actions included reassigning whistle-blowers to other duty locations, denying them access to computer equipment necessary to complete assignments, and socially isolating them from their peers, among other things. Whistle-blowers we spoke to also expressed concerns regarding the lack of guidance available to employees about how to file a disclosure through VA and the OSC. Whistle-blowers stated that employees are not provided adequate information on how to document or file a claim of misconduct or retaliation. Employees can file disclosures regarding misconduct and complaints of retaliation through multiple reporting lines. As mentioned previously, however, the OSC will generally not refer a disclosure to the Secretary under its statutory process if the OIG has conducted or is already conducting an investigation of that particular complaint. Thus, whistle-blowers may limit their chance of having an independent, non-VA entity oversee their complaint if they file a complaint with the OIG first. The Department of Veterans Affairs Accountability and Whistleblower Protection Act of 2017 requires the Secretary, in coordination with the Whistleblower Protection Ombudsman, to provide training regarding whistle-blower disclosures to each employee of VA. This information shall include, among other items, an explanation of each method established by law by which an employee may file a whistle-blower disclosure, an explanation that the employee may not be prosecuted, or have reprisal taken against him or her, for disclosing information, and language that is required to be included in all nondisclosure policies, forms, and agreements. The Secretary shall also publish a website displaying the rights of an employee making a whistle-blower disclosure.
In August 2017, VA began providing additional information on its website for potential whistle-blowers who wish to report criminal or other activity to the OIG. The information provided focuses on reporting misconduct to the OIG and describes other reporting lines established by law through which an employee may file a whistle-blower disclosure, such as directly to an immediate supervisor or to the OSC. In addition, the information provided explains the process after misconduct is reported through the OIG Hotline, but does not clarify the process for referred disclosure cases received from the OSC. As mentioned previously, OIG officials stated that a disclosure made to the OSC or the OIG can be referred back to the facility or program office where the allegation originated, which may compromise confidentiality. Consequently, employees may not be aware that their information may be shared among the OSC, the OIG, OAWP, or VA facility and program offices when a disclosure is made to the OSC. Adequately communicating the investigative process to employees may affect their decision to report wrongdoing. Without a clear understanding of the lines for reporting misconduct and how they operate, whistle-blowers may be uncertain as to their options for reporting misconduct, which increases the risk that they may not report workplace misconduct. According to OSC, it has learned through its cases that OAWP has a practice of allowing VA employees who are the subjects of allegations brought forward by whistle-blowers to review or participate in investigations, or both, which could make the whistle-blower feel uncomfortable or intimidated. This practice has led to confusion regarding the role and responsibilities of OAWP personnel. OAWP's use of VA employees who are employed at the facility under investigation in the review of allegations creates the possibility of a conflict of interest or an appearance of a conflict of interest.
For example, in a case OSC described in its comments on a draft of our report, an OAWP representative who was also associated with the human-resource office at the VISN that oversees the whistle-blower's facility placed the whistle-blower under oath and questioned her about issues unrelated to the referred allegations. OSC has since sought clarification of OAWP's role and the OAWP employee's possible connection to the VISN. Conclusions While VA collects data on some types of disciplinary actions, it is limited in its ability to use those data because it does not collect all misconduct and associated disciplinary-action data through a single information system, or multiple interoperable systems. Absent a process to collect such data department-wide, VA does not have the ability to analyze and report data systematically. In addition, the data currently collected are not always reliable or useful. The inclusion of appropriate documented guidance and standardized field definitions would help to ensure that VA collects reliable misconduct and associated disciplinary-action data. With high-quality information that is accurate and comprehensive, VA management would be better positioned to make knowledgeable decisions regarding the extent of misconduct occurring and how it was addressed, department-wide. VA has not ensured that program and facility human-resources personnel adhere to policy governing documentation contained within evidence files to support conclusions reached. In addition, VA often had no record of the evidence involved with the adjudication of these actions and could not verify whether these individuals received reasonable and fair due process. The absence of documentation in some files also raises the possibility that VA may not always be in compliance with its procedures governing the adjudication of alleged employee misconduct.
Additionally, ensuring that human-resources personnel adequately inform employees of their rights during the adjudication process would provide them with a reasonable opportunity to present their case when preparing their defense. VA also does not consistently adhere to OPM and NARA guidance in defining a specific retention period for adverse action files. This results in an inconsistent retention of these files across VA, which complicates department-wide analysis. VA's inconsistent adherence to the standards provided by the OIG to facilities and program offices for investigating and resolving misconduct cases increases the risk that misconduct cases are not handled appropriately. Additionally, the lack of verification of responses received to ensure documentation supports findings and recommendations has contributed to evidence that does not always meet the requirements outlined by the OIG. Finally, timely responses are not consistently provided when facility and program offices report findings to the OIG's Hotline Division. OAR did not monitor whether substantiated instances of misconduct involving senior officials received disciplinary action. OAR's Legacy Referral Tracking List also did not accurately reflect the disciplinary action that was decided based on the results of the investigation. When disciplinary actions are taken in response to findings of misconduct but are not entered within an appropriate information system, or are inaccurately recorded, it is more difficult to monitor whether disciplinary actions have been implemented in substantiated instances of misconduct involving senior officials. As demonstrated, this may result in no action being taken for substantiated misconduct or in the previous penalties not having the desired corrective effect for repeat offenders. There is also an increased risk that substantiated misconduct will go unaddressed if there is no recommendation for corrective action.
Further, VA does not have internal controls to ensure adherence to proper separation-of-duty standards involving the removal of an employee. Such controls would minimize the risk of abuse when officials act as both proposing and deciding officials. In addition, VA does not have oversight measures to ensure that all allegations of misconduct referred by the OIG to facility and program offices are investigated by an entity outside the control of the facility or program office involved in the misconduct. The investigation of allegations of misconduct by the program office or facility where the complaint originated may present the appearance of a conflict of interest because managers and staff at facilities may investigate themselves or allegations in which they may have a personal stake or bias. Therefore, the risk that the results of the investigation are minimized, or not handled adequately, is increased. VA's newly developed process to communicate the scope of its reviews that involve matters referred by the OSC to the Office of the Secretary will be important to ensure that any allegations outside the purview of the OIG's investigation are fully addressed by a departmental entity in accordance with OSC requirements. Further, the OIG's inability to locate investigative documentation prevents a third party from verifying whether the OIG examined the disclosures, whether any recommendations were addressed, or whether appropriate disciplinary action was taken for these cases. This lack of information directly affects management's ability to make sound decisions relating to investigative matters. According to OIG officials, a spreadsheet is being used for tracking case numbers associated with disclosures, but the OIG plans to implement a tracking process within the new system. Nonanonymous whistle-blowers faced higher rates of adverse action subsequent to their disclosure than the VA population as a whole.
In addition, these individuals also had attrition rates higher than their peers who had not filed a disclosure. The disparities between whistle-blowers and other employees may be an indication that retaliation by senior officials may have occurred after misconduct was disclosed. Although VA has started to provide additional information for potential whistle-blowers who wish to report criminal or other activity to the OIG, VA does not have a process to inform employees of how their information may be shared between organizations when misconduct is reported. Without a clear understanding of how the lines for reporting misconduct operate, whistle-blowers may be uncertain as to their options for reporting misconduct, increasing the risk that they may not report workplace misconduct. Recommendations for Executive Action We are making the following 16 recommendations to VA. The Secretary of Veterans Affairs should develop and implement guidance to collect complete and reliable misconduct and associated disciplinary-action data department-wide, whether through a single information system, or multiple interoperable systems. Such guidance should include direction and procedures on addressing blank data fields, lack of personnel identifiers, and standardization among fields, and on accessibility. (Recommendation 1) The Secretary of Veterans Affairs should direct applicable facility and program offices to adhere to VA's policies regarding employee misconduct adjudication documentation. (Recommendation 2) The Secretary of Veterans Affairs should direct the Office of Human Resource Management (OHRM) to routinely assess the extent to which misconduct-related files and documents are retained consistently with VA's applicable documentation requirements.
(Recommendation 3) The Secretary of Veterans Affairs should direct OHRM to assess whether human-resources personnel adhere to basic principles outlined in VA Handbook 5021 when informing employees of their rights during the adjudication process for alleged misconduct. (Recommendation 4) The Secretary of Veterans Affairs should adhere to OPM and NARA guidance and establish a specific record-retention period for adverse action files. In doing so, the Secretary should direct applicable administration, facility, and program offices that have developed their own record-retention schedules to then adhere to the newly established record-retention period. (Recommendation 5) The Department of Veterans Affairs (VA) Inspector General should revise its policy to include a requirement to verify whether evidence produced in senior-official case referrals demonstrates that the six elements required in VA Directive 0701 have been addressed. (Recommendation 6) The Secretary of Veterans Affairs should direct the Office of Accountability and Whistleblower Protection (OAWP) to review responses submitted by facility or program offices to ensure that evidence produced in senior-official case referrals demonstrates that the six elements required in VA Directive 0701 have been addressed. (Recommendation 7) The Secretary of Veterans Affairs should direct OAWP to issue written guidance on how OAWP will verify whether appropriate disciplinary action has been implemented for all substantiated misconduct by senior officials. (Recommendation 8) The Secretary of Veterans Affairs should direct OAWP to develop a process to ensure that disciplinary actions proposed in response to findings of misconduct are recorded within appropriate information systems to maintain their relevance and value to management for making decisions, and to take steps to monitor whether the disciplinary actions are implemented.
(Recommendation 9) The Secretary of Veterans Affairs should direct OAWP to issue written guidance on how OAWP will review the disposition of accountability actions for all substantiated misconduct cases involving senior officials resulting from investigations. (Recommendation 10) The Secretary of Veterans Affairs should implement internal controls to ensure that adherence to separation-of-duty standards involving the removal of an employee is consistent with policy. (Recommendation 11) The Secretary of Veterans Affairs should develop oversight measures to ensure all investigations referred to facility and program offices are consistent with policy and reviewed by an official independent of and at least one level above the individual involved in the allegation. To ensure independence, referred allegations of misconduct should be investigated by an entity outside the control of the facility or program office involved in the misconduct. (Recommendation 12) The VA Inspector General, in consultation with the Assistant Secretary of OAWP, should develop a process to ensure that OSC case numbers are linked to the investigative case number and final report. (Recommendation 13) The Secretary of Veterans Affairs should direct OAWP to develop a time frame for the completion of published guidance that would establish an internal process to monitor cases referred to facility and program offices. (Recommendation 14) The Secretary of Veterans Affairs should ensure that employees who report wrongdoing are treated fairly and protected against retaliation. (Recommendation 15) The Secretary of Veterans Affairs should direct OAWP to develop a process to inform employees of how reporting lines operate, how they are used, and how the information may be shared between the OSC, the OIG, OAWP, or VA facility and program offices when misconduct is reported.
(Recommendation 16)

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Veterans Affairs (VA), VA Office of Inspector General (OIG), and the Office of Special Counsel (OSC) for review and comment. In its comments, VA concurred with nine of our recommendations and partially concurred with five (see app. VI for a copy of VA's letter). Regarding our recommendations to the Inspector General, the OIG concurred with one recommendation and partially concurred with the other. The OIG also provided comments on our findings (see app. VII for a copy of the OIG's letter). We received technical comments by e-mail from OSC's Principal Deputy Special Counsel, which we incorporated in the report as appropriate. Regarding VA's comments, in its response to our first recommendation that the Secretary develop and implement guidance to collect complete and reliable misconduct and associated disciplinary-action data department-wide, VA concurred and outlined steps it plans to take to address our recommendations. These steps include the creation of new policies to address blank data fields, lack of personnel identifiers, lack of standardization among fields, and accessibility issues related to misconduct and associated disciplinary-action data department-wide. The target date for system implementation, dependent on approved funding and acquisition-related requirements, is January 1, 2020. On our second recommendation, that the Secretary direct applicable facility and program offices to adhere to VA's policies regarding employee-misconduct adjudication documentation, VA concurred. It stated that a memorandum will be distributed to reiterate facility and program-office requirements to adhere to VA Handbook 5021, Employee/Management Relations, no later than October 1, 2018.
VA also concurred with our third recommendation, that the Secretary direct the Office of Human Resource Management (OHRM) to routinely assess the extent to which misconduct-related files and documents are retained. According to VA, OHRM will assess, during periodic Oversight and Effectiveness Service reviews, the extent to which misconduct-related files and documents are retained. The first assessment is to be incorporated into the fiscal year 2019 Oversight and Effectiveness Service schedule no later than November 1, 2018. VA concurred with our fourth recommendation, that the Secretary direct OHRM to assess whether human-resources personnel adhere to basic principles outlined in VA Handbook 5021. VA stated that OHRM will assess, during periodic Oversight and Effectiveness Service reviews, whether human-resources and administration personnel adhere to basic principles outlined in VA Handbook 5021. The first assessment is to be incorporated into the fiscal year 2019 Oversight and Effectiveness schedule no later than November 1, 2018. In its response to our fifth recommendation, that the Secretary adhere to Office of Personnel Management (OPM) and National Archives and Records Administration (NARA) guidance and establish a specific record-retention period for adverse-action files, VA concurred and indicated that the Human Resources and Administration Assistant Secretary will establish VA guidance regarding the retention period for adverse-action files. In addition, the Human Resources and Administration Assistant Secretary is to advise applicable administration, facility, and program offices that have developed their own record-retention schedules to adhere to the newly established directive. According to VA, the directive will be established no later than November 1, 2018.
VA partially concurred with our seventh recommendation, that the Secretary direct departmental heads to review responses submitted by facility or program offices to ensure evidence produced in senior-official case referrals demonstrates that the six elements required in VA Directive 0701 have been addressed. VA stated that the process described in our report pertaining to OIG findings or results will be changed to require all such reports to be submitted to OAWP. VA also indicated that it expects to publish new guidance by October 1, 2018, that will require the Office of Accountability and Whistleblower Protection (OAWP) to review responses and recommendations from facilities or program offices. Given VA’s comments, we have revised our draft recommendation to have the Secretary direct OAWP, not the department heads, to ensure evidence demonstrates that the six elements have been addressed. VA also partially concurred with our eighth recommendation, that the Assistant Secretary of OAWP review all substantiated misconduct by senior officials to verify whether disciplinary action has been implemented. VA stated that all substantiated misconduct by senior leaders in VA is handled by OAWP from intake, through investigation, working with the proposing and deciding officials. VA also stated that it expects to publish written guidance by October 1, 2018, that will clarify how OAWP will work with the appropriate servicing personnel office to ensure that the recommended disciplinary actions decided are implemented for substantiated misconduct involving senior officials. Given VA’s comments, we have revised our draft recommendation to have the Secretary of Veterans Affairs direct OAWP to issue written guidance on how OAWP will verify that appropriate disciplinary action has been implemented for all substantiated misconduct by senior officials. 
VA partially concurred with our ninth recommendation, that the Assistant Secretary of OAWP develop a process to ensure disciplinary actions proposed are recorded within appropriate information systems. VA stated that the VA-wide discipline tracking system currently used by OAWP will eventually be phased out. It added that once the Human Resources Information System (HRSmart) is capable of capturing and recording similar data, it will be used for this purpose. Accordingly, we have not revised our draft recommendation. VA partially concurred with our 10th recommendation, that the Assistant Secretary of OAWP assess all misconduct cases involving senior officials to ensure investigative reports with findings of substantiated misconduct include recommendations for action. According to VA, OAWP has instituted several processes since our review. VA plans to issue written guidance that outlines the process for the review and disposition of appropriate accountability actions for allegations of misconduct by senior officials by October 1, 2018. Given VA’s comments, we have revised our draft recommendation to have the Secretary of Veterans Affairs direct OAWP to issue written guidance on how OAWP will review the disposition of accountability actions for all substantiated misconduct cases involving senior officials resulting from investigations. In its response to our 11th recommendation, that the Secretary implement internal controls to ensure that separation-of-duty standards involving the removal of an employee are consistent with policy, VA concurred. It stated that it will also establish and distribute internal controls to ensure that separation-of-duty standards involving the removal of an employee are consistent with policy no later than November 1, 2018. 
VA partially concurred with our 12th recommendation, that the Secretary take steps to ensure independence of referred allegations of misconduct by requiring that investigations be conducted outside the control of the facility or program office involved in the misconduct. VA stated that OAWP is responsible for recording, tracking, reviewing, and confirming the implementation of recommendations from audits and investigations. However, VA did not address how it will ensure the independence of the entity responsible for conducting an investigation. As we discuss in our report, during the review OAWP officials stated that the process of referring cases of misconduct back to facilities and program offices where the misconduct occurred will continue. Accordingly, we have not revised our draft recommendation and believe implementation of it will help ensure independence. VA concurred with our 14th recommendation, that the Assistant Secretary of OAWP develop a time frame for the completion of published guidance for the development of an internal process to monitor cases referred to facility and program offices. VA provided an expected date of October 1, 2018, for publishing the internal VA guidance, with the subsequent Directive and Handbook to be published as rapidly as staff coordination permits. In its response to our 15th recommendation, that the Secretary ensure that employees who report wrongdoing are treated fairly and protected against retaliation, VA concurred. It stated that OAWP and OSC have developed a functional process to ensure whistle-blower protections are implemented, but did not indicate what the process entails. The VA Secretary has also delegated authority to the Executive Director, OAWP, to put individual personnel actions on hold if the actions appear motivated by whistle-blower retaliation. 
VA added that OAWP has also hired two whistle-blower program specialists specifically to increase awareness of whistle-blower protections and work with individuals who disclose employee wrongdoing to ensure they are treated fairly and protected from retaliation for their disclosures. VA concurred with our 16th recommendation, that the Assistant Secretary of OAWP develop a process to inform employees of how reporting lines operate. VA stated that it will provide whistle-blower training to all employees on a biennial basis, which will include the reporting lines for disclosures of wrongdoing, the manner in which disclosures flow once they are made, how information is shared among the whistle-blower entities, and what protections exist for those who disclose wrongdoing. Regarding our recommendations to the Inspector General, the OIG partially concurred with our sixth recommendation, to revise its policy to include a requirement to verify whether evidence produced in senior-official case referrals demonstrates that the six elements required in VA Directive 0701 have been addressed. The OIG indicated that VA Directive 0701 is currently being updated to require a written or electronic signature from the person preparing the responses as an attestation that the specific requirements of the directive were met. The OIG also indicated in its letter that the OIG's Hotline staff carefully review the case response but Hotline staff are not required to request an updated response from VA to address matters not necessary to the resolution of the referral. The OIG asserted that requesting an update would detract from the resources for other important VA activities. On page 4 of the OIG's letter, the OIG states that Hotline analysts are allowed to exercise some discretion in accepting responses that may include minor departures from the six elements.
We continue to believe that, in order to have a complete response to a referral, all six elements required by the directive should be addressed. In addition, Directive 0701 does not allow for the use of professional judgment to decide which elements to include or not to include in a response. While we agree that requiring a written or electronic signature from the person preparing the responses as an attestation will help ensure that the specific requirements of the directive were met, we maintain that not requiring Hotline analysts to review responses to ensure that all elements of the directive are addressed is inconsistent with the intent of the directive. In its response to our 13th recommendation, that the OIG develop a process to ensure that an OSC case number is linked to the investigative case number and the final report, the OIG concurred. It stated that it will engage with the Executive Director of OAWP to develop a process to ensure that OSC case numbers are linked to OIG and OAWP investigative case numbers, as appropriate, and linked to any final report of investigation. In addition to its response to recommendations, the VA OIG also raised a number of concerns with our findings. Page 1 of its letter summarizes some of these concerns and then provides more detail on each concern raised, starting on page 2. Our responses to each of these detailed concerns are provided below. The OIG stated that our report does not focus on the most important cases, but focuses primarily on case referrals regarding senior officials that were not handled by the OIG because the allegations were lower risk or because of resource constraints. In addition, the OIG stated that GAO risks presenting a skewed picture of the OIG's oversight work. We disagree with this characterization of our findings.
We requested that the OIG provide us with data from the OIG's Master Case Index (MCI) information system that would allow us to select a sample of cases, in accordance with the scope of our review. The OIG was unable to provide the requested information for several reasons. Instead, the OIG provided data from the OIG Hotline and Office of Investigations case-management systems (subsystems within MCI) that contained a limited number of fields for analysis and 23 cases pertaining to SES misconduct that were referred to VA for investigation during GAO's period of review. Therefore, as we discuss in the report, we reviewed only the 23 senior-official misconduct cases included in our report, because those were the only cases for which the OIG could provide related documentation. The OIG stated that we only reviewed a sample of just 23 case referrals from fiscal years 2011 through 2014. As described, we reviewed all 23 senior-official misconduct cases that were referred to VA for investigation that the OIG was able to provide us, not a sample. The OIG stated that our report inaccurately states that the extracts received from the MCI information system contained missing information. We disagree with this characterization of our findings. Our review included a comprehensive assessment of the reliability of the OIG's data. To conduct this assessment, we requested an explanation of each data field to clarify when fields are normally populated and how they are used. Our findings are consistent with the information provided in response to this request. For example, the OIG's response to our data-reliability assessment stated that the data field used to identify the type of allegations being investigated should never be blank. However, we found that field to be blank in some cases in the data that were provided to us, though the OIG asserted that the MCI information system is a relational database where each case may be associated with multiple allegations and codes.
In response to the OIG’s comments on our report, we requested supporting documentation to demonstrate that the fields analyzed during our period of review did not contain missing data. The OIG provided the MCI information system user’s manual that contains detailed procedures for accessing and entering data into the MCI information system, and a compilation of various internal documents. However, the documentation did not provide evidence of the completeness of data entered into the MCI information system as part of quality-assurance reviews performed by the OIG or other designated entity. Absent evidence of data-quality reviews aimed at assessing the accuracy and completeness of data contained in the MCI information system, we did not change the conclusions based on our previous analysis. The OIG stated that our report provides incomplete information regarding sampled cases and mischaracterized one of the OIG’s case referrals in the body of the report. We disagree with this characterization of our findings. Specifically, the OIG said that we inaccurately stated that a medical center director conducted the investigation into his own alleged misconduct and found no allegations were substantiated. The synopsis included in our report clearly articulates that the medical center director was named in the allegation for having received a similar complaint involving time and attendance abuse by a physician. The medical center director, who provided the response to the OIG, was implicated in the allegation as having not addressed a similar time and attendance complaint regarding the same physician 2 years earlier. The OIG did not provide any supporting documentation to demonstrate that the alleged time and attendance abuse allegations against the physician were not substantiated. The OIG stated that our report inaccurately stated that the medical center director conducted his own investigation of himself and found no allegations were substantiated. We disagree. 
In response to the OIG’s comments on our report, we requested that the OIG provide additional support used to determine that the medical center director did not investigate the allegation in which he was named. The additional case documentation provided by the OIG further reaffirmed our assessment that the medical center director performed his own investigation and found no allegations were substantiated. Additional documentation provided by the OIG indicated that the OIG referred the case to the Veterans Integrated Service Network (VISN) for a response. However, documentation we examined during the course of our audit, and the documentation provided in response to our draft report, indicates that the medical center director performed the investigation of the allegations and then the results were routed through the VISN back to the OIG. The OIG stated that routing the response through the VISN should address our concerns of independence. This process does not address our concerns regarding independence because VA Directive 0701 states that all responses to Hotline case referrals must contain evidence of an independent review by an official separate from and at a higher grade than the subject / alleged wrongdoer. In this case, the name of the medical center director who signed the facility response provided to the OIG was the same individual named in the allegations. The OIG stated that the report does not provide a balanced presentation of the rigor with which the OIG reviews all incoming Hotline contacts and case responses. We disagree with this characterization of our findings. As described above, the OIG was unable to provide comprehensive data to select a sample of OIG audits, evaluations, and inspections for review due to the limitations cited. 
We focused on misconduct involving senior officials, consistent with the scope of our review, and thoroughly reviewed all 23 senior-official misconduct cases that were referred to VA for investigation, which were the only cases that the OIG was able to provide. The OIG stated that the description for one of the cases included in appendix V of the draft report was incomplete because we misunderstood the OIG's process. We disagree with this characterization. In response to the OIG's comments on our report, we requested additional supporting documentation. The documentation provided reaffirmed our assessment that another medical center director performed his own investigation and found no allegations were substantiated. Similar to the case described above, the medical center director completed his own investigation and then routed the response through the VISN back to the OIG. In contrast to the previous case, however, the Hotline Workgroup reviewed the response to the OIG and found it to be insufficient. Specifically, the OIG noted that the medical center director who provided the response was the subject of the complaint, despite the response being directed to the VISN, and requested clarification. The VISN informed the OIG that it is not its policy for the complainant to have any involvement in the review and data submission on a case in which the complainant is involved. The VISN stated that while this did obviously occur in this instance, it has taken steps to ensure it does not occur in the future. The supporting evidence provided for the case included in appendix V also contradicts the OIG's previous assertion that routing a response through the entity with oversight (VISN) over the medical center director should have addressed GAO's concerns of independence. The OIG stated that our report misrepresents the OIG's adherence to internal policies for department responses. We disagree.
According to OIG statements on page 4 of the OIG's letter, Hotline analysts are allowed to exercise some discretion in accepting responses that may include minor departures from the six elements required in VA Directive 0701. We continue to believe that in order to have a complete response to a referral, all six elements required by the directive should be addressed. On the basis of our review, the OIG does not have an effective method to ensure that cases referred to VA are reviewed in accordance with VA Directive 0701. Of the 23 cases we reviewed, only 4 included the documentation needed to support VA's findings, and we could not identify a case that contained all six elements required in VA Directive 0701. This suggests that, contrary to the assertion in the OIG's response, the current OIG review process is not adequately resolving case referrals. In addition, VA Directive 0701 does not currently include a provision that would allow Hotline analysts to deviate from the six required elements. As stated in our report, the OSC also raised concerns regarding 40 percent of disclosure cases that were referred to VA facility and program offices. The OIG stated that much of the information in the draft report is dated and ignores system updates, specifically several key Hotline-related process improvements since 2014. Although our review began in 2015, we disagree with this characterization of our findings. In our report, we included relevant improvements to demonstrate where the OIG was able to provide support for those improvements. For example, our report discusses: (1) a new process for communicating the scope of reviews that involve matters referred by the OSC to the Office of the Secretary, (2) a description on the VA website of the process for employees who wish to report criminal or other activity to the OIG, (3) a new Enterprise Management System, and (4) a new process for receiving whistle-blower disclosures by the Secretary.
In response to the OIG’s comments on our report, we requested additional documentation for any systems, practices, or personnel changes that have been implemented since 2011, including improvements to Hotline-related processes since 2014 that were not included in our report. In response, the OIG provided a copy of the OIG’s organizational chart (current as of Apr. 23, 2018), described the oversight responsibilities of each OIG component, and summarized the pertinent staff positions within each component. On the basis of this documentation, we identified a new office (the Office of Special Reviews), a promotion, staff reassignments, and numerous vacancies during our review period. However, the OIG did not provide evidence of any measures to improve the MCI information system, case-referral processes, or relevant staff roles that were not already included in our report. As described above, the OIG was unable to provide comprehensive data to select a sample of OIG audits, evaluations, and inspections for our initial review due to the limitations cited. In response to the OIG’s comments on our report, we requested documentation related to any significant changes that have been made to the MCI information system that allows the OIG to identify all allegations of misconduct for export and analysis. The OIG provided additional information regarding overall departmental achievements that are highlighted in the OIG’s Semiannual Report to Congress, and other products from its website published between fiscal years 2011 through 2018. We recognize the OIG’s broader administrative and oversight work described in the published reports. However, this information does not address changes specifically made to the MCI information system that would enable the OIG to analyze cases pertaining to alleged misconduct by senior officials that we requested. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5045 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to determine the extent to which the Department of Veterans Affairs (VA) (1) collects reliable information associated with employee misconduct and disciplinary actions that is accessible and could be used to analyze misconduct department-wide; (2) retains documentation that demonstrates VA adheres to its policies when adjudicating cases of employee misconduct; (3) ensures allegations of misconduct involving senior officials are reviewed in accordance with VA investigative standards, and these officials are held accountable; (4) has procedures to investigate whistle-blower allegations of misconduct; and (5) the extent to which data and whistle-blower testimony indicate whether retaliation for disclosing misconduct occurs at VA. For the first objective, we obtained VA employee misconduct data from 12 information systems operated by various VA components covering October 2009 through July 2017, where available. To determine the reliability of VA's misconduct data, we analyzed the contents of the 12 information systems operated by various offices across VA. These data encompass each of the three major administrations that constitute VA—the National Cemetery Administration (NCA), Veterans Benefits Administration (VBA), and Veterans Health Administration (VHA).
We selected the information systems based on our discussions with VA officials and staff who oversee the data, through which we identified databases capable of collecting information pertaining to either employee misconduct or disciplinary actions. Data fields were selected based on whether they would provide beneficial information to better understand the disciplinary process. VA's Personnel and Accounting Integrated Data (PAID) system, which was developed to track payroll actions, contains information about adverse disciplinary actions that affect employee salary department-wide. We obtained an extract of all adverse disciplinary actions from the PAID system. We assessed the reliability of each system for the purposes of identifying and tracking misconduct cases. To do this, we performed electronic tests on the 12 information systems to determine the completeness and accuracy of the fields contained in the data files. We also submitted to the overseeing offices for all 12 information systems general data-quality questions regarding the purpose of the data, their structure, definitions and values for certain fields, automated and manual data-quality checks to ensure the accuracy of the data, and limitations. As discussed further, the data were generally not reliable for a department-wide assessment of all misconduct and disciplinary actions due to the lack of completeness and compatibility of the data across all information systems. VA staff could not confirm whether some of the missing data we identified were artifacts of the database extraction process VA used to assemble the data files we used in our review. Despite challenges with aspects of the data, we found the data sufficiently reliable for conducting analysis where fields were populated and field-definition concurrence was obtained from program offices. For the second objective, we selected a generalizable stratified random sample of 544 misconduct cases from October 2009 through May 2015.
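The electronic field-completeness tests described above can be illustrated with a minimal sketch that computes the share of blank values per field in an extracted data file. The field names and records below are hypothetical stand-ins, not actual VA data.

```python
# Illustrative sketch of an electronic completeness test: count the share of
# blank values in each field of an extracted data file. Field names and
# records are hypothetical examples, not actual VA data.

def blank_rates(records, fields):
    """Return the fraction of records with a blank value for each field."""
    rates = {}
    for field in fields:
        blanks = sum(1 for r in records if not str(r.get(field, "")).strip())
        rates[field] = blanks / len(records)
    return rates

sample = [
    {"case_id": "1", "allegation_type": "time and attendance", "action": "reprimand"},
    {"case_id": "2", "allegation_type": "", "action": "removal"},
    {"case_id": "3", "allegation_type": "misuse of funds", "action": ""},
]
rates = blank_rates(sample, ["case_id", "allegation_type", "action"])
```

A check of this kind flags fields, such as the allegation-type field discussed earlier, that program offices say should never be blank but that nonetheless contain empty values.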
Where available, we reviewed the employees' disciplinary-action files and Electronic Official Personnel Folders to determine the extent to which VA's actions were consistent with disciplinary policy outlined in VA Handbook 5021, Employee/Management Relations. These data encompass each of the three major administrations that constitute VA—NCA, VBA, and VHA. We determined the data to be sufficiently reliable for analysis of disciplinary actions affecting salary that resulted from misconduct that was not reported to supervisors directly by employees. Accordingly, our sample only includes misconduct cases that resulted in a change in salary or were reported to departmental organizations within the 12 information systems selected. We developed a data-collection instrument to document the results of our case reviews. We revised our data-collection instrument to address issues found during the course of our analysis, and developed a companion document that outlined the decision rules for reviewing cases. We also designated two primary reviewers to ensure the decision rules were consistently applied across all cases. Our review of laws and regulations revealed that disciplinary rules sometimes vary depending on whether employees fall under Title 5, Title 38, or hybrid Title 5 and Title 38 hiring authority. To minimize confusion associated with these differences, we incorporated criteria into our data-collection instrument. In addition, we were unable to obtain complete case information for 25 percent of the cases. For these cases, we obtained direct access to the Office of Personnel Management's (OPM) Electronic Official Personnel Folders system to attempt to recover some of the missing information. Ultimately, we were unable to complete our review for 10 percent of cases in our sample because of missing files.
In addition to reporting missing case information, we used our generalizable analysis results to project VA-wide figures for several data elements that were not in compliance with VA policy. Unless otherwise noted, estimates in this report have a margin of error of +/-7.4 percentage points or less for a 95 percent confidence interval. For the third objective, we analyzed data from the Office of Accountability Review (OAR) Legacy Referral Tracking List and VA Office of Inspector General (OIG) case-referral and investigative case-management systems, and we selected cases for in-depth review. We selected these two systems based on discussions with VA officials who were knowledgeable with databases that have the capacity to track misconduct information pertaining to senior officials. The OAR Legacy Referral Tracking List comprises referrals from January 2011 through May 2015. The OIG provided 23 case-referral files involving senior officials from calendar years 2011 through 2014. As part of our review of the OIG case files, we evaluated specific data elements contained in VA’s response documents using VA policy for referring and reviewing misconduct cases. We assessed the reliability of the OAR Legacy Referral Tracking List and OIG case-management systems for the purposes of identifying and tracking misconduct cases. To do this, we performed electronic tests on each database to determine the completeness and accuracy of the fields contained in the data files, including senior-official indicators. Where feasible, we opted to match individual datasets to PAID to determine whether disciplinary actions were administered as prescribed. We also submitted to OAR and the OIG general data-quality questions regarding the purpose of the data, their structure, definitions and values for certain fields, automated and manual data-quality checks to ensure the accuracy of the data, and limitations. 
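The confidence-interval figure cited above for the generalizable sample can be illustrated with the standard margin-of-error formula for a sample proportion. This is a sketch only: it assumes simple random sampling, whereas the report's stratified design also incorporates stratum weights and design effects, so the report's stated bound of +/-7.4 percentage points is not reproduced by this simplified formula.

```python
import math

def moe_95(p, n):
    """95 percent confidence margin of error for a sample proportion under
    simple random sampling (z = 1.96). A stratified design like the one
    described in this appendix would also incorporate stratum weights and
    design effects, which this sketch omits."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Worst-case proportion (p = 0.5) for a sample of 544 cases:
moe = moe_95(0.5, 544)  # roughly 0.042, i.e., +/- 4.2 percentage points
```

The margin of error shrinks as the sample grows and is largest at p = 0.5, which is why worst-case bounds are typically quoted at that value.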
On the basis of the responses to these data-quality questions and our electronic testing, we found the OAR data to be sufficiently reliable for conducting analysis where fields were sufficiently populated. For the OAR data, we matched the persons of interest to adverse-action files from PAID to determine whether adverse disciplinary actions were administered as prescribed during the available time frame (January 2011 through May 2015). We also obtained VA's response documents for the 23 case-referral files provided by the OIG to evaluate whether VA was adhering to its own policy for referring and reviewing misconduct cases. Through our analysis of the OAR Legacy Referral Tracking List, we identified illustrative case examples of misconduct involving senior officials. Further, based on our evaluation of the 23 OIG case referrals using VA's referral policy, we developed several illustrative case examples. For the fourth objective, we interviewed senior officials from VA and the Office of Special Counsel (OSC) responsible for investigating whistle-blower complaints. We obtained the OSC's procedures for referring disclosure complaints and VA's policy for investigating these complaints once received at the agency. In addition, we obtained whistle-blower disclosure data from the OSC covering calendar years 2010 through 2014. To determine the reliability of the data, we conducted electronic testing and traced data elements to source documentation. We determined the data to be sufficiently reliable to identify the total number of cases that were investigated by the OIG or referred to facility and program offices. We also observed a course to assess VA's training provided to VA employees conducting investigations.
We identified 135 OSC disclosure cases for analysis based on two criteria: (1) they contained at least partial complainant information (i.e., the allegations were not anonymously reported or could be identified with supplemental information) and (2) they contained an indicator that the case had been closed by the OSC pending an ongoing investigation by VA or the OIG. These cases represent the universe of VA disclosures accepted by the OSC. Of the 135 disclosure cases referred to VA, 53 cases were referred to VA facility and program offices for further investigation. The remaining 82 disclosure cases indicated that they were investigated by the OIG. We reviewed the results of OSC's assessment of investigative documentation developed by VA for these whistle-blower disclosure cases. For the fifth objective, we analyzed the 135 whistle-blower disclosure cases obtained from the OSC. These cases represent the universe of VA disclosures accepted by the OSC from calendar years 2010 through 2014 that were investigated by VA. We obtained an extract of year-end rosters from the PAID system as of September for fiscal years 2010 through 2014, with a final extract through May 30, 2015. Finally, we interviewed representatives from whistle-blower advocacy groups, as well as established whistle-blowers who disclosed wrongdoing or retaliation at VA and who were referred to us by one advocacy group. Of the 135 disclosures received by the OSC, 129 employees made a combined total of 130 disclosures nonanonymously. For these 130 disclosure cases, we reviewed OSC and OIG investigative reports, as well as PAID roster files, to gather additional information to perform analysis of potential retaliation. We also interviewed six individual whistle-blowers with formal disclosure cases accepted by the OSC, indicating that the OSC had previously reviewed the case and determined that it contained sufficient evidence and merit to warrant further investigation.
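The two-criteria screen described above can be expressed as a simple filter; the case records and field names below are hypothetical stand-ins for the OSC data.

```python
# Hypothetical sketch of the two-criteria case screen: keep disclosure
# cases that (1) have at least partial complainant information and
# (2) were closed by OSC pending an ongoing VA or OIG investigation.
def select_cases(cases):
    return [
        c for c in cases
        if (c.get("complainant") or "").strip()          # criterion 1: not anonymous
        and c.get("status") == "closed-pending-investigation"  # criterion 2
    ]

# Hypothetical inputs: one case meets both criteria, the others fail one each.
cases = [
    {"id": 1, "complainant": "R. Poe", "status": "closed-pending-investigation"},
    {"id": 2, "complainant": "", "status": "closed-pending-investigation"},
    {"id": 3, "complainant": "M. Kay", "status": "open"},
]
print([c["id"] for c in select_cases(cases)])  # [1]
```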
Our analysis of potential retaliation comprised two parts. First, we compared the 129 employees associated with the selected OSC cases to the PAID rosters using the complainants' information to determine whether employees associated with the selected OSC cases were more likely to leave the agency. We identified the overall count and proportion (across years) of roster-matched employees who made a disclosure between fiscal years 2010 and 2014 but were not employed at VA the following fiscal year. Second, to determine whether employees associated with the selected OSC cases were more likely to receive disciplinary action, we also calculated the yearly totals and proportion of roster-matched employees identified above for whom a record existed in the PAID disciplinary action information system. We did this by comparing the proportion of employees who received one or more disciplinary actions in the year prior to their appearance in the roster, in the same fiscal year as the roster, and in the subsequent fiscal year. We also completed this analysis utilizing the PAID roster file to determine the yearly proportion of all VA employees who left the agency. On the basis of the results of our analysis, we reported by fiscal year the percentage of whistle-blowers who received disciplinary action or left VA at a higher rate than the overall VA population following a disclosure. To address all objectives, we interviewed senior officials from VA's major components responsible for investigating and adjudicating cases of employee misconduct. We also reviewed standard operating procedures, policy statements, and guidance for staff charged with investigating and adjudicating allegations of employee misconduct. We conducted this performance audit from January 2015 to May 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Department of Veterans Affairs (VA) Data Files and Corresponding Data Fields That Are Missing Information Office of Accountability Review (OAR)—Legacy Referral Tracking List This data file is designed to track referrals made to OAR, including allegations of misconduct related to senior officials. Through our analysis of this spreadsheet, we identified 11 fields out of 92 that were missing information that could be used to analyze misconduct. Complainant #1 (First Name) (20 percent of 1,245 blank)—According to OAR, this field is populated when there is a known complainant for a matter. Some matters referred to OAR may be anonymously disclosed and not contain complainant information. This file also contains a case-origin field that specifies whether a case was anonymous. Our review of both the complainant and case-origin fields indicated that only 11 percent of the complaints were generated from an anonymous source, and the remaining records should have included a complainant name. Disciplinary Action (96 percent of 1,245 blank)—This field should be populated to indicate whether a disciplinary action was taken after the completion of an investigation. Grade (97 percent of 1,245 blank)—According to OAR, this field should be populated with the grade of the Person of Interest (POI), if known. Our review of this field indicated that 97 percent were blank; therefore, we were unable to analyze the variation in grade level for officials who were the subject of complaints.
OAR Action (77 percent of 1,245 blank)—According to OAR, this field should be populated as an internal reference to describe what stage in the administrative process the matter was in when received. The lack of information did not allow for analysis of the types of actions taken for each case. Person of Interest (POI) (47 percent of 1,245 blank)—This field specifies the first and last name of the Department of Veterans Affairs (VA) employee who is the subject of an allegation. POI (Person of Interest) Last Name 1–5 (51 to 99 percent of 1,245 blank)—This field specifies the last name of the VA employee who is the subject of an allegation. There are a total of five person-of-interest (POI) fields for each matter. According to OAR, blank POI fields occur when the case has fewer than five POIs or the POI was not specified in the matter referred. Our review of the five POI fields indicated that more than half of the records did not contain at least one POI because no individual was specified. The lack of information did not allow for further investigation of senior-level officials involved in misconduct. Proposed Action (97 percent of 1,245 blank)—According to OAR, this field should be populated to notify OAR staff if any disciplinary action was proposed. Our review of this field indicated that 97 percent were blank, suggesting that very few high-level officials received corrective action or that the field was not consistently completed for each record. Due to the large share of blank values, the data posed limitations when analyzing how many senior officials received corrective action as a result of a complaint. Office of Accountability Review (OAR)—VA-Wide Adverse Employment Action Database This data file is designed to track misconduct and disciplinary actions taken against VA employees. Through our analysis of this spreadsheet, we identified 9 fields out of 21 that were missing information that could be used to analyze misconduct.
Action Taken (3 percent of 9,851 blank)—According to OAR, this field should be populated with the action that the deciding official takes, with the exception of pending actions. If there is a pending action, this field will remain blank. Our review of the action-taken field found three records that were annotated as a “pending decision” within this field, which indicates that there is an option for entering information into this field when there is an action pending and the field should never be blank. Admin Leave (30 percent of 9,851 blank)—According to OAR, this field should be populated if an employee is placed on administrative leave while an adverse action is pending. If an employee is not placed on administrative leave, the field may be left blank. Our review of the admin-leave field found that about 66 percent of the records were annotated as “no” within this field, which indicates that there is an option for entering information into this field when an employee was not placed on administrative leave. Date Proposed (11 percent of 9,851 blank)—According to OAR, this field is used when an adverse action is proposed for an employee. There are some actions that are not proposed, such as probationary terminations or admonishments that may be taken without being proposed, and therefore result in this field being blank. Our analysis found that about 95 percent of these records containing blank proposed date fields also had an entry in the proposed adverse-action field, which contained such entries as removals, suspensions, and demotions that require a proposed date. Deciding Official (14 percent of 9,851 blank)—According to OAR, this field should be populated with the name of the official who issued the action taken. Effective Date (5 percent of 9,851 blank)—According to OAR, this field should be populated with the date of the action taken. Some entries will not have an effective date if an entry is pending decision. 
Also, if no action is taken, the decision was counseling, or the proposed action was rescinded, this field may not have an effective date. Our review found that about 14 percent of the cases that included adverse actions, such as a suspension, removal, reassignment, or demotion, and that should have included a date of action, were blank. Offense 2 and 3 (79 and 95 percent of 9,851 blank)—According to OAR, these fields track the second- and third-most-significant charges against the employee when applicable. Proposing Official (16 percent of 9,851 blank)—According to OAR, this field should be populated with the name of the official who makes the proposed adverse action. Instances where a proposing official has left the agency at the time of entry and could not be found in the lookup feature that relies on the e-mail global address list may produce blank fields. Also, disciplinary actions that were taken without proposal would not have a proposing official. In these instances, the human-resources specialists who enter the actions are instructed to include the name in the other-comments box. Our review of these records indicated that a majority of records lacking an entry in the proposing-official field also lacked an annotation in the other-comments field to accurately identify the proposing official. Settlement (14 percent of 9,851 blank)—According to OAR, this field tracks whether a settlement agreement occurred. Office of Inspector General (OIG)—Master Case Index This data file is designed to collect allegations of criminal activity, waste, abuse, and mismanagement received by the OIG Hotline Division. Through our analysis of this information system, we identified one field out of seven that was missing information that could be used to analyze misconduct. Nature of Complaint (54 percent of 896 blank)—According to the OIG, this field should contain a brief description of the issue that most closely matches the allegation.
Each case can have more than one nature of complaint and corresponding administrative action, if any. OIG officials stated that this field identifies the type of allegations being investigated and should never be blank. Our review of these cases found that over half of the cases involving the OIG contained entries for administrative action taken, but the nature-of-complaint fields corresponding to these actions were blank. Veterans Benefits Administration (VBA)—Misconduct and Disciplinary Action Report This data file is designed to track misconduct and disciplinary action taken against VBA employees. Through our analysis of this spreadsheet, we identified 3 fields out of 20 that were missing information that could be used to analyze misconduct. Alleged Offense 2 and 3 (92 and 99 percent of 1,375 blank)—According to VBA officials, these fields should be populated if an individual is charged with multiple offenses or has additional offenses in the same reporting period. Typically, there is only one offense at the time of reporting. Sustained (52 percent of 1,375 blank)—According to VBA, this field should be populated if an offense is sustained at the time of reporting. Office of Accountability and Whistleblower Protection (OAWP)—VA-Wide Adverse Employment Action and Performance Improvement Plan Database This data file is designed to track all allegations of misconduct and associated disciplinary actions taken against VA employees. Through our analysis of this spreadsheet, we identified 8 fields out of 34 that were missing information that could be used to analyze misconduct. Deciding Official (3 percent of 5,571 blank)—According to OAWP, this field should be populated with the name of the official who makes the decision for adverse action. Detail Position (89 percent of 5,571 blank)—According to OAWP, this field should be populated with the position an employee was detailed to if removed from official position.
Offense 2 and 3 (69 and 91 percent of 5,571 blank)—According to OAWP, these fields track the second- and third-most-significant charges against the employee. If there are fewer than three charges, these fields are left blank. Offense 1 Sustained (14 percent of 5,571 blank)—According to OAWP, this field should be populated if an individual's first offense has been sustained. Offense 2 and 3 Sustained (73 and 91 percent of 5,571 blank)—According to OAWP, these fields should be populated if an individual's second and third offenses have been sustained. The majority of cases involve only one offense. Proposing Official (9 percent of 5,571 blank)—According to OAWP, this field should be populated with the name of the official who makes the proposed adverse action. Appendix III: Department of Veterans Affairs (VA) Data Files and Corresponding Data Fields That Lack Standardization Office of Resolution Management (ORM)—Complaints Automated Tracking System This data file tracks Equal Employment Opportunity (EEO) discrimination complaints. Through our analysis of this information system, we identified 1 field out of 66 that did not have standardization that could be useful to analyze misconduct. Employment—We found that the values for this field were not mutually exclusive, or independent of one another. For example, this field includes two distinct categories of information: employment status, such as full time or part time; and hiring authority, such as Title 5 or Title 38. This method of storing information resulted in undercounting each of the separate values due to the system's failure to account for expected overlap. For instance, an employee could be both a full-time and a Title 5 employee, and the field only tracks one or the other. ORM officials stated that this field has since been modified to capture more options to account for the overlap.
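Free-text variation of the kind catalogued in this appendix (for example, the same position title entered several different ways) can be surfaced with a simple normalization pass. The sketch below is illustrative only; the normalization rule and sample titles are assumptions, not VA's actual data.

```python
# Hypothetical sketch of exposing near-duplicate free-text values:
# group raw entries by a normalized key (lowercased, single-spaced,
# trailing 's' dropped from each word) and report groups with variants.
from collections import defaultdict

def group_variants(values):
    """Return normalized keys mapped to the sets of raw spellings they cover,
    keeping only keys with more than one distinct raw spelling."""
    groups = defaultdict(set)
    for v in values:
        key = " ".join(w.rstrip("s") for w in v.lower().split())
        groups[key].add(v)
    return {k: vs for k, vs in groups.items() if len(vs) > 1}

# Hypothetical position titles with the kind of variation described here.
titles = [
    "Veteran Service Representative",
    "Veterans Service Representative",
    "Registered Nurse",
    "RN",
]
variants = group_variants(titles)
print(variants)
```

Note that a rule this crude catches only spelling variants; abbreviations such as "RN" for "Registered Nurse" would still require a lookup table or manual review.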
Office of Accountability Review (OAR)—VA-Wide Adverse Employment Action Database This data file is designed to track misconduct and disciplinary actions taken against Department of Veterans Affairs (VA) employees. Through our analysis of this spreadsheet, we identified 1 field out of 21 that did not have standardization that could be useful to analyze misconduct. Position—We found the VA-Wide Adverse Employment Action Database contained variations within this field, such as multiple values for the "Cemetery Caretaker" position name. According to OAR, this field is a free-text field, and the office conducts manual searches to review and analyze position titles when needed. Our review found that the different variations in position titles made it difficult to determine the frequency and nature of allegations by position. Office of Inspector General (OIG)—Master Case Index This is an information system designed to collect allegations of criminal activity, waste, abuse, and mismanagement received by the OIG Hotline Division. Through our analysis of this data file, we identified 1 field out of 7 that did not have standardization that could be useful to analyze misconduct. Nature of Complaint—Our review of the Master Case Index file found variations of similar values in this field. For example, this field contained 21 different claim types pertaining to similar types of fraud, which made it difficult to assess the frequency and nature of claims entered into the system. OIG officials stated that they do not attempt to account for these variations or assess the frequency of use because they are assigned based on a "best match" to the allegations of the case. Veterans Benefits Administration (VBA) This data file is designed to track misconduct and disciplinary action taken against VBA employees. Through our analysis of this spreadsheet, we identified 1 field out of 20 that did not have standardization that could be useful to analyze misconduct.
Position—Our review found some duplication and overlapping values within this field. For example, the position title for "service representative" contained 21 similar categories with numerous variations in spelling (i.e., Veteran Service Representative vs. Veterans Service Representative, and Rating Veteran Service Representative vs. Rating Veterans Service Representative). We were unable to verify the number of distinct positions due to the lack of standardization within this field. National Cemetery Administration (NCA) This data file is a tracking spreadsheet for monitoring misconduct and disciplinary action workload. Through our analysis of this spreadsheet, we identified 5 fields out of 12 that did not have standardization that could be useful to analyze misconduct. Action Proposed/Decided/Taken—We were unable to analyze this data field because the action taken was tracked in a single field and updated with the most-recent action, rather than each distinct action being entered in a separate field. Consequently, we were not able to distinguish those cases where corrective action may have been taken, to verify whether the corrective action had been implemented. Current Status—We were unable to analyze this data field because it was not a standardized field. For example, we were unable to determine the total number of cases that were closed, open, or pending due to the variations in the data field (e.g., Open, open, open – pending, open-pending). Full Name of Employee, Grievant, Appellant, Complainant, Non-Employee—We were unable to distinguish whether the individual filing a complaint was an employee, grievant, appellant, complainant, or nonemployee because the information entered into this single field only provided the employee's full name and did not provide a distinction as to which category the record was assigned, as indicated by the field name. NCA Facility—We were unable to analyze this data field because it was not a standardized field.
For example, we were unable to run demographic information on the different facilities involved because this field contained erroneous information. Examples of erroneous information included the name of the Memorial Service Network in one case, and the region, rather than the facility name, in another. Supervisor Name—We were unable to analyze this data field because it was not a standardized field. For example, we were unable to determine the total number of supervisors that were associated with each case due to the variations in the names entered within this field, which included misspelled first or last names, addition/omission of middle initials, or no first name. Client Service Response Team (CSRT)—ExecVA This data file is a tracking spreadsheet for all allegations received by the VA Secretary regarding misconduct, patient care, or other wrongdoing. Through our analysis of this spreadsheet, we identified 1 field out of 9 that did not have standardization that could be useful to analyze misconduct. Subject—Our review of this tracking spreadsheet found over 380 different possible categories that could be assigned to one record. These categories contained a significant number of variations. For example, we found 38 different categories that contained possible EEO-related issues such as “EEO/Whistleblower,” “Potential EEO,” and “EEO Violations.” We were unable to distinguish the different subject categories for this field due to the lack of standardization. CSRT officials stated that ExecVA reports contain only data corresponding to specific search criteria. Office of Security and Law Enforcement (OS&LE)—Veterans Affairs Police System (VAPS) This is an information system for tracking allegations of misconduct at all VA facilities that include violation of law and misdemeanors. Through our analysis of this data file, we identified 3 fields out of 29 that did not have standardization that could be useful to analyze misconduct. 
Classification—We found that this field contained at least three different variations of assault categories (i.e., assault, assault-other, and assault-aggravated). VAPS officials stated that this field is determined and entered by the user. Crime Type—We found this field contained at least five different variations of alcohol-consumption categories, such as “entering premises under the influence” and “alcohol – under the influence.” VAPS officials stated that this field is determined and entered by the user. Final Disposition—We found this field contained at least two different variations of charge type (i.e., charged, charged – Issued Ticket), six different variations of open type (for example, open/referred to Court, open/cvb), and two different variations of closed type (i.e., closed, case closed). VAPS officials stated that this field is determined and entered by the user. Office of Accountability and Whistleblower Protection (OAWP)— VA-Wide Adverse Employment Action and Performance Improvement Plan Database This data file is designed to track all allegations of misconduct and associated disciplinary actions taken against VA employees. Through our analysis of this spreadsheet, we identified 1 field out of 34 that did not have standardization that could be useful to analyze misconduct. Position Title—We found this field contained at least 15 different variations of Registered Nurse, such as “Registered Nurse,” “Staff RN,” and “RN.” Appendix IV: Misconduct File Review Appendix V: Illustrative Case Examples of Senior-Official Misconduct during Calendar Years 2011 through 2014 Case 1 Allegations surrounding inadequate staffing, patient care, and safety at a Department of Veterans Affairs (VA) emergency room were investigated by the medical center director of the facility. The medical center director found that the inadequate patient care and safety issue was unsubstantiated based on a review of patient safety incidents for the last 6 months. 
The medical center director did not provide a copy of her review to support this conclusion. She also indicated that an external consultant was hired to assess staffing issues and found generally that improvements could be made to staffing to address surge capacity. The director stated the medical center was in the process of implementing the recommendations made by the consultant, but her response did not discuss the specific improvements planned or include the external consultant's report. Case 2 A fact-finding review was performed by a panel composed of VA Connecticut Healthcare System officials in response to alleged violations of law, gross mismanagement, and waste of funds that included the improper billing of services for a Las Vegas conference and paying contracts through a VA nonprofit corporation to handle such expenditures. The complaint specifically requested a cost-benefit analysis for the conference location. The response received from the program office stated that an outside accounting firm performs an annual financial audit of the VA nonprofit corporation and found no material issues. Neither a copy of the annual financial audit nor a cost-benefit analysis was provided in the response as support. Additionally, the response did not address allegations regarding the status of several essential positions vacated over the prior 3 years. Case 3 Allegations involved time-and-attendance abuse by a physician who was accused of not responding to calls from peers or coming into the clinic, in favor of his private practice. According to the complainant, physician assistants (PAs) examine the physician's patients at the clinic for him. The medical center director, who was also named in the allegation as having received a similar complaint against the physician 2 years earlier, reviewed the case against the physician and himself.
The medical center director's response claimed that the location indicated in the allegation was not a private practice, but rather a location where the physician reviews medical records and sometimes serves as an expert witness. He did not provide evidence to support his claim. The medical center director also stated that he had not received any reports against the physician for missed calls or clinics and that PAs are expected to participate in these activities. However, he did not provide the physician's work log, or the PA position descriptions showing that they are allowed to perform these functions autonomously. Finally, the medical center director claimed he did not recall the allegation made against the physician 2 years prior and neither formally substantiated nor disproved the current allegations against the physician. No recommendations were made. Case 4 The medical center director was accused of hiring an unqualified individual to a Quality Manager position due to their romantic relationship. The response received from the human-resources consultant noted that it was unusual to find a Nurse II manager with only an associate's degree, but that this was not illegal and the employee was qualified based on prior experience. Concerns were also raised about the medical center director's use of over $400 in government funds to "soundproof" the Quality Manager's office, including having panels attached to one wall and the hollow office door replaced with a solid door. The Chief of Engineering Services was interviewed regarding the request and stated it was an odd request, and the first time he was asked to soundproof an administrative employee's office. The response provided by the program office did not address why the director used government funds to soundproof the Quality Manager's office.
The response provided also did not address whether recommendations that the Quality Manager's retention allowance be reviewed for compliance and that she be counseled on appropriate office dress code were implemented. Case 5 The medical center director was alleged to have misrepresented a plan to track and provide mental-health services to veterans in non-VA hospitals and to have created a hostile work environment for African American veterans and employees. The medical center director investigated the allegations against himself and provided a response that was eventually submitted late to the OIG. His response indicated that a review was completed and all allegations were unsubstantiated. Several documents provided with his response showed that only 12 contacts were made to veterans with mental-health care needs during the requested 24-month period, the percentage of patients experiencing wait times greater than 14 days before receiving mental-health services averaged 18 percent, and two veteran suicides occurred. The medical center director did not address allegations of creating a hostile work environment for African American veterans and employees. Case 6 The medical center director improperly reannounced a vacancy in order to hire an individual with whom he allegedly had a close personal relationship to an Assistant Director position. He also requested that the master key to the facility be issued to her, against regulations. The allegation involving the master key was substantiated, but the Deputy Under Secretary who conducted the investigation stated that while there was no record of the key being returned, the key was returned and the general engineer brought the facility into compliance with VA regulations. Nonetheless, the Deputy Under Secretary found that a master key was issued in violation of policy, but no recommendations were made to the medical center director for corrective action.
Case 7 Allegations involved false patient wait-time documentation and abuse of authority. Specifically, a medical center director instructed staff to falsify patient wait times between follow-up appointments in order to meet VA’s 14-day timeliness metric. The investigation concluded that the false documentation allegation was substantiated, but attributed the cause to staff not understanding how to enter a follow-up appointment date into the system. It was also concluded that the correction of several hundred dates in the system improved the performance of the department for the national wait-time metric. However, no documentation was provided to (1) prove the medical center director had not abused his authority by instructing staff to review wait times greater than 14 days to determine how they could be reduced, and (2) support the conclusion that the original wait times were entered in error. Appendix VI: Comments from the Department of Veterans Affairs Appendix VII: Comments from the Department of Veterans Affairs Office of Inspector General Appendix VIII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Dave Bruno (Assistant Director), Erica Varner (Analyst in Charge), Hiwotte Amare, Chris Cronin, Carrie Davidson, Ranya Elias, Colin Fallon, Mitch Karpman, Grant Mallie, Anna Maria Ortiz, Sabrina Streagle, Reed Van Beveren, and April Van Cleef made key contributions to this report.
Why GAO Did This Study VA provides services and benefits to veterans through hospitals and other facilities nationwide. Misconduct by VA employees can have serious consequences for some veterans, including poor quality of care. GAO was asked to review employee misconduct across VA. This report reviews the extent to which VA (1) collects reliable information associated with employee misconduct and disciplinary actions, (2) adheres to documentation-retention procedures when adjudicating cases of employee misconduct, (3) ensures allegations of misconduct involving senior officials are reviewed according to VA investigative standards and these officials are held accountable, and (4) has procedures to investigate whistle-blower allegations of misconduct; and the extent to which (5) data and whistle-blower testimony indicate whether retaliation for disclosing misconduct occurs at VA. GAO analyzed 12 information systems across VA to assess the reliability of misconduct data, examined a stratified random sample of 544 misconduct cases from 2009 through 2015, analyzed data and reviewed cases pertaining to senior officials involved in misconduct, reviewed procedures pertaining to whistle-blower investigations, and examined a nongeneralizable sample of whistle-blower disclosures from 2010 to 2014. What GAO Found The Department of Veterans Affairs (VA) collects data related to employee misconduct and disciplinary actions, but fragmentation and data-reliability issues impede department-wide analysis of those data. VA maintains six information systems that include partial data related to employee misconduct. For example, VA's Personnel and Accounting Integrated Data system collects information on disciplinary actions that affect employee leave and pay, but the system does not collect information on other types of disciplinary actions. The system also does not collect information such as the offense or date of occurrence. 
GAO also identified six other information systems that various VA administrations and program offices use to collect specific information regarding their respective employees’ misconduct and disciplinary actions. GAO’s analysis of all 12 information systems found data-reliability issues, such as missing data, lack of identifiers, and lack of standardization among fields. Without collecting reliable misconduct and disciplinary action data on all cases department-wide, VA’s reporting and decision making on misconduct are impaired.

VA inconsistently adhered to its guidance for documentation retention when adjudicating misconduct allegations, based on GAO’s review of a generalizable sample of 544 out of 23,622 misconduct case files associated with employee disciplinary actions affecting employee pay. GAO estimates that VA would not be able to account for approximately 1,800 case files. Further, GAO estimates that approximately 3,600 of the files did not contain required documentation that employees were adequately informed of their rights during adjudication procedures—such as their entitlement to be represented by an attorney. The absence of files and associated documentation suggests that individuals may not have always received fair and reasonable due process as allegations of misconduct were adjudicated. Nevertheless, VA’s Office of Human Resource Management does not regularly assess the extent to which files and documentation are retained consistently with applicable requirements.

VA did not consistently ensure that allegations of misconduct involving senior officials were reviewed according to investigative standards and these officials were held accountable.
For example, based on a review of 23 cases of alleged misconduct by senior officials that the VA Office of Inspector General (OIG) referred to VA facility and program offices for additional investigation, GAO found VA frequently did not include sufficient documentation for its findings or provide a timely response to the OIG. In addition, VA was unable to produce any documentation used to close 2 cases. Further, OIG policy does not require the OIG to verify the completeness of investigations, which would help ensure that facility and program offices had met the requirements for investigating allegations of misconduct.

Regarding senior officials, VA did not always take necessary measures to ensure they were held accountable for substantiated misconduct. As the figure below shows, GAO found that the disciplinary action proposed was not taken for 5 of 17 senior officials with substantiated misconduct. As a result of June 2017 legislation, a new office within VA—the Office of Accountability and Whistleblower Protection—will be responsible for receiving and investigating allegations of misconduct involving senior officials.

VA has procedures for investigating whistle-blower complaints, but the procedures allow the program office or facility where a whistle-blower has reported misconduct to conduct the investigation. According to the OIG, it has the option of investigating allegations of misconduct or exercising a “right of first refusal” whereby it refers allegations of misconduct to the VA facility or program office where the allegation originated. VA does not have oversight measures to ensure that all referred allegations of misconduct are investigated by an entity outside the control of the facility or program office involved, which would ensure independence. As a result, GAO found instances where managers investigated themselves for misconduct, presenting a conflict of interest.
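The sample-based estimates above (roughly 1,800 unaccounted-for files and 3,600 under-documented files, projected from a review of 544 of 23,622 cases) rest on simple proportion arithmetic. The sketch below assumes simple random sampling and uses illustrative sample counts (41 and 83) chosen only to reproduce the reported magnitudes; they are not figures from the report, and GAO’s actual stratified design would weight each stratum by its own sampling fraction.

```python
def estimate_population_total(sample_count, sample_size, population_size):
    """Project a count observed in a random sample onto the full population.

    Assumes simple random sampling; a stratified design (as GAO used)
    would instead apply this per stratum and sum the stratum totals.
    """
    return population_size * sample_count / sample_size

POPULATION = 23_622  # misconduct case files with pay-affecting actions
SAMPLE = 544         # files GAO reviewed

# Illustrative (assumed) sample counts, not figures from the report:
missing_files = estimate_population_total(41, SAMPLE, POPULATION)       # ≈ 1,780
undocumented_files = estimate_population_total(83, SAMPLE, POPULATION)  # ≈ 3,604

print(round(missing_files, -2))       # reported as approximately 1,800
print(round(undocumented_files, -2))  # reported as approximately 3,600
```

In a stratified design, the same projection is computed per stratum with that stratum’s sampling fraction, and the totals are summed, which is why the published estimates need not match this naive calculation exactly.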
Data and whistle-blower testimony indicate that retaliation may have occurred at VA. As the table below shows, individuals who filed a disclosure of misconduct with the Office of Special Counsel (OSC) received disciplinary action at a much higher rate than the peer average for the rest of VA in fiscal years 2010–2014. Additionally, GAO’s interviews with six VA whistle-blowers who claim to have been retaliated against provided anecdotal evidence that retaliation may be occurring. These whistle-blowers alleged that managers in their chain of command took several untraceable actions to retaliate against them, such as denying them access to computer equipment necessary to complete assignments.

What GAO Recommends

GAO makes numerous recommendations to VA to help enhance its ability to address misconduct issues (several of the recommendations are detailed on the following page). GAO recommends, among other things, that the Secretary of Veterans Affairs develop and implement guidance to collect complete and reliable misconduct and disciplinary-action data department-wide, including direction and procedures on addressing blank fields, lack of personnel identifiers, and standardization among fields; direct applicable facility and program offices to adhere to VA’s policies regarding misconduct adjudication documentation; direct the Office of Human Resource Management to routinely assess the extent to which misconduct-related files and documents are retained consistently with applicable requirements; direct the Office of Accountability and Whistleblower Protection (OAWP) to review responses submitted by facility or program offices to ensure that evidence produced in senior-official case referrals demonstrates that the required elements have been addressed; direct OAWP to issue written guidance on how OAWP will verify whether appropriate disciplinary action has been implemented; and develop procedures to ensure that (1) whistle-blower investigations are reviewed by an official independent of and at least one level above the individual involved in the allegation, and (2) VA employees who report wrongdoing are treated fairly and protected against retaliation.

GAO also recommends, among other things, that the VA OIG revise its policy and require verification of evidence produced in senior-official case referrals. VA concurred with nine recommendations and partially concurred with five. In response, GAO modified three of the recommendations. The VA OIG concurred with one recommendation and partially concurred with the other. GAO continues to believe that both recommendations are warranted.
GAO-19-46
Background

Establishment, Mission, and Objectives of the Lab

In 2014, USAID established the Lab as a USAID bureau by merging and restructuring two offices—the Office of Science and Technology and the Office of Innovation and Development Alliances. According to USAID officials, the agency moved a number of the two offices’ core programs and activities, along with staffing functions, to the Lab. In a January 2014 notification, USAID informed Congress of its intent to establish the Lab and noted initial staffing levels, funding, and short-term plans. The Lab is generally subject to guidance pertaining to operating units and bureaus, including policies and procedures set out in USAID’s ADS. It also publishes and contributes to various performance and financial reports, such as USAID’s Annual Performance Plan and Report, which are provided to Congress and available to the public, according to Lab officials. The Lab was created to work collaboratively within USAID and with other government and nongovernment partners to produce development innovations, among other things. According to Lab officials, the Lab seeks to improve USAID’s ability to harness the power of science, technology, innovation, and partnerships (STIP) with private and public sectors by funding and scaling breakthroughs that would accelerate the completion of foreign policy and development goals. The Lab has a two-part mission:

1. Produce development breakthroughs and innovations by funding, testing, and scaling proven solutions that will affect millions of people.

2. Accelerate the transformation of development enterprise (i.e., to build the capacity of the public and private sectors to work in the development arena) by opening it to people everywhere with good ideas, promoting new and deepening existing partnerships, applying data and evidence, and harnessing scientific and technological advances.
The Lab’s mission, objectives, and goals are laid out in its strategic plan, which has evolved since the Lab’s creation. In fiscal years 2014 and 2015, the Lab operated under an initial strategy that focused on examining the delivery capabilities and constraints of current and ongoing Lab programs; prioritizing investments of time and resources; and confirming new activities and programs. The strategy for fiscal years 2016 through 2020 presents a results framework that includes the Lab’s two-part mission statement as well as five objective statements and corresponding intermediate result statements explaining how the Lab intends to achieve its goals (see fig. 1).

Structure of the Lab

The Lab, which is headed by an Executive Director, includes five centers—the Center for Development Research, the Center for Digital Development, the Center for Development Innovation, the Center for Transformational Partnerships, and the Center for Agency Integration—each focused on one of the Lab’s five strategic objectives. The Lab also includes two offices, the Office of Engagement and Communication and the Office of Evaluation and Impact Assessment, which provide support services. Figure 2 shows the Lab’s organizational structure. Table 1 describes each of the Lab’s centers and offices. In April 2018, the USAID Administrator announced agency reorganization plans that will affect the Lab. USAID leadership plans to create a new Bureau for Development, Democracy, and Innovation and a Bureau for Policy, Resources, and Performance. According to USAID, the new bureaus will combine existing operating units that provide technical and program design support and expertise into a “one-stop shop” of consultancies that USAID missions can utilize.
The new bureaus will absorb the Lab, along with other units, and track its contributions using new metrics that measure customer service to determine whether missions and bureaus have access to the right expertise at the right time, according to the USAID Administrator. As of October 2018, USAID had not indicated time frames for implementing the reorganization plans.

Funding Mechanisms for Lab Activities

To achieve its objectives and goals, the Lab funds and manages awards (which result in activities) that cover STIP programming as well as the Lab’s operations. The Lab uses a number of different mechanisms—for example, broad agency announcement procedures, annual program statements, and requests for applications—when making awards, which include grants, cooperative agreements, and contracts.

Global Development Alliance

A Global Development Alliance (GDA) is a partnership involving the U.S. Agency for International Development (USAID) and the private sector. GDA partners work together to develop and implement activities that leverage and apply assets and expertise to advance core business interests, achieve USAID’s development objectives, and increase the sustainable impact of USAID’s development investments. According to USAID, the value of private sector expertise, capabilities, and resources contributed to an alliance must at least equal, and in general should significantly exceed, the value of resources provided by USAID.

The Lab also holds competitions focused on new ideas, approaches, and technologies to address development problems, and awards prizes to individuals or groups that meet the competition’s requirements. Some awards include funding from USAID as well as cash or in-kind contributions from non-USAID sources in the private or public sector. The Lab refers to the use of all non-USAID contributions as leverage and reports leverage as a programmatic performance indicator.
According to USAID documents, the agency seeks to build partnerships that leverage the assets, skills, and resources of the public, private, and nonprofit sectors to deliver sustainable development impact. Examples of such leverage contributions include donated cash, services, or supplies from implementing partners or third parties to specific awards managed by the Lab. Third parties contributing to Lab-managed programs have included foreign governments, international organizations, businesses and corporations, philanthropic foundations, non-governmental organizations, and higher education institutions, among others. One way USAID has pursued this goal is through Global Development Alliances (see sidebar).

The Lab Aligns Programs to Support Its Five Strategic Objectives; Funding and Staffing Have Decreased since Fiscal Year 2015

The Lab Aligns Programs with Its Strategic Objectives

Staff in the Lab’s five centers, offices, and Lab-Wide Priorities manage more than 25 programs and portfolios, which encompass projects and activities under a specific issue, aligned with the Lab’s five strategic objectives. The programs focus on development research (science objective), digital development (technology objective), innovation ventures (innovation objective), and private-sector engagement (partnerships objective). Table 2 shows examples of programs and portfolios aligned with each strategic objective. Examples of the Lab’s programs and activities include the following (see app. II for more information about these and other Lab programs): Staff in the Lab’s Center for Development Innovation manage the Grand Challenges for Development initiative, intended to foster innovations to address key global health and development problems. Since 2011, USAID and its partners have launched 10 Grand Challenges that are implemented by USAID bureaus, including the Lab.
The Lab is responsible for managing the Securing Water for Food Grand Challenge and the Scaling Off-Grid Energy Grand Challenge. Other USAID bureaus implement the other eight Grand Challenges (see app. III for a description of the Grand Challenges). Staff in the Lab’s Center for Development Research manage the Higher Education Solutions Network. The program is a partnership with seven universities that also work with partners in academia, the private sector, civil society, and governments worldwide. The universities established eight development labs that focus on efforts to solve a range of development problems. The Lab’s two offices support various aspects of the centers’ programs and portfolios, such as internally promoting center programs throughout USAID and conducting monitoring and evaluation activities.

Types of STIP Services Provided by the Global Development Lab

Digital development: Technologies and data-driven approaches to extend the reach of development programs.

Catalyzing innovation: Integration of design methodologies, development innovations, and programming solutions to solve development challenges differently.

Partnerships/private sector engagement: Relationships between USAID and one or many organizations, including private sector entities, in an effort to create development impact.

Scientific research and capacity building: Application of science and research to solve development problems.

In addition to managing programs, the centers provide a variety of STIP-focused services and support, including assistance with programming, to USAID field missions and headquarters bureaus as part of the Lab’s mission to accelerate development impact. According to Lab documentation, the Lab can provide services related to country and regional strategic planning; project design and implementation; activity design and implementation; and monitoring and evaluation.
The Lab’s STIP services fall into several categories—digital development, catalyzing innovation, partnerships and private sector engagement, and scientific research and capacity building—according to Lab documents (see sidebar). The centers, led by the Center for Agency Integration, deliver internal STIP services and mechanisms through toolkits, training, advisory services, and assessment and analysis of STIP activities or programming, according to Lab documentation. For example, at the request of missions or bureaus, the Digital Finance team in the Center for Digital Development can, among other things, review and provide technical input on awards related to digital finance. In addition, the Lab has provided advisory services to USAID operating units regarding innovative design or methods, such as co-creation, which can be used throughout the program cycle, including in procurement (e.g., broad agency announcements and annual program statements). According to Lab officials, some services are funded by the Lab at no cost to USAID operating units, while other services must be funded by the USAID operating units through funding mechanisms such as “buy-ins” or cooperative agreements. Lab data for fiscal years 2014 through 2017 show that the Lab provided services or support frequently in digital development activities, such as geospatial support to USAID field operations, and partnership services. For example, the Lab has provided technical services to missions around the world related to the GeoCenter (housed in the Center for Digital Development), which supports the application of advanced data and geographic analysis to international development challenges to improve the strategic planning, design, monitoring, and evaluation of USAID’s programs.
In addition, the Lab provided partnership services related to private-sector engagement, including technical assistance and consultative services to USAID missions for more efficiently engaging, building, and maintaining relationships with the private sector at local or regional levels. Officials we interviewed at USAID missions and headquarters bureaus described services or tools they had received from the Lab, such as technical advice and training related to establishing private-sector partnerships and leveraging funding. For example, some USAID headquarters officials told us they had taken Lab-led private-sector engagement training that addressed developing collaborations with external stakeholders, establishing risk-sharing agreements, and engaging investors and other financial sector actors. In addition, some mission officials stated that they were involved in Lab-supported programs such as the Partnerships for Enhanced Engagement in Research and the Partnering to Accelerate Entrepreneurship Initiative and had received Lab support related to geographic information system mapping. One mission had a Lab-funded embedded advisor who provided technical assistance to a country’s Ministry of Health. According to Lab officials, demand for the Lab’s services and support exceeds the Lab’s capacity and its resources.

Program Funding for the Lab Has Decreased since Fiscal Year 2015

Allocations of program funds from USAID to the Lab have decreased over the past few fiscal years, from $170.7 million in fiscal year 2015 to $77 million in fiscal year 2017. Similarly, the Lab’s obligations of program funds have also decreased since fiscal year 2015, according to Lab data. Obligations reached around $170 million in fiscal year 2015, the Lab’s first full year of operations. By fiscal year 2016, the Lab’s obligations had decreased to about $109 million—a reduction of over 35 percent.
Although the Lab is still obligating fiscal year 2017 funding, its obligations would not exceed $77 million if it obligated the full amount of program funding provided to the Lab. As table 3 shows, from fiscal year 2014 through fiscal year 2017, the Lab obligated over $435 million of its program funds for its centers and support services (see app. IV for an overview of funding from various appropriations accounts in fiscal years 2014-2017). According to Lab officials, the program funds cover Lab- managed programs and programming (including funding for awards comprised of many activities) and the centers’ services, STIP activities, and staffing (including contractors), among other things (see app. V for a discussion of Lab-managed activities and corresponding obligations for fiscal years 2014-2017). As table 3 shows, in fiscal years 2014 through 2017, the Lab’s Center for Development Innovation obligated the most funds overall. The center houses the Development Innovation Ventures, a portfolio of innovations with the goal of reducing global poverty. Borrowing from the private sector’s venture capital model, the portfolio seeks to identify and test innovative development solutions based on three principles: rigorous evidence, cost-effectiveness, and potential to scale up. Lab officials indicated that the Lab has reassessed and realigned programming priorities because of decreased funding. For example, the Lab temporarily suspended new applications for awards through the Development Innovation Ventures program from the end of July 2017 due to budget uncertainties in fiscal year 2018. However, Lab officials indicated that the Lab has recently secured funding for new applications for the program. Funding decreases have also caused the Lab to scale back or put some programs on hold, according to Lab officials. 
For example, the Lab scaled back its Partnering to Accelerate Entrepreneurship Initiative; its Lab-Wide Priorities; and its Monitoring, Evaluation, Research, and Learning Innovation programs. The Lab also put its partnerships with NextGen missions on hold indefinitely, according to Lab officials. In addition, the Lab reported that it has been able to provide only minimal support for multi-stakeholder partnerships, such as the Digital Impact Alliance and the Global Innovation Fund.

Lab Staffing Has Decreased since Fiscal Year 2015

The number of staff in the Lab has decreased since fiscal year 2015, the first year for which staffing numbers are available. Lab staff include both direct-hire staff, comprising civil service and foreign-service employees, and contractors with specialized skills who supplement the efforts of direct-hire staff. Contractors have made up at least 35 percent of staff each fiscal year since 2015. The total number of staff, including direct-hire staff and contractors, decreased by over 30 percent from fiscal years 2015 through 2018, dropping from 224 in fiscal year 2015 to 155 in fiscal year 2018 (see table 4). Lab information shows that the staff primarily comprise senior technical and professional experts and that about 80 percent of staff are on time-limited appointments, which can last 1 to 5 years, according to Lab officials. Further, according to Lab officials, due to the ever-changing nature of work in the Lab, staff may work on multiple projects and activities across several teams or may be assigned to work with one team or on a single project until it is completed. For example, Lab officials stated that when Lab-Wide Priorities are established, staff members are brought in to contribute to these efforts while also working on activities in the centers they support. In addition to declining staff numbers overall, since fiscal year 2015, the number of direct-hire staff employed by the Lab has decreased.
According to Lab officials, because of the technical focus of its programming, the Lab has not been able to staff all authorized positions with direct-hire employees who have the necessary expertise. Instead, the Lab has filled some of these positions with contractors or science fellows. The Lab also uses a variety of other hiring mechanisms, such as the Participating Agency Service Agreement with the Department of Agriculture and the American Association for the Advancement of Science fellows, to allow for flexibility and obtain the needed expertise to implement STIP and technical services throughout USAID. By fiscal year 2017, the Center for Digital Development had 40 staff members—the highest overall number, including the highest number of contractor staff members—among all the Lab’s centers. This center’s contractor staff primarily consisted of technical specialists assisting the GeoCenter (see app. VI for numbers of direct hires and contractors at each center in fiscal years 2015-2018). Lab officials stated that the decline in staff numbers—primarily direct-hire staff—over the years was due to a number of factors, including a government-wide hiring freeze, budget constraints, and a high attrition rate among the Lab’s staff beginning in 2017. According to several Lab officials, the high attrition rate was due to uncertainty about the USAID reorganization and its impact on the Lab, since a large percentage of the Lab’s staff is employed on a term-limited basis.
The Lab Documented Its Oversight of Awards with Non-USAID Contributions, but Some Data Are Outdated and Public Reporting Lacks Transparency

The Lab’s Documented Oversight of Awards with Non-USAID Contributions Followed USAID and Lab Guidance

Our review of Lab documents showed that, for all 24 Lab-managed awards we reviewed, the Lab consistently documented certain oversight requirements for non-USAID contributions (i.e., committed, rather than actual, contributions from the private sector, the public sector, and other U.S. government agencies). We reviewed 24 Lab-managed awards that included non-USAID contributions to determine whether the Lab documented its compliance with key award oversight requirements we identified in USAID and Lab guidance. For all 24 awards, the Lab documented its compliance with the following key requirements: report funding amounts committed from non-USAID sources; conduct valuations of in-kind contributions, as applicable; document that partners met cost-share or matching-fund requirements, where required; and maintain copies of the award agreement and any modifications. Additionally, for awards receiving in-kind contributions, the Lab maintained documentation in award files demonstrating that officials reviewed the valuation of in-kind services and supplies. Further, in the 10 awards we reviewed containing cost-share requirements, the Lab maintained documentation to show partners’ progress in meeting those requirements.

The Lab’s Data for Some Non-USAID Contributions Are Outdated

We found that the Lab’s management information system contained outdated data on non-USAID contributions, which the Lab reports as leverage. According to ADS 596, information should be communicated to relevant personnel at all levels within an organization and the information should be relevant, reliable, and timely.
Further, Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity’s objectives, including obtaining relevant data from reliable internal sources in a timely manner. In addition, the Lab’s “Internal Guide to Accounting for Leverage” (internal guide) states that data on non-USAID contributions will be collected from Lab teams semi-annually.

Our analysis of data in the Lab’s management information system found that one of two tables used to develop a number of internal and external reports contained outdated data for 10 of the 24 awards we reviewed and, in some cases, had not been updated for more than 2 years. Although this table showed a total of about $24.5 million in non-USAID contributions for these 10 awards, award documentation provided by the Lab showed the updated amount of non-USAID contributions to be about $12.1 million. For example, for an award aimed at providing hydro-powered irrigation pumps in Nepal, the table showed committed non-USAID contributions of about $262,000, while our review of award documentation found that the updated amount was about $410,000. For another award aimed at providing drip irrigation systems for small-plot farmers in India, the table showed partners had committed $362,000 in non-USAID contributions. However, in reviewing award documentation, we found that partners had ultimately committed about $61,600 to this award.

The Lab’s internal guide does not provide instructions for ensuring that the non-USAID contributions data in USAID’s management information system are timely. According to Lab officials, the outdated data we identified resulted from staff’s failure to manually enter updated data in both of the two tables used for external reporting. Lab officials stated that leverage data are entered manually because the Lab’s management information system does not have the capacity to automatically update the tables.
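One control that would surface stale figures of the kind described above is a routine reconciliation of the reporting table against the authoritative award files. The following is a minimal sketch of such a check; the helper name, award identifiers, and table shapes are hypothetical, with dollar amounts echoing the Nepal and India examples.

```python
def find_stale_entries(reporting_table, award_records):
    """Return award IDs whose leverage figure in the reporting table
    no longer matches the authoritative award-file record."""
    return sorted(
        award_id
        for award_id, current_amount in award_records.items()
        if reporting_table.get(award_id) != current_amount
    )

# Manually maintained reporting table vs. updated award-file records:
reporting_table = {"nepal-irrigation": 262_000, "india-drip": 362_000}
award_records = {"nepal-irrigation": 410_000, "india-drip": 61_600}

print(find_stale_entries(reporting_table, award_records))
# Both awards are flagged as out of date.
```

Run on the semi-annual collection cycle the internal guide already calls for, a check like this would flag exactly the mismatches GAO found, without requiring the system to update tables automatically.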
However, we found that the Lab’s internal guide does not describe the Lab’s current process for entering leverage data in the system or include instructions for ensuring that these data are regularly updated. Instead, the internal guide refers to a data collection practice that predates the Lab’s management information system and that, according to Lab officials, is no longer in use. To the extent that the Lab used outdated data when generating external reports and budget exercises, it risks reporting incorrect information about non-USAID contributions to Lab awards. According to Lab officials, the table with outdated data on non-USAID contributions that we identified in the Lab’s management information system is one of the data sources that the Lab uses for reports to the USAID Administrator’s Leadership Council and the Department of State and in USAID’s Annual Performance Plan and Report. According to Lab documentation, the Lab also uses these data to develop a number of annual budget formulation and justification exercises, including congressional communications. Providing instructions for updating all non-USAID contributions data in its management information system could help the Lab strengthen the timeliness and reliability of these data and of the external reports that include them.

The Lab’s Internal Guide Does Not Require Its Public Reporting of Leverage Data to Disclose Types of Non-USAID Contributions Represented

The Lab’s internal guide does not require its public reporting of data on non-USAID contributions, or leverage, to disclose the types of contributions represented. According to ADS 596, information should be communicated to relevant personnel at all levels within an organization and the information should be relevant, reliable, and timely. In addition, according to Standards for Internal Control in the Federal Government, management should externally communicate complete and accurate information to achieve an entity’s objectives.
The Lab defines leverage more broadly than the agency-wide definition found in USAID’s ADS 303. Specifically, these definitions differ in two ways. First, the Lab’s definition includes cost-share contributions, which the ADS definition excludes. Second, the ADS definition limits leverage to public-private partnership awards, while the Lab’s definition does not contain a similar limitation. Because the Lab’s definition of leverage differs from the definition in ADS, the Lab uses two separate indicators to track non-USAID contributions, according to Lab officials. For the leverage data it collects for USAID reporting on public-private partnerships, the Lab adheres to the ADS definition, counting as leverage all non-USAID resources, excluding cost sharing, that are expected to be applied to a program in USAID public-private partnership awards. For the leverage data it collects for its internal performance management and external reports, the Lab includes in its leverage calculations all cost-share contributions (from both private and public-sector partners); all other contributions (from the private sector, the public sector, and other U.S. government agencies); and gifts (from bilateral donors). According to Lab officials, the Lab’s definition of leverage differs from the ADS definition because the Lab partners with both the private and public sectors in its contracts and awards, and the Lab’s more expansive definition allows it to fully account for all non-USAID contributions. However, despite the difference in the Lab’s and USAID’s definitions, the Lab’s internal guide does not require that its public reporting of leverage data identify the types of non-USAID contributions represented in the data. As a result, the Lab’s public reporting—for example, on its webpage—provides the total amount leveraged but does not specify the types of contributions committed by non-USAID partners.
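The practical effect of the two definitions can be made concrete with a small sketch. The contribution records and field names below are invented for illustration; the point is only that an ADS 303-style calculation excludes cost share and counts only public-private partnership awards, while a Lab-style calculation counts every non-USAID contribution.

```python
# Hypothetical non-USAID contributions to Lab-managed awards.
contributions = [
    {"amount": 500_000, "type": "cost_share", "ppp_award": True},
    {"amount": 300_000, "type": "other",      "ppp_award": True},
    {"amount": 150_000, "type": "gift",       "ppp_award": False},
]

def lab_leverage(contribs):
    """Lab-style definition: every non-USAID contribution counts,
    including cost share and gifts, on any type of award."""
    return sum(c["amount"] for c in contribs)

def ads_leverage(contribs):
    """ADS 303-style definition: exclude cost share and count only
    public-private partnership awards."""
    return sum(
        c["amount"]
        for c in contribs
        if c["type"] != "cost_share" and c["ppp_award"]
    )

print(lab_leverage(contributions))  # 950000
print(ads_leverage(contributions))  # 300000
```

Publishing only the first total, without a breakdown by contribution type, is the transparency gap the report describes: a reader cannot reconcile the Lab’s figure with the narrower agency-wide definition.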
Given the difference between the Lab’s definition used in its public reporting and the ADS definition of leverage, USAID lacks assurance that it is reporting transparent data on leveraged non-USAID contributions. Moreover, because the Lab’s internal guide does not require the Lab’s public reporting of leverage to disclose the types of contributions, Congress and the public lack access to complete information about the extent and nature of the Lab’s partnerships. By specifying the types of non-USAID contributions included in its data on leveraging, the Lab could increase the transparency of its public reporting for this key metric.

The Lab Uses Various Tools to Assess Its Performance; Assessments Have Identified Both Positive Results and Some Weaknesses

The Lab uses various tools, such as its results framework, portfolio reviews, strategic learning reviews, and evaluations, established by USAID policy or Lab-specific practices to assess its performance. Because the Lab has existed only since 2014 and has had a strategy only since 2016, it has been able to collect a limited amount of data with which to assess its performance to show any trends in achieving results. However, the performance assessment tools that the Lab uses have identified both positive results and some weaknesses or challenges.

Results Framework Tool and Identified Results

The Lab’s strategy for fiscal years 2016 through 2020 includes a results framework comprising the Lab’s five strategic objectives, as shown previously in figure 1. For each strategic objective, the framework presents a corresponding development objective—that is, the most ambitious result that a Lab center aims to achieve through its projects and activities—as well as targets the Lab is focused on achieving by 2020. Progress toward the targets is tracked with annual and, in some cases, semi-annual performance indicators, according to Lab officials (see app. VIII for a list and descriptions of the Lab’s indicators).
According to Lab officials, the Lab considers the results framework a living document and adjusts indicators and targets as necessary based on changing circumstances. The Lab’s indicator data indicate that, overall, the Lab met or exceeded its targets slightly more often than it did not meet them (see table 5). As table 5 indicates, the Lab met or exceeded its targets for 20 of its 39 indicators in fiscal years 2016 and 2017. For example, for one indicator—total number of program or policy changes made by public sector, private sector, or other development actors that are influenced by Lab-funded research results or related scientific activities—the Lab reported that it exceeded its target for both fiscal years. The Lab’s targets for this indicator for fiscal years 2016 and 2017 were set at 42 and 48, respectively, with reported results of 83 and 84. For another indicator—total dollar value of private and public capital catalyzed for early-stage entrepreneurs as a result of USAID support—the Lab reported it had exceeded its fiscal year 2017 target of $575 million, with an actual result of around $686 million. In addition, the Lab improved its performance for seven indicators, according to its data. For instance, for agency integration indicators—such as the number of operating units that have integrated STIP at the strategic, programmatic, and organizational levels—the Lab went from not meeting its targets in fiscal year 2016 to exceeding its targets in fiscal year 2017. The Lab’s indicator data also show some areas in which the Lab has faced challenges or has not met its targets. As table 5 shows, the Lab did not meet its targets for 19 of the 39 indicators in fiscal years 2016 and 2017. For example, for one indicator—number of operating units that have integrated STIP at the strategic, programmatic, and organizational levels—the Lab did not meet its targets of 15 and 20, respectively, for fiscal years 2016 and 2017, with reported results of 12 and 19.
For another indicator—number of smart innovation methods adopted by USAID operating units—the Lab set a target of eight but reported an actual result of six. Moreover, from fiscal year 2016 to fiscal year 2017, the Lab’s performance declined for seven indicators. For instance, for innovation indicators—number of system actors engaged in innovation methods and number of smart innovation methods adopted by agency operating units—the Lab went from exceeding its targets in fiscal year 2016 to not meeting them in fiscal year 2017. Lab officials stated that the Lab’s performance goals were meant to be ambitious and that the Lab would adjust goals on the basis of resource and budget constraints.

Portfolio Review Tool and Identified Results

The Lab has implemented biannual portfolio reviews of projects and activities. According to Lab officials, the portfolio reviews assess progress toward strategic objectives, provide Lab staff an opportunity to share lessons learned, and foster collaboration across the centers. In fiscal years 2016 and 2017, the Lab conducted four portfolio reviews—two at midyear and two at the end of both years. Each portfolio review discussed the performance of each center, examined how well the center was meeting the targets for its performance indicators, and addressed topics such as key achievements and challenges and priority evaluation and research questions for the upcoming fiscal year. Lab officials stated that portfolio reviews have helped the Lab become more rigorous and better understand the reasons for implementing the various projects and activities. The Lab’s portfolio reviews for fiscal years 2016 and 2017 highlight, among other things, lessons learned and achievements made for particular projects and toward the Lab’s overall strategic objectives and targets. The reviews also note challenges faced Lab-wide as well as planned adjustments to address these challenges.
Examples of the portfolio reviews’ findings, by strategic objective, include the following for each of the five Lab centers:

Science. The review noted that lessons learned by the Center for Development Research included emphasis on managing relationships and the need to communicate with missions about the ways in which research can help them contribute to their objectives. The review also noted that the center’s challenges included striking the right balance between different elements of the science objective in the Lab strategy and developing mission-focused tools for integrating research.

Technology. The review noted that the Center for Digital Development achieved largely positive ratings for digital development training and for a substantial amount of technical assistance, trainings, and knowledge products. The review also noted that the center had faced some challenges, such as staffing constraints that limited staff’s ability to prioritize both internal and external engagements.

Innovation. The review noted that the Center for Development Innovation had several achievements, including positive feedback from innovators who received technical assistance from the center as well as agency partners who received program design services. The challenges noted included the center’s need for more engagement with key missions and for finding balance between advisory services and direct project implementation.

Partnerships. The review noted that the Center for Transformational Partnerships had identified lessons learned in areas such as the center’s ability to support missions by helping them to identify opportunities and determine when and where partnership makes sense. One challenge that the review identified was the possibility that the center’s limited resources might inhibit technical assistance to missions and bureaus. Planned adjustments included prioritizing advisory and liaison support to the regions that have lower capacity for private sector engagement.
Agency integration. The review noted that the Center for Agency Integration achieved several successes, including introducing the Lab and STIP to over 30 Foreign Service nationals (i.e., local, non-U.S. citizens employed by USAID), several of whom continued to champion STIP at their missions. The review also noted challenges, such as staffing and capacity gaps, that hampered training efforts as well as USAID staff being overwhelmed by the amount of information flowing from the Lab.

Strategic Learning Review Tool and Identified Results

In October 2017, the Lab implemented an evaluation, research, and learning plan that includes practices recommended for bureaus. According to Lab officials, the Lab’s plan is intended to help build evidence within and across the centers and ensure that resources are prioritized to support evaluation and research. As part of this plan, the Lab identified five key questions for all of the centers that evaluations, research, and learning efforts should attempt to help answer. According to Lab officials, the Lab began holding strategic learning reviews, beginning in spring 2018, to help it address theories of change—that is, descriptions of how and why a result is expected to be achieved through a particular project or activity. The Lab developed the reviews to complement its portfolio reviews, according to Lab officials. The Lab, led by the Office of Evaluation and Impact Assessment, completed its first cross-Lab strategic learning reviews in the spring of 2018, according to Lab officials. The reviews focused on three of five key questions in the Lab’s evaluation, research, and learning plan: addressing adaptive management; supporting innovators, entrepreneurs, and researchers; and sustaining results.
According to the Lab, the 2-hour sessions, in which Lab officials and other selected agency subject-matter experts participated, resulted in discussions about issues that the participants considered most important for the Lab to address or improve in the future. For example, participants identified actions that could be achieved currently, such as designating time for “pause and reflect” exercises; reducing USAID’s administrative burden for first-time Lab partners that lack the capacity to manage USAID requirements; and focusing on larger market-enabling environments rather than on a single value chain. According to Lab documents, the Lab plans to use data from the reviews to develop recommendations that will be reflected in an action memo and to track any actions the Lab takes to implement the recommendations. Lab officials stated that the Lab plans to hold three additional 2-hour strategic learning reviews in fall 2018.

Evaluation Tool and Identified Results

Evaluation is the systematic collection and analysis of information about the characteristics and outcomes of programs and projects that provides a basis for judgments to improve effectiveness and/or inform decisions about current and future programming.

The Lab assesses its performance through evaluations (see the definition above). According to Lab officials, the Lab has conducted both external evaluations and internal evaluations, and the majority of its performance evaluations are external. As of October 2018, the Lab had primarily completed performance evaluations, although Lab officials reported that the Lab was also conducting three impact evaluations and one developmental evaluation. In addition to conducting evaluations, the Lab conducts assessments—management tools used to gather information about context or operating environment or to review an activity or project.
As of October 2018, the Lab reported that it had completed 7 external performance evaluations of its programs or projects and had an additional 12 ongoing evaluations, both internal and external. The Lab’s completed performance evaluations cover a variety of programs, activities, and USAID services, such as the Securing Water for Food Grand Challenge project and the Lab’s technical assistance services. We reviewed the seven completed external performance evaluations and found that they identified a range of program strengths as well as challenges or weaknesses. For example:

Mid-Term Review of Securing Water for Food: A Grand Challenge for Development. The evaluation identified program strengths, such as a diversity of innovations in the portfolio. The evaluation also found that the program had potential weaknesses, including a lack of focus on innovations for locations with greater water scarcity.

Mid-Term Evaluation of the Partnerships for Enhanced Engagement in Research Program. The evaluation found, among other things, that partnerships between scientists in developing countries and in the United States have been of value for scientific output and strengthening professional relationships. In addition, the evaluation identified potential weaknesses in the program, including the need to facilitate broader dissemination of research findings by convening program grantees, the private sector, government officials, and civil society partners to network and share findings as well as policy and program challenges.

Mid-Term Evaluation of the Higher Education Solutions Network. The evaluation found, among other things, that development labs housed in seven higher education institutions have begun providing data to inform USAID operating units’ decision making, collaborating to develop and test new technologies and innovative approaches, and engaging in knowledge sharing and learning.
Additional findings included the need for Higher Education Solutions Network labs to streamline activities, adjust resource allocations, and increase synergies based on the insights gained through the first 5 years.

Global Broadband and Innovations Alliance Performance Evaluation. The evaluation found, among other things, successful outcomes of specific projects focused on sustainably increasing broadband internet connectivity in the developing world. The evaluation also found that USAID had been challenged by changing leadership in the agency, which resulted in shifting priorities. In addition, the evaluation found that limited marketing of the mechanism to missions and other bureaus and offices resulted in lower-than-expected initial buy-in from the missions.

STIP Integration Performance Evaluation: West Africa Regional and Uganda. The evaluation found, among other things, that mission staff want to build their capacity to use STIP but would prefer more demand-driven services from the Lab, rather than services that do not align with mission strategies.

In addition to completing formal evaluations, the Lab has completed over 15 assessments of its activities or projects since 2014 and also is conducting a number of ongoing assessments. The completed assessments reflect work in all five centers and cover areas such as digital finance services, co-creation, and STIP integration.

Conclusions

Since its establishment as a USAID bureau more than 4 years ago, the Lab has supported the agency’s efforts to address science, technology, innovation, and partnerships. Further, the Lab has funded and managed opportunities for innovators to propose new ideas, approaches, and technologies that tie into USAID’s overall development goals and programming. The Lab’s centers have pursued global partnerships with a wide range of non-USAID public and private sector stakeholders in an effort to augment their programming and further their efforts.
However, because non-USAID contributions data that the Lab collects are not always current, some of the leverage data the Lab reports internally and externally to help demonstrate its accomplishments risk being outdated. Moreover, because the Lab’s Internal Guide to Accounting for Leverage does not require its public reporting of leverage data to identify the types of contributions represented, the Lab’s public reporting lacks transparency. Ensuring that the Lab’s internal data on non-USAID contributions are updated and that its publicly reported information about leveraged resources from the public and private sector is transparent will enable the Lab and USAID to better demonstrate to Congress and American taxpayers that the agency is maximizing its use of development resources to pursue new and innovative approaches to development challenges.

Recommendations for Executive Action

We are making the following two recommendations to USAID:

The USAID Administrator should ensure that the Executive Director of the Lab assures that the Lab’s Internal Guide to Accounting for Leverage includes instructions to update all non-USAID contributions data in the Lab’s management information system at least annually. (Recommendation 1)

The USAID Administrator should ensure that the Executive Director of the Lab assures that the Lab’s Internal Guide to Accounting for Leverage requires that the Lab’s public reporting of leverage data discloses the types of non-USAID contributions represented. (Recommendation 2)

Agency Comments

We provided a draft of this report to USAID for review and comment. USAID provided written comments that are reprinted in appendix IX. In its letter, USAID concurred with, and indicated that it is already addressing, both recommendations. In addition, USAID provided technical comments on the draft, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, the USAID Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

Appendix I: Objectives, Scope, and Methodology

In this report, we examine (1) the Global Development Lab’s (the Lab) programs, funding, and staffing resources, (2) the extent to which the Lab has documented its oversight of awards with non-U.S. Agency for International Development (USAID) contributions and clearly reported these contributions, and (3) the tools that the Lab uses to assess its performance as well as results that such assessments have shown.

Programs, Funding, and Staffing

To examine the Lab’s programs, funding, and staffing resources, we reviewed and analyzed Lab program, funding, and staffing documents and data covering fiscal years 2014 to 2017. We reviewed the congressional notification in which USAID advised Congress of its intent to establish the Lab, program description documents, as well as the Lab’s current strategy document, which contains the Lab’s results framework and strategic objectives covering science, technology, innovation, partnerships (STIP), and agency integration. In addition, we reviewed documents that provided information on services and tools the Lab provides to operating units within USAID. We reviewed and analyzed Lab funding data, by appropriations accounts, which included allocations and obligations for Lab programs by centers and offices covering fiscal years 2014 to 2017. The Lab did not yet have fiscal year 2018 funding information available.
In addition, we reviewed and analyzed obligation data on Lab-managed activities for fiscal years 2014 to 2017. To report on staffing, we reviewed and analyzed Lab staffing data for fiscal years 2015 to 2018, which included data on the number of direct hire staff and contractors, hiring mechanisms used to bring staff on board, as well as information on the centers and offices the staff worked in. To assess the reliability of the staffing data for fiscal years 2015 to 2018 and the funding data for fiscal years 2014 to 2017, we compared and corroborated information provided by the Lab with staffing and funding information in the Congressional Budget Justifications for the fiscal years. On the basis of the checks we performed, we determined these data to be sufficiently reliable for the purposes of this report. We interviewed Lab officials representing every center—Center for Development Research, Center for Digital Development, Center for Development Innovation, Center for Transformational Partnerships, and Center for Agency Integration; each support office—Office of Engagement and Communication, and Office of Evaluation and Impact Assessment; and all Lab-Wide Priorities—Ebola, Digital Development for Feed the Future, and Beyond the Grid—to understand the Lab’s organizational structure, roles and responsibilities, programs, and services, among other things. We also spoke with officials in the Administrative Management Services and Program and Strategic Planning offices, which cover the Lab’s financial and human resources, as well as strategic planning and reporting.
To obtain insight into the Lab’s interaction and STIP integration within USAID, we also interviewed agency officials from five USAID bureaus in Washington, D.C.—Democracy, Conflict, and Humanitarian Assistance; Economic Growth, Education, and Environment; Food Security; Global Health; and Policy, Planning, and Learning—and from six USAID missions overseas—Albania, Cambodia, Guinea, Haiti, Uganda, and the Regional Development Mission for Asia. To determine the number of activities the Lab managed from fiscal years 2014 through 2017, and the amount it had obligated for these activities in this timeframe, we reviewed and analyzed data from USAID’s financial management system—Phoenix. Additionally, we met with Lab officials responsible for managing and reviewing the data in this system. To ensure that we accounted for only programmatic activities in our timeframe, we removed activities, in consultation with Lab officials, from the dataset that pertained to institutional support contracts and fellowships. We also met with officials from each of the Lab’s centers to discuss the activities that they manage. We determined that the data were sufficiently reliable to account for Lab-managed activities.

Oversight, Documentation, and Reporting of Non-USAID Contributions

To address oversight and documentation of awards with non-USAID contributions, we reviewed Lab and USAID policies and guidance for oversight of non-USAID contributions as of fiscal year 2017, including Lab guidance, and relevant chapters of USAID’s Automated Directives System (ADS), which contain the agency’s policy. We analyzed Lab-managed awards with committed funding from non-USAID partners from fiscal years 2014 through 2017 (a total of 154) from the Lab’s information management system, DevResults, which we determined was sufficient to allow us to select a sample of these awards for further review.
Our sample included 24 awards, which represented all Lab-managed awards containing non-USAID contributions issued on or after fiscal year 2014 and ending in or before fiscal year 2017. We selected these timeframes to ensure that the awards we reviewed did not predate the creation of the Lab (fiscal year 2014) and to ensure that activities and all award documentation on activities had been completed. To assess the reliability of these committed funding data, we reviewed documentation and interviewed USAID officials to identify and rectify any missing or erroneous data. Since we selected only awards in our given timeframe, the results cannot be generalized to all Lab-managed awards receiving non-USAID committed contributions. We determined that the data and information were sufficiently reliable to compare against award documentation. The awards we reviewed covered four of the Lab’s five objectives: science (1 award), technology (3 awards), innovation (19 awards), and partnerships (1 award). To determine the extent to which the Lab had documented certain oversight requirements for these awards, we reviewed award documentation contained in the 24 award files against key oversight requirements and best practices established by USAID and the Lab. These oversight requirements include: report committed funding amounts received from non-USAID sources; conduct valuations of in-kind contributions, as applicable; document that partners met cost-share or matching funds requirements, if required; and maintain copies of the award agreement and any modifications. To determine the extent to which the Lab’s information management system contained current data on non-USAID contributions, we reviewed committed funding data for the 24 selected awards in this system against documentation in the award files. We also reviewed the Lab’s guidance on accounting for non-USAID contributions in addition to meeting with Lab officials responsible for data input and oversight of such contributions.
However, we did not independently assess the accuracy of the committed contributions against actual contribution amounts because the Lab does not collect data on actual contributions received in all of its awards. To determine the extent to which the Lab’s guidance on accounting for non-USAID contributions differs from USAID agency guidance, we compared guidance documents provided by the Lab with agency guidance from USAID’s ADS 303. Among other guidance documents, we reviewed the Lab’s Global Development Lab Internal Guide to Accounting for Leverage, and the Lab’s “Indicator Reference Sheet.” We also interviewed Lab officials responsible for implementing the Lab’s guidance for accounting for non-USAID contributions, as well as officials from USAID’s Office of Policy, Planning, and Learning who are responsible for developing and updating ADS guidance on non-USAID contributions. We also reviewed the Lab’s public reporting of non-USAID contributions on USAID’s website.

Performance Assessment and Results

To report on the tools that the Lab uses to assess its performance, we reviewed and analyzed numerous Lab program and performance documents. These included the Lab’s strategic plan that covers fiscal years 2016 to 2020 and the Lab’s results framework that outlines the strategic objectives; Performance Management Plan; evaluation, research, and learning plan; Lab portfolio reviews; and Lab strategic learning reviews. To learn about the Lab’s performance management, program evaluation, and assessment process, we interviewed Lab officials from the Office of Evaluation and Impact Assessment and the Program and Strategic Planning office. We reviewed sections of USAID’s ADS 201 that pertain to strategic planning and implementation; project design and implementation; activity design and implementation; and monitoring, evaluation, and learning.
We also spoke with officials in the Bureau for Policy, Planning, and Learning regarding the performance management requirements for bureaus outlined in ADS 201. To report on the results of the Lab’s performance indicators, we reviewed indicator data from the Lab for fiscal years 2014 to 2017. Since the Lab’s strategy was created in 2016, we focused our analysis on indicator data for fiscal years 2016 and 2017 that represent the Lab’s objectives as laid out in the Lab’s Results Framework. The Lab provided this information from DevResults, to include targets and measurements for each indicator by fiscal year. The data that we received from the Lab contained over 250 total indicators, which included those at the objective level, intermediate level, and sub-intermediate results level. We identified and analyzed 39 indicators representing the objective and intermediate results levels (for the science, technology, innovation, partnerships, and agency integration objectives) and looked at the targets and actuals for these for fiscal years 2016 and 2017. We compared each target value with the actual value to determine whether the Lab met, exceeded, or did not meet its targets for each indicator. If the target and the actual were the same value, we designated this as “meets.” If the target value was less than the actual value, we designated this as “exceeds.” Finally, if the target value was more than the actual value, we designated this as “does not meet.” We also identified indicators (both at the objective and intermediate results levels) for which the Lab improved its performance from fiscal year 2016 to fiscal year 2017 as well as indicators for which the Lab’s performance declined from fiscal year 2016 to fiscal year 2017. To assess the reliability of the Lab’s performance database, we interviewed Lab officials and reviewed documentation, and we determined that the data were sufficiently reliable for the purposes of comparing the Lab’s targets to reported results.
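The target-versus-actual comparison described above amounts to a simple three-way classification. A minimal sketch follows; it is our illustration of the stated rule, and the two example indicators use target and actual figures cited earlier in this report, with shortened hypothetical labels.

```python
# Illustrative sketch of the rating logic described in the methodology:
# "meets" if actual equals target, "exceeds" if actual is greater,
# "does not meet" if actual is less.

def rate_indicator(target, actual):
    if actual == target:
        return "meets"
    return "exceeds" if actual > target else "does not meet"

# Examples echoing figures reported earlier (labels shortened)
examples = [
    ("program or policy changes, FY2017", 48, 84),
    ("operating units integrating STIP, FY2017", 20, 19),
]
for name, target, actual in examples:
    print(f"{name}: {rate_indicator(target, actual)}")
```

Applied across the 39 objective- and intermediate-level indicators, this rule yields the counts summarized in table 5 (20 met or exceeded, 19 not met).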
However, it was beyond the scope of this engagement to assess the reliability of each of the 39 indicators. To report the results of the Lab’s seven external evaluations, we reviewed the completed external evaluations that were conducted in 2016 and 2017. As applicable, we looked at the purpose of those evaluations, findings, lessons learned, and any challenges to the program or project that the evaluation covered. We did not assess whether the Lab met its evaluation requirements under ADS 201, as this issue was outside of the scope of our review. We did not independently assess the methodology that was used in the evaluations. To report the results of the Lab’s portfolio reviews, we reviewed four portfolio reviews—two at midyear and two at the end of the year—that the Lab conducted in fiscal years 2016 and 2017. The portfolio reviews included sections on the Lab’s five objectives. As the portfolio reviews used different approaches to collect information, we analyzed them, identified headings in the documents that pointed toward results, including findings, challenges, achievements, and lessons learned, and summarized this information. To report on the results of the strategic learning reviews, we reviewed the three strategic learning reviews—each a 2-page document—that the Lab had conducted in spring of 2018. We summarized each review and reported on its questions and one of its “now what” actions to provide an illustrative example. We conducted this performance audit from July 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Description of Key Global Development Lab Programs and Portfolios by Each Center

The Global Development Lab’s (the Lab) five centers, its offices, and Lab-Wide Priorities manage more than 20 key programs and portfolios. The following are descriptions of key programs or portfolios implemented or managed by the Lab’s five centers—Development Research, Digital Development, Development Innovation, Transformational Partnerships, and Agency Integration.

Center for Development Research

Higher Education Solutions Network (HESN): According to Lab documentation, HESN is a partnership with seven universities working with partners worldwide. Leveraging nearly equal investments from each higher education institution, the universities established eight development labs that collaborate with a network of 685 partner institutions in academia, the private sector, civil society, and government across 69 countries. HESN’s development labs work with the U.S. Agency for International Development (USAID) to address problems faced by developing countries.

Partnership for Enhanced Engagement in Research (PEER): According to Lab documentation, PEER supports competitively awarded grants for collaborative research projects led by developing country scientists and engineers who partner with American researchers. PEER-funded scientists conduct applied research that can inform public policy or new practices in development with a goal of creating and leading new innovations or generating evidence for how to scale innovations. PEER also builds research capacity by providing funds, tools, technical assistance, and research opportunities for local scientists and students. The program is implemented in partnership with the U.S. National Academy of Sciences.

Science and Research Fellowship Programs: According to Lab documentation, the Lab supports three fellowship programs that are characterized by a commitment to the use of science, technology, innovation, and partnerships.
The American Association for the Advancement of Science (AAAS) Science and Technology Policy Fellowship and the Jefferson Science Fellowship both bring scientists and technical experts to serve 1- to 2-year fellowships at the U.S. Agency for International Development, contributing their knowledge and analytical skills to development policy, research, and programming. Further, the Research and Innovation (RI) Fellowship program connects U.S. graduate student researchers, and their research or technical expertise, to pressing development challenges. Research Policy Support: According to Lab documentation, the Lab provides advice to the agency on implementing the USAID Scientific Research Policy. This may include areas such as peer review and open access to research products, including data and USAID staff publications. Center for Digital Development Digital Inclusion: According to Lab documentation, the Lab helps improve connectivity by expanding access to the internet in countries where USAID works to help ensure that the most marginalized citizens have the skills and resources to be active participants in the digital economy. The team supports missions to integrate internet solutions into existing programs to ensure health clinics, schools, and other critical facilities are connected and offer access to modern internet services. Development Informatics (portfolio): According to Lab documentation, the Lab seeks to make development more adaptive, efficient, and responsive to citizens and decision makers by helping transform the use of data and technology throughout development. The Lab supports mission investments in technology platforms that can collect and analyze data more efficiently to improve strategic planning and program implementation. The Lab also leads the public advocacy campaign for the Principles for Digital Development, a set of agency best practices for applying digital technology and data in development. 
GeoCenter: According to Lab documentation, the Lab applies geographic and other data analysis to improve the strategic planning, design, implementation, monitoring, and evaluation of USAID’s programs. The GeoCenter works directly with USAID bureaus and missions to integrate geographic analysis, futures analysis (including scenario planning), and data analytics to inform development decisions. The team also leads a geospatial community of 50 geographic information systems specialists in field-based missions and in Washington, D.C. Digital Finance (portfolio): According to Lab documentation, the Lab’s Digital Financial Services team is working with USAID missions and bureaus through multi-stakeholder alliances and direct technical assistance to help the world’s financially excluded and underserved populations obtain access to and use financial services that meet their needs. The Digital Finance team has worked with over 30 missions and agency operating units to improve operational and programmatic efficiency as a means to accelerating development objectives within USAID projects and programs. Center for Development Innovation Development Innovation Ventures (DIV): According to Lab documentation, DIV is the agency’s venture capital-inspired, tiered, evidence-based funding model that invests comparatively small amounts in relatively unproven concepts, and continues to support only those that prove to work. It applies three core criteria to its application review process—evidence of impact, cost-effectiveness, and potential to scale. DIV accepts applications at three funding stages: Proof of Concept ($25,000–$150,000), Testing ($150,000–$1.5 million), and Transitioning to Scale ($1.5 million–$15 million). Grand Challenges for Development: According to Lab documentation, grand challenges call on the global community to discover, test, and accelerate innovative solutions around specific global challenges. 
The Lab is also leading efforts to apply innovation methods such as funding for challenges and prizes to accelerate innovation or incentivize action toward specific outcomes, such as the development of more efficient, lower-cost refrigeration solutions in the recently launched Off-Grid Refrigeration Competition. The Global Innovation Exchange: According to Lab documentation, this effort is an online platform to convene and connect innovators, funders, and experts working on development innovations around the world. The exchange is co-funded by USAID, the Australian Department of Foreign Affairs and Trade, the Korea International Cooperation Agency, and the Bill and Melinda Gates Foundation. Innovative Design (portfolio): According to Lab documentation, innovative design tools and approaches can help make a process more open and collaborative, incorporate human-centered design, or find a more innovative approach to solving a development problem. The Lab works to reframe development challenges, reach new audiences, and spur new ways of solving problems. It seeks to equip USAID teams with skills to design innovative programs using tools like design thinking and co-creation. It also builds diverse networks around critical systems challenges and facilitates a dialogue on the practice of innovation and design across USAID and the industry. Center for Transformational Partnerships Global Development Alliances (GDAs): According to Lab documentation, GDAs are partnerships between USAID and the private sector that use market-based solutions to advance broader development objectives. These partnerships combine the assets and experiences of the private sector to leverage capital, investments, creativity, and access to markets to work to solve the complex problems facing governments, businesses, and communities. 
GDAs are co-designed, co-funded, and co-managed by all partners involved so that the risks, responsibilities, and rewards of partnership are shared. Partnering to Accelerate Entrepreneurship (PACE): According to Lab documentation, the Lab’s PACE initiative catalyzes private-sector investment into early-stage enterprises and helps entrepreneurs grow their businesses. Diaspora Engagement (portfolio): According to Lab documentation, diaspora engagement is a core focus area for the Lab, which works with non-traditional partners in diaspora communities and organizations in under-addressed technical areas to test and incubate innovative partnership models. Center for Agency Integration Science, Technology, Innovation, and Partnerships (STIP) Agency Integration (portfolio): According to Lab documentation, the Lab supports the application of STIP across the agency by providing technical assistance, training, and catalytic investments in mission-driven STIP programs. In fiscal year 2016, the Lab worked closely with eight missions to integrate STIP tools and approaches to accelerate their development objectives. For example, the Lab is supporting ongoing efforts with the Uganda mission and a range of local partners, including the government of Uganda, to promote and source local, sustainable off-grid power solutions to impact a majority of underserved citizens. Digital Development for Feed the Future: According to Lab documentation, the Lab is collaborating with USAID’s Bureau for Food Security on integrating digital technologies into Feed the Future activities to accelerate reductions in global hunger, malnutrition, and poverty. An example includes facilitating greater precision agriculture through richer data collection, analysis, and packaging. 
Operational Innovation: According to Lab documentation, the Operations Innovations Team collaborates with partners to test and demonstrate viable disruptions that improve the efficiency and effectiveness of the agency’s internal business processes, practices, and procedures. Appendix III: Description of 10 Grand Challenges for Development Since 2011, the U.S. Agency for International Development (USAID) and its partners have launched 10 Grand Challenges for Development. Grand Challenges for Development mobilize governments, companies, and foundations around important issues. According to USAID, through these programs, USAID and public and private partners bring in new voices to solve development problems through sourcing new solutions, testing new ideas, and scaling (expanding) what works. Table 6 includes a description of each of the Grand Challenges, identifies the founding partners, and lists the primary bureau within USAID responsible for the programs. According to Global Development Lab (the Lab) officials, the Lab manages the Securing Water for Food and Scaling Off-Grid Energy Grand Challenges. Appendix IV: Global Development Lab Program Fund Allocation and Obligation Totals by Account, Fiscal Years 2014-2017 The Global Development Lab’s (the Lab) funding comes from different appropriations accounts. While the majority of the funding for fiscal years 2014 to 2017 is from the Development Assistance account, the Lab has also received lesser amounts of funding from four other accounts (see table 7). Appendix V: Global Development Lab Managed Activities in Fiscal Years 2014 through 2017 In fiscal years 2014 through 2017, the Global Development Lab (the Lab) managed a total of 339 activities addressing science, technology, innovation, and partnerships implemented by partners and obligated about $371 million for these awards. 
As figure 3 shows, the number of activities the Lab managed increased each year during this period, from 149 in fiscal year 2014 to 226 in fiscal year 2017. Obligated funding for all activities also increased annually until fiscal year 2017, when it declined by 27 percent. The Global Development Lab obligated funds to other activities it managed during this period that are not reflected in the data presented. These include obligations for institutional support contracts and staff fellowships. In fiscal years 2014 through 2017, four of the Lab’s centers managed a variety of activities addressing the Lab’s science, technology, innovation, and partnerships objectives. The Center for Development Research managed 28 activities addressing the Lab’s science objective. Obligations for these activities totaled about $120.4 million. The majority of this funding went to two programs, the Higher Education Solutions Network (about $81.2 million) and the Partnership for Enhanced Engagement in Research (about $27.7 million). The Center for Digital Development managed 17 activities addressing the Lab’s technology objective, ranging from providing geospatial satellite imagery to increasing the use of mobile money and e-payments in developing countries. Obligations for these activities totaled $64.5 million, with the majority of this funding going to Digital Finance activities. The Center for Development Innovation managed 205 activities addressing the Lab’s innovation objective. Obligations for these activities totaled about $115.4 million. This funding went to three programs: the Development Innovation Ventures program (about $57 million), the Innovation Acceleration program (about $19.3 million), and the Innovation Design program (about $39.2 million). The Lab’s Innovation Acceleration and Design program houses the Securing Water for Food Grand Challenge. The Center for Transformational Partnerships managed 37 activities addressing the Lab’s partnerships objective. 
Obligations for these activities totaled $39.8 million. For example, the Lab obligated about $13.9 million for the Partnering to Accelerate Entrepreneurship program, which aims to bring private-sector investment into businesses at early stages of development, among other things. In addition, other U.S. Agency for International Development (USAID) missions and bureaus have provided funding to Lab-managed projects through buy-ins. From fiscal years 2014 to 2017, USAID missions and bureaus provided funding to 55 Lab-managed projects, totaling $53 million. According to Lab officials, missions and bureaus can buy into projects in the development stage and can also buy into existing projects. For example, according to officials at USAID’s mission in Haiti, the Lab developed and funded a Higher Education Solutions Network project in Haiti, which provided the Haitian Ministry of Planning with capacity- building training to improve the collection of development and funding data for all donors in the country. Because the USAID mission saw the value of this project, it bought into the project, using its own funding, to allow the project to continue for an additional 2 years. Appendix VI: Global Development Lab Centers’ Direct Hires and Contractors, Fiscal Years 2015-2018 The Global Development Lab (the Lab) has numerous contractors who provide technical expertise in the centers and fill gaps when direct-hire staff are not available, according to Lab officials. In fiscal years 2016 to 2018, the Center for Digital Development had the most contractors of all the centers (see table 8). The contractors in this center are technical specialists mainly in the Lab’s GeoCenter, which uses geographic information systems to collect data to help aid development decisions in countries around the world. In fiscal year 2018, there were more contractors than direct-hire staff in the Center for Digital Development. Appendix VII: U.S. 
Agency for International Development Missions’ and Bureaus’ Views on the Global Development Lab Officials in the five U.S. Agency for International Development (USAID) bureaus and six missions we spoke with provided positive feedback on their interactions with the Global Development Lab (the Lab) but also identified some challenges. USAID officials identified numerous positive aspects or benefits of working with the Lab, such as the following: Lab staff brings diverse expertise and outside perspectives to the agency and provides technical assistance to projects that would not have been implemented otherwise. For example, some USAID officials mentioned that the Lab staff has insight into innovative approaches—whether procurement-related or project design and monitoring—and that the Lab has the ability to bring in contractors with specific technical expertise that the traditional development arena lacks. Lab staff is responsive and often willing to help with technical issues. Some USAID staff mentioned that Lab staff provide expertise and answer questions on an informal basis, sometimes covering areas where they are not the assigned point of contact with a particular bureau or mission. The Lab coordinates cross-cutting projects across the agency, such as the Grand Challenges for Development. Some bureau officials stated that Lab officials have been able to share their perspectives at training and other activities which has allowed them to be aware of what others across USAID are doing relevant to activities related to science, technology, innovation, and partnerships (STIP). The Lab funds projects and activities that missions and USAID headquarters operating units cannot afford. Some USAID officials mentioned that the Lab has sent staff out to provide STIP training, with the Lab covering the costs. However, some officials also mentioned that they have seen that recent budget cuts have had an impact on the Lab’s funding for more recent activities. 
The Lab holds trainings on topics such as procurement processes and private sector engagement that have helped missions and bureaus adopt new approaches to work and development partnerships. USAID officials also noted problematic aspects or challenges in working with the Lab, such as: Some Lab services can be cost prohibitive. For example, some mission officials mentioned that Lab resources are centralized in headquarters and therefore the cost to missions might be high and not affordable. Staff turnover at the Lab is frequent, making it difficult for bureau or mission officials to maintain relationships with the Lab. For example, some officials stated there has not been consistent contact with the Lab due to Lab staff frequently moving around or leaving. This has included changes in contacts for agreement officer representatives responsible for awards impacting the mission. The centers’ services and the ways in which bureaus or missions could work most effectively with the Lab are not always clear. For example, some mission and bureau officials mentioned that Lab staff does not always understand a country’s context when suggesting or deploying potential programs or activities related to STIP. This includes working to integrate STIP activities or innovations into the Country Development Cooperation Strategy when these might not be feasible for a country context or responsive to the needs of the mission. USAID officials noted that when they have provided feedback to the Lab, the Lab has generally been responsive. In addition, bureau officials mentioned that the Lab’s communications have improved. Appendix VIII: List and Description of Global Development Lab’s Performance Indicators, Fiscal Years 2016-2017 The Global Development Lab (the Lab) established its performance indicators when it created its strategy in fiscal year 2016 to cover fiscal years 2016-2020. 
The Lab’s results framework, which is reflected in the strategy, includes the Lab’s objective statements and intermediate results statement from which the Lab’s performance indicators flow. See table 9 for a description of indicators for the Lab’s five strategic objectives for fiscal years 2016 to 2017. Appendix IX: Comments from the U.S. Agency for International Development Appendix X: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Leslie Holen (Assistant Director), Andrea Riba Miller (Analyst in Charge), Nick Jepson, and Kelly Friedman made key contributions to this report. Also contributing were Martin De Alteriis, Jeff Isaacs, Chris Keblitis, Reid Lowe, Aldo Salerno, and Nicole Willems.
Why GAO Did This Study The Lab was created as a USAID bureau in April 2014. The Lab was intended to institutionalize and improve USAID's ability to harness and leverage science, technology, innovation, and partnerships in addressing development issues and goals worldwide. The Lab supports projects and activities and announces, issues, and manages awards—or funding opportunities—for innovators to propose new ideas, approaches, and technologies. The Lab also incorporates external (i.e., non-USAID) contributions into its programming. Senate Report 114-290 included a provision for GAO to review the Lab. GAO's report examines, among other things, (1) the Lab's programs, funding, and staffing resources and (2) the extent to which the Lab has documented its oversight of awards with non-USAID contributions and clearly reported these contributions. GAO reviewed and analyzed agency documents and interviewed agency officials in Washington, D.C., and from six missions. GAO also analyzed selected Lab documentation for fiscal years 2014 through 2017. What GAO Found The U.S. Agency for International Development's (USAID) Global Development Lab (the Lab) has programs and activities for each of its five strategic objectives: science, technology, innovation, and partnerships (STIP) and agency integration of STIP. The Lab comprises five centers and two support offices (see figure). The centers house more than 25 Lab programs focused on issues such as development research, digital development, innovation ventures, and private sector engagement. The Lab's funding for its programs has generally been decreasing, as have its staffing numbers, since fiscal year 2015. USAID allocations of program funds to the Lab decreased from $170.7 million in fiscal year 2015 to $77 million in fiscal year 2017. 
Although the Lab has documented its oversight of awards that include non-USAID contributions, some data it collects for these contributions are outdated and its public reporting of such data lacks transparency. For awards GAO reviewed, the Lab consistently documented its compliance with key award oversight requirements. However, its Internal Guide to Accounting for Leverage (internal guide) does not include instructions for ensuring the data for these contributions are current. As a result, GAO found the Lab's management information system contained outdated data for non-USAID contributions in 10 of 24 awards GAO reviewed. The Lab publicly reports a broader range of non-USAID contributions than the types described in USAID policy. However, the Lab's internal guide does not require the Lab to disclose the types of contributions represented in its public reporting. As a result, the Lab's public reporting of such contributions lacks transparency. USAID policy and standards for internal control in the federal government require the use and communication of timely and reliable information. Revising the Lab's internal guide to include instructions for updating data on non-USAID contributions and requiring the Lab's public reporting to disclose the types of contributions represented would help the Lab ensure accuracy and transparency in the information it reports to Congress and the public. What GAO Recommends GAO recommends that USAID ensure that the Lab revises its Internal Guide to Accounting for Leverage to (1) include instructions for updating data on non-USAID contributions for awards and (2) require its public reporting of non-USAID contributions to disclose the types of contributions represented. USAID concurred with both recommendations.
Background The Services Acquisition Reform Act of 2003 required the Administrator for Federal Procurement Policy to establish an acquisition advisory panel (referred to as the Panel) to review federal acquisition laws, regulations, and policies; and identify opportunities to enhance how agencies award and administer contracts for the acquisition of goods and services. The Administrator for Federal Procurement Policy appointed the Panel members in February 2005, and the Panel issued its final report in 2007. Our Work in Federal Acquisitions We have a long history of reporting on the key issue areas that the Panel addressed in 2007. In 2007, we reported that the Panel’s findings were largely consistent with our prior work. For example, the Panel found that defining requirements is key to achieving the benefits of competition. Similarly, we have issued numerous reports that address the importance of robust requirements definition. Panel members also recognized a significant mismatch between the demands placed on the acquisition workforce and the personnel and skills available to meet those demands. In 2006, we testified that DOD’s acquisition workforce, the largest component of the government’s acquisition workforce, remained relatively unchanged while the amount and complexity of contract activity had increased. Since then, we have issued many reports and testimonies on topics including requirements development at DOD, government-wide competition rates, small business, and the acquisition workforce, among others. We also track a number of key acquisition issues—such as DOD contract management and weapons systems acquisitions—through our high-risk program. Our high-risk program identifies government operations with greater vulnerabilities to fraud, waste, abuse, and mismanagement. 
Section 809 Panel Twelve years after the Services Acquisition Reform Act of 2003 required the Administrator for Federal Procurement Policy to establish the Panel, Congress required the establishment of another advisory panel by the Secretary of Defense in section 809 of the National Defense Authorization Act (NDAA) for Fiscal Year 2016 (referred to as the Section 809 Panel), and tasked it with reviewing applicable defense acquisition regulations and finding ways to streamline and improve the defense acquisition process, among other things. The Section 809 Panel is reporting on a number of topics related to areas covered by the 2007 Acquisition Advisory Panel report, including competition, the acquisition workforce, and small business participation. The Section 809 Panel issued an interim report in May 2017. Volumes I and II of its final report were issued in January 2018 and June 2018, respectively. Its final volume is expected in January 2019. Key Issue Area 1: Requirements Definition Issue Area Context Acquisition requirements describe the government’s needs when agencies procure products (such as major weapon systems) and services (such as engineering support) from contractors. Federal statute, policy, and best practices emphasize the need for valid, clear, and achievable requirements early in the acquisition process. An example of a requirement for a major weapon system could include the range that a missile must be able to travel, while a requirement for a service acquisition could include an engineer’s experience and education. In 2007, the Panel found that defining requirements is key to achieving the benefits of competition because procurements with clear requirements are far more likely to produce competitive, fixed-price offers that meet customer needs. 
The Panel also found that the government invested in requirements definition less than the private sector, and that better requirements definition would help facilitate implementation of performance- based acquisition (PBA). PBA is a preferred acquisition approach that focuses on contractors’ deliverables rather than how they perform the work. We have found that federal agencies continue to face challenges involving acquisition requirements definition. Congress passed a defense acquisition reform law with requirements- related provisions in 2009, but our work shows that DOD often begins programs with unrealistic requirements. Agencies have not consistently complied with OMB’s requirements relating to key provisions from an information technology (IT) acquisition reform law. Numerous efforts have been made to improve and encourage commercial item procurements in an attempt to take advantage of market innovations and reduce acquisition costs. DOD and GSA have taken steps to improve how personnel define requirements for service acquisitions, and to focus more on contractors’ deliverables than on how the contractors perform the work, but officials told us that some acquisition officials are reluctant to cede control of the acquisition to contractors. We elaborate on these points below. 2009 Defense Acquisition Reform Law Included Provisions Related to Requirements Definition, but DOD Still Faces Challenges The 2009 Weapon Systems Acquisition Reform Act (WSARA) included provisions related to requirements definition for major defense acquisition programs. In December 2012, we found that WSARA was helping program offices identify and mitigate requirements-related risks earlier in the acquisition process based on our analysis of 11 weapon acquisition programs. Section 809 Panel In its June 2018 report, the Section 809 Panel suggested that the Department of Defense better align its acquisition, requirements, and budget processes. 
It also suggested that the requirements system focus on capabilities needed to achieve strategic objectives instead of predefined systems. However, we have also observed and reported that DOD has struggled to adequately define requirements for its largest acquisition programs. For example, in 2014, we found that cost and schedule growth in major acquisition programs can, in part, be traced to a culture in which the military services begin programs with unrealistic requirements. This cost and schedule growth decreases DOD’s buying power, reducing the aggregate military capability the department can deliver over time. In 2017, we found that the Army’s requirements development workforce had decreased by 22 percent since 2008, with some requirements development centers reporting more significant reductions. We recommended that the Secretary of the Army conduct a comprehensive assessment to better understand the resources necessary for the requirements development process and determine the extent to which the shortfalls can be addressed given other funding priorities. While the Army agreed with the recommendation, it remains unaddressed. WSARA also required that DOD use competitive prototyping, which we generally define as two or more competing vendors producing prototypes for weapon systems before a design is selected for further development, in major defense acquisition programs as applicable. We have found that prototyping has benefited acquisition programs by, among other things, helping programs understand their requirements, and we have found that competitive prototyping has generated additional benefits, such as improving the quality of systems offered. Even though Congress repealed WSARA’s competitive prototyping requirement in 2015, Congress simultaneously codified a preference for prototyping—including competitive prototyping—as a risk mitigation technique, which has been implemented in DOD policy. 
Further, the fiscal year 2017 and 2018 NDAAs included several new prototyping-related provisions. As of 2018, DOD Weapons System Acquisitions remains on our High Risk list. Among other things, we reported that DOD needs to build on existing reforms intended to improve requirements definition and, specifically, examine best practices to better integrate critical requirements. Agencies Have Not Consistently Complied with a Key IT Acquisition Reform Law The 2014 Federal Information Technology Acquisition Reform Act (commonly referred to as FITARA) expanded the role of certain agency Chief Information Officers (CIOs) to improve acquisitions of information technology (IT) products and services. Several aspects of FITARA target requirements definition and OMB has expanded upon and reinforced these aspects in a number of ways through government-wide guidance. However, as of 2018, Improving the Management of IT Acquisitions and Operations remains on our High Risk List because agencies have not completely implemented certain FITARA requirements as implemented by OMB or addressed a number of our recommendations, including several that target requirements definition. CIO Responsibilities FITARA includes a provision generally requiring that agency heads ensure CIOs review and approve all IT contracts prior to award, unless that contract is associated with a non-major investment. Additionally, OMB’s implementing guidance states that CIOs—or other authorized officials, as appropriate—should review and approve IT acquisition plans or strategies as applicable. These reviews can provide CIOs greater insight into IT acquisition requirements. However, in January 2018, we found that officials at 14 of 22 selected agencies did not identify, or help identify, IT acquisitions for CIO review as required by OMB’s guidance. The same number of agencies did not fully satisfy OMB’s requirement that the CIO or other appropriate parties review and approve IT acquisition plans or strategies. 
As a result, agencies increased the risk that they were awarding IT contracts that were duplicative, wasteful, or poorly conceived. Incremental Development FITARA requires that CIOs certify that their agencies are adequately implementing incremental IT development, as defined in capital planning guidance issued by OMB. We previously reported that OMB has emphasized the need to deliver investments in smaller parts, or increments, to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. We have previously reported that a key step in implementing incremental development methods can include defining requirements appropriately, such as by involving end users and stakeholders. We have found that agencies have struggled to adhere to FITARA’s incremental development requirements, as implemented in OMB’s capital planning guidance. In 2017, we found less than 65 percent of major IT software development investments were reported as being certified by the agency CIO for implementing adequate incremental development. Software Licenses FITARA also includes provisions addressing government software license management, calling for the identification and development of a strategic sourcing initiative to enhance government-wide acquisition, shared use, and dissemination of software. In May 2014, we found that 22 of 24 major agencies did not have comprehensive license policies and only 2 had comprehensive license inventories. Without comprehensive policies and inventories, agencies are poorly positioned to understand their requirements for software licenses. We recommended that OMB issue a directive to help guide agencies in managing licenses and that the 24 agencies improve their policies and practices for managing licenses. As of July 2018, OMB had addressed our recommendation, but many of the recommendations to other agencies remained unaddressed. 
Congress and DOD Have Worked to Encourage Commercial Item Procurements

Purchasing commercial items helps an agency take advantage of market innovations, increase its supplier base, and reduce acquisition costs. The commercial item definition includes items customarily used by and sold (or offered) to the general public, including products with minor modifications. Federal agencies can purchase commercial items to meet many requirements, from the relatively simple, such as office furnishings and housekeeping services, to the more complex, such as maintenance services and space vehicles. Further, contracting officers can use streamlined solicitation procedures—which can reduce the time needed to solicit offers from vendors—if they determine that the product or service being procured is commercial. We reported that federal agencies used commercial item procedures for over $100 billion of goods and services in 2015.

The issue of commercial item procurements has been a concern of Congress for a number of years. In the fiscal year 2018 NDAA, and four of its predecessor acts, Congress specified how DOD is to define and purchase commercial items. For example, a fiscal year 2017 provision set a preference for certain commercial services, such as facilities-related or knowledge-based services, by prohibiting defense agencies from entering into non-commercial contracts above $10 million to meet those requirements without a written determination that no commercial services can meet the agency’s needs.

Section 809 Panel

In its January 2018 report, the Section 809 Panel proposed a new approach for using commercial items to meet requirements. The panel proposed that Congress and DOD tailor the department’s acquisition approach based on the level of customization a given product entails.
For readily available commercial items, or those requiring minor customization, the panel stated that DOD should be willing and able to reduce management and oversight to capitalize on the nondefense marketplace. In its June 2018 report, the Section 809 Panel suggested additional statutory and regulatory changes to simplify commercial item procurements.

In January 2018, DOD revised its regulations and corresponding procedures, guidance, and information related to the procurement of commercial items to reflect recent legislative changes. DOD also updated its acquisition regulations to provide guidance to contracting officers for making price reasonableness determinations, promoting consistency in making commercial item determinations, and expanding opportunities for nontraditional defense contractors to do business with DOD. The department also updated its Guidebook for Acquiring Commercial Items, which includes information on how to define, determine, and price commercial items, to reflect the regulatory changes.

DOD has also created six commercial item Centers of Excellence to provide analytical support and assist in both the timeliness and consistency of commercial item determinations. The centers are staffed with engineers and price/cost analysts to help contracting officers with market analysis, commercial item reviews and determinations, and commercial pricing analysis. The centers also provide training and assistance to the DOD acquisition community on various techniques and tools used to evaluate commercial items and commercial item pricing.

Finally, the fiscal year 2018 NDAA directed GSA to establish a program to procure commercial items through commercial e-commerce portals, which can generally be described as online marketplaces. OMB was charged with carrying out the program’s implementation phases.
GSA issued the initial implementation plan in March 2018, and the next phase of implementation will entail market analysis and consultation with industry and agencies.

Efforts to Improve Service Acquisition Requirements Have Not Fully Overcome Cultural Resistance

In 2017, we found that federal agencies procured over $272 billion in services in fiscal year 2015, which was approximately 60 percent of total contract obligations for that year. We have also previously reported that services contracts are sometimes awarded for professional and management support services that can put contractors in a position to inappropriately influence government decision making if proper oversight is not provided.

As we previously reported, in 2009, DOD’s Defense Acquisition University introduced a Services Acquisition Workshop to provide training and guidance on developing service acquisition requirements. The workshop brings together the key personnel responsible for an acquisition to discuss the requirements and how they will know if a contractor has met those requirements. During the workshop, the teams develop the language that will articulate the requirements, and by the end of the process, the goal is to have draft acquisition documents. We reported in 2013 that DOD mandated the use of the workshop for service acquisitions valued at $1 billion and above, and encouraged its use for acquisitions valued at $100 million or more.

Performance-based acquisition (PBA) is, as the Panel reported in 2007, a preferred commercial technique. PBA focuses on contractors’ deliverables rather than how they perform the work. Rather than using traditional statements of work that define requirements in great detail, PBA uses performance work statements (PWS) that define requirements more generally based on desired outcomes. We have reported that defining requirements this way has been a struggle for DOD for several years.
Additionally, we have found that implementing PBA can be particularly challenging when acquiring certain services. Services differ from products in several respects and can pose challenges when attempting to define requirements and establish measurable, performance-based outcomes. In 2012, we found that the Defense Acquisition University developed an Acquisition Requirements Roadmap Tool, which is an online resource designed to help personnel write requirements for PBA and create pre-award documents, including requirements documents, using a standardized template. Additionally, in 2018, GSA updated its Steps to Performance-Based Acquisition guidance for managing PBAs and made sample PBA planning documents available to contracting officers across the federal government. The updated PBA guidance is a start-to-finish set of instructions for planning and executing a PBA, and the planning documents include examples of requirements documents, such as performance work statements, which set forth the contractor’s expected outcomes for the acquisition.

During the course of this review, we found that some cultural resistance to PBA has endured. Under PBA, which is structured around the results to be achieved as opposed to the manner in which the work is to be performed, a PWS may be prepared by a contractor in response to an agency’s statement of objectives. A PWS is a type of statement of work that describes the required results in clear, specific, and objective terms with measurable outcomes. While some DOD and GSA officials reported that PBA has become an increasingly standard approach, other DOD officials told us that some acquisition officials are still reluctant to give contractors control over how agencies’ requirements will be met under PBA because they fear that they may not get what they need.
The officials we spoke with asserted it is difficult to overcome decades of conducting federal acquisition using government-drafted statements of work that outline—often in precise detail—how an agency expects a contractor to perform work.

Key Issue Area 2: Competition and Pricing

Issue Area Context

Federal regulations generally require that agencies determine that the prices proposed by contractors are fair and reasonable before purchasing goods or services. Agencies normally establish a fair and reasonable price through competitions where multiple offerors submit proposals. Competition is considered the cornerstone of a sound acquisition process and a critical tool for the government. It helps agencies achieve the best prices and return on investment for taxpayers. Federal statutes and regulations permit agencies to award contracts noncompetitively in certain circumstances. Under those circumstances, agencies may obtain other types of data—for example via market research—to determine whether prices proposed by contractors are fair and reasonable.

In 2007, the Panel found that the private sector relied heavily on competition and rigorous market research to effectively and efficiently buy products and services. The Panel also found the federal government could improve competition and pricing through greater adoption of commercial practices. Further, the Panel cited our prior findings about interagency contracting—a contracting approach in which an agency either places an order directly against another agency’s indefinite-delivery contract, or uses another agency’s contracting operation to obtain goods or services. This approach can reduce the prices the government pays for goods and services, but we had found that interagency contracts did not always adhere to federal procurement laws, regulations, and sound contracting practices.

We have found that federal agencies’ efforts to increase competition and improve pricing have had limited success.
- OFPP and DOD have taken steps to increase competition rates, but the government-wide competition rate has remained steady, while DOD’s rate has declined over the past 5 years.
- Agencies facing acquisition planning obstacles are sometimes using bridge contracts, which we have generally defined as extensions to existing contracts or new short-term, sole-source contracts to avoid a lapse in service caused by a delay in awarding a follow-on contract. In some instances, bridge contract awards delay opportunities for competition and can place the government at risk of paying higher prices for multiple years.
- In response to our recommendations, several agencies have taken steps to improve how they conduct market research and determine price reasonableness.
- GSA has developed new pricing tools, but is not collecting pricing data as it had planned. GSA officials told us pricing data helps contracting officers conduct market research and negotiate prices.
- OFPP has promoted consolidated purchasing approaches to improve pricing, but low adoption rates diminish potential savings.
- The federal government has made significant progress addressing challenges related to interagency contracting, where one agency uses another’s contract or contracting support to obtain goods or services.

We elaborate on these points below.

The Government-wide Competition Rate Has Remained Steady while DOD’s Rate Has Declined

Despite the existence of OFPP memoranda directing agencies to increase competition, we found that competition rates—the percentage of total obligations reported for competitive contracts versus noncompetitive contracts—have remained largely unchanged. We previously reported that, in 2009, OFPP directed agencies to increase competition and reduce their spending on sole-source contracts. However, in 2017, we found that the government-wide competition rates had remained relatively steady, at just below two-thirds of all contract obligations from fiscal years 2013 through 2017.
Furthermore, during the same time period, DOD’s rate declined by over 4 percent, and civilian agency rates increased by 1.6 percent. See figure 2. We have previously identified various factors that affect competition rates, including the government’s preference for a specific vendor, inadequate acquisition planning, and overly restrictive government requirements. We have also identified a number of reasons why DOD’s competition rates have been particularly low:

- In 2017, we found that some companies that had not done business with DOD reported several barriers preventing them from competing for DOD contracts, including the complexity of DOD’s contracting process.
- In 2014, we found that 7 of the 14 justifications in a non-generalizable sample of non-competitive DOD contracts cited the “lack of data rights” as a barrier to competition. Obtaining adequate data rights, such as unlimited rights in technical data, can allow the government to use, modify, and release the technical data used to design, produce, support, maintain, or operate an item, among other things. A long-standing factor impacting DOD’s competition rate has been its reliance on original equipment manufacturers throughout the life cycle of a program because of a previous decision not to purchase adequate data rights.
- In 2013, we found that DOD may be missing opportunities to effectively facilitate competition in future acquisitions for products and services previously acquired non-competitively. We reviewed justifications for why awards were non-competitive and found that some of them provided limited insight into reasons for the noncompetitive award, or did not fully describe actions that the agency could take to bring about competitive awards in future acquisitions of the same goods or services.
We recommended that DOD identify, track, and consider the specific factors that affect competition when setting competition goals and develop guidance to apply lessons learned from past procurements to help achieve competition in the future. We also recommended DOD collect reliable data on one-offer awards. DOD agreed with these recommendations, and implemented them in 2014.

Between 2010 and 2015, DOD’s then-Under Secretary for Acquisition, Technology and Logistics issued a series of Better Buying Power memos intended to promote competition, among other things. For example, some memos provide guidance on the effective management of technical data rights, which can include acquiring rights in data, as appropriate, to avoid future reliance on original equipment manufacturers. In 2017, we found that more large DOD weapon system programs were implementing “Better Buying Power” initiatives among other reforms, which led to better acquisition outcomes for some programs. In 2018, we further found that DOD programs initiated after 2010, and therefore subject to Better Buying Power guidance, gained nearly $5 billion in buying power—which is the amount of goods or services that can be purchased given a specified level of funding. The fiscal year 2018 NDAA directed the Secretary of Defense to ensure that DOD negotiates prices for technical data to be delivered under development or production contracts before selecting a contractor to engineer and manufacture a major weapon system, among other things.

Some Agencies Are Using Non-Competitive Bridge Contracts When Facing Acquisition Planning Obstacles

When an existing contract is set to expire but the follow-on contract is not ready to be awarded, the government may simply extend the existing contract beyond the period of performance (including option years).
Alternatively, an agency may award a new short-term sole-source contract to the incumbent contractor to avoid a gap in service caused by a delay in awarding a follow-on contract. These contract extensions and short-term sole-source contracts are often referred to as “bridge contracts.” Bridge contracts can be necessary tools, but they can also delay opportunities for competition, which we and others have noted is the cornerstone of a sound acquisition process. Additionally, bridge contracts are typically envisioned as short-term, but we found in 2015 that some bridge contracts spanned multiple years, potentially undetected by agency management. For example, of the 29 contracts we reviewed in depth in 2015, six were longer than three years. As figure 3 illustrates, an Army bridge contract for computer support services was initially planned as a 12-month bridge, but because of subsequent bridges, ultimately spanned 42 months.

Obstacles during the pre-award phase, including poor acquisition planning, delayed completion of requirements documents, bid protests, and an inexperienced and overwhelmed acquisition workforce largely drove the use of bridge contracts in the cases we studied. We further found that in the sample we reviewed, increased periods of performance sometimes corresponded to increased contract values, and that—consistent with best practices—agencies paid lower prices in several instances after subsequent contracts were competed. We recommended that OFPP take steps to amend acquisition regulations to incorporate a definition of bridge contracts, and, in the interim, provide guidance for agencies to track and manage their use. OFPP agreed with the recommendation to provide guidance for managing bridge contracts and has drafted management guidance that includes a definition of bridge contracts, but had not finalized it as of July 2018.
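The kind of tracking we recommended can be illustrated with a small sketch. This is purely illustrative, not OFPP's draft guidance: the data structure and the 12-month threshold are hypothetical assumptions.

```python
# Illustrative sketch (not OFPP guidance): tracking cumulative bridge-contract
# duration so that multi-year bridges do not go undetected by management.
def cumulative_bridge_months(bridges):
    """Sum the periods of performance (in months) of an initial bridge and
    any subsequent bridges for the same requirement."""
    return sum(months for _, months in bridges)

def flag_long_bridges(requirements, threshold_months=12):
    """Flag requirements whose total bridged period exceeds a threshold."""
    totals = ((req, cumulative_bridge_months(b)) for req, b in requirements.items())
    return {req: total for req, total in totals if total > threshold_months}

# Mirrors the Army example in the text: a planned 12-month bridge that grew
# to 42 months through subsequent bridges.
requirements = {
    "computer support services": [("initial bridge", 12),
                                  ("second bridge", 18),
                                  ("third bridge", 12)],
}
print(flag_long_bridges(requirements))
```

A check of this shape flags the 42-month cumulative bridge that, in the case we reviewed, had gone unnoticed because each individual extension looked short.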
Some Agencies Have Taken Steps to Improve How They Determine Price Reasonableness, but More Can Be Done

Market research helps agencies obtain knowledge about pricing that can be critical to the government’s ability to determine that prices are fair and reasonable. Market research can include:

- contacting knowledgeable government and industry officials,
- obtaining information about similar items from other agencies,
- querying government-wide databases for contract prices, and
- reviewing the results of recent market research undertaken to meet similar requirements.

However, in 2014, we found that four agencies—DOD, the Department of Homeland Security, the Department of Transportation, and the Federal Aviation Administration—did not leverage many available market research techniques on lower dollar contracts, and, as a result, may have missed opportunities to promote competition. We recommended that the Secretaries of Defense and Homeland Security take action to ensure their acquisition personnel more clearly document the market research activities they conduct, and that the Secretary of Transportation (the Federal Aviation Administration falls under this department) update its market research guidance to include more detail on which elements of market research should be documented. All three agencies agreed with and addressed our recommendations.

In July 2018, we issued a report on DOD’s efforts to determine whether prices are fair and reasonable for commercial items, and we have found that dealing with a limited marketplace and limited price data can be a challenge. Limited market information can hinder contracting officers’ ability to make commercial item and price reasonableness determinations, and the inability to obtain contractor data can compound that difficulty for acquisition staff.
We also found that better information sharing efforts could address some of the challenges, and recommended that DOD develop a strategy to better share commerciality and price reasonableness information across the department. DOD agreed with our recommendation.

GSA Has Developed New Pricing Tools, but Some Agencies and Contractors Are Not Providing GSA Key Data

GSA has developed a number of web-based tools that, according to GSA officials, are intended to enhance contracting officers’ understanding of the basis of contractors’ proposed prices, improve contracting officers’ leverage during contract negotiations, and ultimately reduce the cost of some government contracts. These tools are housed under GSA’s Acquisition Gateway, a website intended to provide federal contracting professionals with access to tools and resources.

- GSA has developed the Contract-Awarded Labor Category (CALC) tool, which is intended to help federal contracting officers find awarded prices to use in negotiations for labor contracts. It currently contains pricing data from professional services and IT contracts.
- GSA has developed an independent cost estimate tool that is intended to help contracting personnel develop cost estimates prior to contract award.
- GSA has developed a Prices Paid Portal to capture how much the government has previously paid for certain goods and services.

Additionally, in 2016, GSA issued a Transactional Data Reporting Rule that requires contractors to report more granular transactional data, including pricing information, to the government. GSA officials told us they anticipate that the collection of this transactional pricing data will greatly enhance the government’s price analyses, and provide pricing data for the Prices Paid Portal. GSA officials also told us that transactional data reporting will provide contracting officers real-time, prices-paid information that should help them conduct market research and negotiate prices faster and easier.
However, GSA officials told us that agencies do not collect and share pricing data in a standardized manner, and that this makes pricing analysis challenging. Furthermore, the Transactional Data Reporting Rule may provide less data than initially expected since GSA has decided to make reporting these data optional for contractors under certain circumstances. According to OMB staff, GSA is also collecting transactional data from all “best-in-class” contracting vehicles—those that are recommended for agency use as part of the OMB-directed category management effort. We will continue to monitor GSA’s efforts to collect pricing data.

Agency Adoption of Consolidated Purchasing Approaches Has Been Limited, Diminishing Potential Savings

As we have reported, category management is a multi-pronged acquisition approach that includes a broad set of strategies such as consolidated purchasing, supplier management, and improving data analysis and information sharing. Federal category management efforts are intended to manage entire categories of spending across the federal government for commonly purchased goods and services in order to maximize the government’s buying power and improve pricing for all federal buyers. In December 2014, OFPP issued a memo that directed GSA to develop guidance to provide agencies with consistent standards for the development and execution of category management.

Category management follows a similar government-wide effort known as strategic sourcing, which also strove to consolidate purchasing activities. According to OMB and GSA guidance, a tenet of strategic sourcing is that higher volume generally translates to lower prices. As we have reported, a key characteristic of strategic sourcing is the use of tiered pricing, where unit prices are reduced as cumulative sales volume increases. Table 1 illustrates an example of a tiered pricing model.
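The tiered pricing mechanic works like a volume-discount schedule: once cumulative government-wide purchases cross a tier boundary, every buyer pays the lower unit price. The sketch below illustrates the idea with hypothetical tier boundaries and prices, not the figures in Table 1.

```python
# Illustrative tiered pricing schedule. Tier boundaries and unit prices
# are hypothetical, not the values from Table 1.
TIERS = [  # (minimum cumulative units to reach the tier, unit price)
    (0, 10.00),
    (1_000, 9.00),
    (5_000, 8.00),
]

def unit_price(cumulative_units):
    """Return the unit price for the tier that cumulative volume has reached."""
    price = TIERS[0][1]
    for floor, tier_price in TIERS:  # tiers listed in ascending order
        if cumulative_units >= floor:
            price = tier_price
    return price

print(unit_price(500))    # volume still in the first tier
print(unit_price(6_000))  # consolidated volume reaches the deepest discount
```

The model also shows why low adoption diminishes savings: if agencies split the same 6,000 units across separate contracts, no single vehicle crosses the 5,000-unit boundary and the deepest discount is never reached.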
As we have reported, it is unclear whether the government will fully realize consolidated purchasing approaches’ potential to reduce prices. We have found that agencies’ adoption of strategic sourcing has historically been low, and that tiered price discounts negotiated with vendors were not reached in most instances. For example, we reported in 2016 that, in fiscal year 2015, federal agencies spent an estimated $6.9 billion on the types of commodities—goods and services—available through federal strategic sourcing initiatives, but they only saved $129 million because of low adoption rates. We estimated the government could have saved $1.3 billion if agencies had directed more spending to strategic sourcing initiatives. See figure 4.

In our 2016 report, we found that agencies’ adoption of the federal strategic sourcing initiatives was low, in part, because individual agencies were not held accountable for complying with their own commitment letters. In these commitment letters, agencies identified how much spending they planned to direct to strategic sourcing vehicles. Additionally, agencies were not held accountable for implementing transition plans that specified timelines for redirecting their relevant spending to strategic sourcing vehicles. In 2016, we made six recommendations to OMB’s OFPP and GSA in order to better promote agency accountability for implementing the strategic sourcing initiatives and category management effort. OMB and GSA have taken actions to address all six recommendations, including a recommendation for OFPP to report on agency-specific targets for the use of category management.

Although agency adoption of strategic sourcing initiatives has been low, we reported in 2012 and 2016 that strategic sourcing has still achieved significant savings for the government, and resulted in savings rates that are comparable to those reported by leading companies.
For example, GSA officials reported that federal agencies directed almost $2 billion of spending through strategic sourcing contracts between fiscal years 2011 and 2015, and achieved an estimated $470 million in savings—which represents an overall savings rate of about 25 percent. By comparison, leading companies typically achieved savings rates between 10 and 20 percent by using strategic sourcing. Since our 2016 analysis of savings under strategic sourcing, category management efforts have continued. OMB staff told us that statistics show early progress in category management.

Progress Made Addressing Interagency Contracting Challenges

Interagency contracting refers to instances when an agency either places an order directly against another agency’s indefinite-delivery contract, or uses another agency’s contracting operation to obtain goods or services. Interagency contracting can leverage the government’s buying power and allow agencies to meet the demands for goods and services efficiently. This method of contracting can reduce the prices the government pays for goods and services when properly managed, but it also poses a variety of risks. In 2005, we reported that DOD used a Department of the Interior contract for information technology to obtain interrogation services quickly during the Iraq War, and, as a result, six task orders for interrogation, screening, and other intelligence-related services were placed on an information technology contract. Our additional work found that interagency contracting deficiencies stemmed from increasing demands on the acquisition workforce, insufficient training, and—in some cases—inadequate guidance, as well as questionable lines of responsibility for key functions such as requirements definition, contract negotiation, and contractor oversight. For these reasons, we added the management of interagency contracts to our High Risk List in 2005.
In 2013, we found that the federal government had made significant progress in addressing challenges involving interagency contracting. Specifically, we found that agencies had adopted new oversight requirements for interagency contracts, and that OMB and GSA had taken steps to improve the reliability of data on interagency contracts, increasing transparency into how agencies used them. Therefore, we removed interagency contracting from our High Risk List in February 2013.

Key Issue Area 3: Contractor Oversight

Issue Area Context

The government uses contracts to procure a wide range of services, some of which warrant increased management attention because there is an increased risk that the contractors may perform tasks reserved for the government. The responsibility for overseeing contractors often falls to contracting officers’ representatives, who are expected to help ensure contractors perform their work in accordance with contractual requirements. Additionally, the Federal Acquisition Regulation (FAR) contains a prohibition on using personal services contracts, which are characterized by the employer-employee relationships they create.

In 2007, the Panel found that uncertainty about inherently governmental functions led to confusion about the necessary amount of contractor oversight, and it raised questions about federal agencies’ capacity to oversee contractors. Additionally, the Panel asserted that the FAR prohibition on personal services contracts should be removed and that new guidance should be provided to define where, to what extent, under what circumstances, and how agencies may procure personal services by contract.

We have found that contracts requiring increased management attention have posed contractor oversight challenges for federal agencies. Agencies across the federal government award contracts requiring increased management attention, such as contracts for professional and management support services.
DOD is not leveraging its annual reports to Congress on its portfolio of contracted services to systematically identify contracts requiring increased management attention. DOD has taken steps to improve the reliability of data on personal services contracts, which could help ensure contractors are supervised appropriately. We elaborate on these points below.

Federal Agencies Are Awarding Contracts Warranting Increased Management Attention at a Steady Rate

There are benefits to using contractors to provide services, such as addressing surge capacity needs and providing needed expertise. But we and OFPP have identified the need for increased management attention on certain types of contracted services. These contracted services include professional and management support services, such as intelligence services and policy development. Additionally, some of these services can be closely associated with inherently governmental functions. In 2009, we found that federal agencies introduce the risk that contractors may inappropriately influence government authority when performing contracts for services “closely associated” with inherently governmental functions.

In 2017, we found that agencies continued to award service contracts warranting increased management attention at a steady rate. See figure 5. From fiscal years 2013 through 2017, the share of government-wide obligations for these services remained consistent for civilian agencies at around 20 percent, and grew for DOD from about 18 percent to 20 percent. OMB has taken steps to help agencies reduce some of the risks associated with contracts warranting increased management attention. In 2011, OMB emphasized the importance of adequate management by government employees when contractors perform work that is closely associated with inherently governmental functions.
For example, OMB directed agencies to employ and train a sufficient number of qualified government personnel to provide active and informed management and oversight of contractor performance where contracts have been awarded for functions closely associated with the performance of inherently governmental functions.

We have found that some agencies face other challenges overseeing their contractors. In 2010 and 2012, we reported that DOD lacked sufficient numbers of adequately trained personnel, including contracting officer’s representatives (CORs), to oversee contractors in contingency operations like those in Afghanistan and Iraq. In 2013, at the Department of Veterans Affairs, we found that heavy workloads and competing demands made it difficult for CORs to effectively monitor contractors and ensure they were executing their work in accordance with contract terms. In addition, we have found that these CORs often lacked the technical knowledge and training needed to effectively oversee certain technical aspects of a contractor’s performance. We recommended that the Department of Veterans Affairs develop tools to help the officials oversee contracts. The department agreed and did so.

DOD Is Not Using Available Information to Inform Contractor Oversight Efforts

In 2008 and again in 2009, Congress mandated that defense and certain civilian agencies start providing annual inventories of certain service contract actions. These inventories can improve agency insight into the number of contractor personnel providing services and the functions they are performing, among other things, and help agencies determine whether any of these functions require increased management attention. Despite the increased reporting requirements, we have found that DOD has not always used available inventory information to improve contractor oversight.
In March 2018, for example, we found that the military departments generally had not developed plans to use the inventory to inform management decisions as required. We did not make any new recommendations at the time, noting that seven of our 18 prior recommendations related to the inventory remained open, including a recommendation for DOD to identify officials at the military departments responsible for developing plans and enforcement mechanisms to use the inventory. In its comments on our March 2018 report, DOD stated it was committed to improving its inventory processes.

DOD Has Taken Steps to Improve the Reliability of Data on Personal Services Contracts

A personal services contract is one that creates an employer-employee relationship between the government and contractor personnel. Because such contracts could be used to circumvent the competitive hiring procedures of the civil service laws, the use of personal services contracts requires specific statutory authority.

Section 809 Panel
In its June 2018 report, the Section 809 Panel suggested eliminating statutory and regulatory distinctions between personal services contracts and non-personal services contracts to increase managerial flexibility in determining how to fulfill requirements.

As of July 2017, we could not verify how often DOD awarded personal services contracts because more than one third (17 of 45) of the contracts we reviewed that had been designated personal services contracts in the government’s primary acquisition-data repository (the Federal Procurement Data System-Next Generation) were incorrectly recorded. DOD concurred with our recommendation to address this issue and has taken steps to do so. As we found in 2017, agencies need accurate information about their personal services contracts in order to ensure that they are supervising contractors’ work appropriately.
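The kind of mismatch described above can be illustrated with a short sketch. This is hypothetical: the record structure, contract IDs, and sample values are illustrative stand-ins, not actual FPDS-NG data or GAO's review methodology.

```python
# Hypothetical consistency check: compare a contract's recorded
# personal-services designation against the designation determined
# from a review of the contract file. All values are illustrative.
records = [
    # (contract_id, designation_in_data_system, designation_per_review)
    ("C-001", "personal", "personal"),
    ("C-002", "personal", "non-personal"),   # incorrectly recorded
    ("C-003", "personal", "personal"),
]

mismatches = [
    cid for cid, recorded, reviewed in records if recorded != reviewed
]
rate = len(mismatches) / len(records)
print(f"{len(mismatches)} of {len(records)} records incorrectly recorded "
      f"({rate:.0%})")
```

A check along these lines flags individual records for correction and yields an overall misrecording rate, the kind of figure (here, 17 of 45) cited in the finding above.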
Key Issue Area 4: Acquisition Workforce

Issue Area Context

The federal acquisition workforce manages and oversees billions of dollars in acquisition programs and contracts to help federal agencies get what they need at the right time and at a reasonable price. The acquisition workforce consists of contracting officers, contracting officer’s representatives, and program and project managers, and may include others such as engineers, logisticians, and cost estimators. A number of governmental organizations play critical roles in assisting agencies in building and sustaining their acquisition workforces. Among these agencies, OFPP provides government-wide guidance on acquisition workforce issues, GSA’s Federal Acquisition Institute promotes the development of the civilian acquisition workforce, and the Defense Acquisition University provides training for DOD’s acquisition workforce. In 2007, the Panel found the federal acquisition workforce was understaffed, overworked, and undertrained. The Panel also found that most agencies were not carrying out appropriate workforce planning activities and had not assessed the skills of their current acquisition workforce or the number of individuals with relevant skills that would be needed in the future. We found that steps have been taken to address acquisition workforce issues, but workforce gaps endure. Congress established the Defense Acquisition Workforce Development Fund (DAWDF) in 2008, which helps DOD recruit, train, and retain acquisition personnel. It has helped DOD close some staffing gaps. The acquisition workforce faces skill gaps due to the increasing complexity of acquisitions, particularly IT acquisition. OFPP, GSA, and DOD have introduced new training programs to help improve the skills of the federal acquisition workforce. Congress and OMB have taken several actions intended to ensure agencies conduct adequate workforce planning, but agencies have not done so consistently. We elaborate on these points below.
The Defense Acquisition Workforce Development Fund Has Helped DOD Close Some Staffing Gaps

In 2008, Congress established the Defense Acquisition Workforce Development Fund (DAWDF), which provides resources for the recruitment, training, and retention of DOD acquisition personnel. In 2017, we reported that, as of September 2016, DOD obligated more than $3.5 billion for these purposes and that DAWDF had helped increase the total size of the DOD acquisition workforce by about 24 percent from 2008 to 2016, among other things. However, DOD did not achieve its growth targets for each of its acquisition career fields. In December 2015, we reported that DOD had exceeded its planned growth for seven career fields by about 11,300 personnel, including the priority career fields of auditing and program management. However, DOD had not reached its growth targets for six other career fields, falling about 4,400 personnel short. These included the additional priority career fields of contracting, business, and engineering. We recommended that DOD issue an updated acquisition workforce plan that includes revised career field goals as a guide to ensure that the most critical acquisition needs are being met. Since that time, DOD has continued to hire more people in its acquisition workforce, including the contracting and engineering career fields. It also issued an updated strategic plan in October 2016. However, as we reported in 2017, the plan does not include workforce targets for each career field, so the sizes of DOD’s current staffing shortfalls, if any, are unclear. DOD officials stated that career field priorities were most appropriately determined by the components rather than at the department level.

Section 809 Panel
In its June 2018 report, the Section 809 Panel made recommendations to improve the resourcing, allocation, and management of the Defense Acquisition Workforce Development Fund (DAWDF).
In 2017, we also reported on the amount of unobligated balances in the DAWDF account that have been carried over from one fiscal year to the next. According to DOD officials, these balances—which totaled $875 million at the beginning of fiscal year 2016—were the result of several factors. For example, DOD officials generally did not begin the process of collecting and distributing DAWDF funds before DOD received its annual appropriations. Other factors that affected DAWDF execution included hiring freezes and imbalances between DOD’s DAWDF requirements and the minimum amount that DOD was required to put into DAWDF. In order to improve fund management, we recommended that DOD officials clarify whether and under what conditions DAWDF funds could be used to pay for personnel to help manage the fund. DOD indicated that it planned to address the recommendation. We continue to highlight DOD acquisition workforce issues in our High-Risk List, through the DOD Contract Management area, because agencies continue to face challenges in maintaining sufficient staff levels and monitoring the competencies of their acquisition workforce. In our 2017 High-Risk report, we determined that DOD should continue efforts to ensure that its acquisition workforce is appropriately sized and trained to meet the department’s needs, among other actions.

Increasingly Complex Acquisitions Are Creating Skill Gaps

The acquisition workforce faces skill gaps due to the increasing complexity of acquisitions, particularly IT acquisitions, according to officials we spoke with for this review. Officials from DOD, GSA, and one industry group indicated that a lack of technical knowledge presents challenges for effectively planning and executing complex IT acquisitions. Additionally, we have reported that the government’s ability to respond to evolving cybersecurity threats depends in part on the skills and abilities of the IT acquisition workforce.
Cross-functional or multidisciplinary teams may help to address the acquisition skill gaps because they can provide a broad range of specialized skills. In 2014, Congress included provisions in FITARA to ensure timely progress by federal agencies toward developing, strengthening, and deploying IT acquisition cadres consisting of personnel with highly specialized skills in IT acquisitions. This legislation followed an initiative OMB started in 2010, when OMB’s United States Chief Information Officer issued a 25-point implementation plan requiring each major IT investment to establish an integrated program team that includes, at a minimum, a dedicated, full-time program manager and an IT acquisition specialist. In 2016, we reported on three characteristics that contribute to the creation and operation of a comprehensive integrated program team. We also found that shortfalls in these characteristics—leadership, team composition, and team processes—had contributed to significant problems in major IT acquisitions.

New Training Opportunities Help Address Skill Gaps

Over the past 10 years, OFPP, GSA, and DOD have introduced new training programs to help improve the skills of the federal acquisition workforce. In fiscal year 2007, OFPP launched two new certification programs for civilian agencies: (1) the program/project managers’ certification, and (2) the contracting officers’ representatives’ certification. In 2011, GSA introduced the Federal Acquisition Institute Training Application System, which includes continuous learning modules, certification modules, and a learning management system. In 2013, OFPP issued a memo requiring all civilian federal agencies to increase use of the system. In 2015, OFPP and the United States Digital Service jointly developed the Digital Information Technology Acquisition Professional Training Program to help make acquisition personnel better IT buyers.
In 2015, GSA established the Center for Acquisition Professional Excellence to improve training for GSA’s own acquisition personnel. In 2016, DOD reported that, since 2008, its Defense Acquisition University had expanded its capacity, with a 28 percent increase in classroom graduates and a 15 percent increase in online training graduates. In addition, DOD reports that its overall acquisition workforce certification level increased from 58.3 percent in fiscal year 2008 to 76 percent in fiscal year 2017. In 2018, OFPP established a new certification program for digital services as part of the overall effort to increase expertise in buying technology.

Gaps Persist in Agency Workforce Planning Efforts

Workforce planning involves identifying critical occupations, skills, and competencies; analyzing workforce gaps; building the capabilities needed to support workforce strategies; and monitoring and evaluating progress toward achieving workforce planning and strategic goals, among other things. Since 2009, Congress and OMB have taken several steps involving agencies’ acquisition workforce planning efforts. In the fiscal year 2009 NDAA, Congress directed OMB to prepare a 5-year Acquisition Workforce Development Strategic Plan for civilian agencies to increase the size of the federal acquisition workforce, among other things. In response, OMB issued the plan in October 2009. From 2011 to 2016, Congress required DOD to develop biennial plans to improve the defense acquisition workforce. However, DOD did not always meet this biennial requirement, issuing an acquisition strategic plan in 2010 and then not issuing another until October 2016. In 2016, we reported that DOD officials cited budget uncertainties as the primary reason for the delay. In July 2016, OMB released its Federal Cybersecurity Workforce Strategy, which cited the need for agencies to examine specific IT, cybersecurity, and cyber-related work roles, and to identify personnel skills gaps.
We have ongoing work reviewing federal agencies’ IT and cybersecurity workforce planning. Nonetheless, we have found gaps in agency workforce planning efforts. In December 2015, we found that DOD had assessed workforce competencies for 12 of its 13 career acquisition fields, but had not established a timeline for reassessing competencies in 10 of those fields to gauge progress in addressing previously identified gaps. We made four recommendations to DOD as a result. DOD concurred with all four recommendations, including the recommendation that the department issue an updated acquisition workforce plan in fiscal year 2016, which DOD implemented. The other three recommendations remain unaddressed as of June 2018, including the recommendation to establish a timeframe for reassessment. Similarly, in 2017, we found that the Department of Homeland Security was continuing to refine its acquisition workforce planning efforts. In April 2017, we reported that the department’s 2016 staffing assessments did not take into account all acquisition-related positions, which could limit its insight into the size and nature of potential staffing shortfalls. Additionally, in November 2016, we found that the five departments in our review—the Departments of Defense, Commerce, Health and Human Services, Transportation, and the Treasury—had not fully implemented key workforce planning steps and activities for IT acquisitions. For example, four of these agencies had not demonstrated an established IT workforce planning process, which should include training for acquisition personnel. In addition, none of these agencies had fully developed strategies and plans to address IT workforce gaps. We recommended that the selected departments implement IT workforce planning practices to facilitate (1) more rigorous analyses of gaps between current skills and future needs, and (2) the development of strategies for filling the gaps. As of June 2018, all five recommendations remain open. 
Key Issue Area 5: Federal Procurement Data

Issue Area Context

The Federal Procurement Data System-Next Generation (FPDS-NG) is the federal government’s primary repository for procurement data. Government officials and others use FPDS-NG for a variety of analytical and reporting purposes, such as examining data across government agencies, providing managers a mechanism for determining where contract dollars are being spent, and populating USASpending.gov, a website that contains data on federal awards. The General Services Administration, with guidance from the Office of Federal Procurement Policy, established and administers FPDS-NG. In 2007, the Panel found that FPDS-NG contained unreliable data at the granular level, did not have appropriate validation rules in place, and lacked appropriate administration. We found that OMB, GSA, and federal agencies have taken steps to improve data reliability, but the government’s primary repository for acquisition data still faces capability limitations. OMB and GSA have taken steps to improve FPDS-NG data quality. FPDS-NG’s current capabilities face limitations. OMB’s IT Dashboard provides detailed information on major IT acquisitions at 26 agencies, but accuracy and reliability issues endure. We elaborate on these points below.

Some FPDS-NG Data Reliability Concerns Endure

From 2008 to 2011, OMB repeatedly directed agencies to take specific actions to improve the quality of the data they report in FPDS-NG. In May 2008, OMB provided agencies guidance on how to verify, validate, and certify their FPDS-NG data. In October 2009, OMB directed agencies to explicitly describe their data quality improvement and validation activities. In May 2011, OMB directed agencies to verify that they have the policies, procedures, and internal controls in place to monitor and improve procurement data quality generally, and that they have similar controls for ensuring that contractors comply with their reporting requirements.
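One basic control of the kind these directives call for is a completeness check that flags records missing required data fields. The sketch below is hypothetical: the field names, sample records, and threshold are illustrative, not FPDS-NG's actual validation rules.

```python
# Hypothetical data-quality control: compute the share of records in
# which every required field is populated. Field names are illustrative.
REQUIRED_FIELDS = ["piid", "naics_code", "obligated_amount", "vendor_name"]

def completeness_rate(records):
    """Fraction of records with all required fields populated."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(field) not in (None, "") for field in REQUIRED_FIELDS)
    )
    return complete / len(records)

sample = [
    {"piid": "A1", "naics_code": "541511",
     "obligated_amount": 5000, "vendor_name": "Acme"},
    {"piid": "A2", "naics_code": "",       # missing NAICS code
     "obligated_amount": 1200, "vendor_name": "Beta"},
]
print(f"Completeness: {completeness_rate(sample):.1%}")
```

Running checks like this before certifying data is one way an agency could support the kind of completeness figures GSA reports.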
Since 2007, GSA has reported improvements in FPDS-NG data quality. Agencies are responsible for developing a process and monitoring results to ensure timely and accurate reporting of contractual transactions in FPDS-NG and are required to submit certifications about the accuracy of contract reporting to GSA. In 2017, GSA reported that these certifications collectively demonstrate that the data in FPDS-NG have an overall accuracy rate of 95 percent. GSA also reports that the overall completeness rate for FPDS-NG data has increased from 98.0 percent in fiscal year 2009 to 99.2 percent in fiscal year 2016. Nonetheless, our work has recently identified data reliability challenges with FPDS-NG data. For example, in 2017 we found that FPDS-NG did not accurately identify some indefinite delivery contracts. In March 2016, we also identified some FPDS-NG data limitations, including the misclassification of some contractors as small businesses, and some incorrect obligations data.

FPDS-NG Capabilities Have Expanded, but Limitations Remain

GSA has updated the FPDS-NG system to expand its capabilities several times since the Panel issued its 2007 report. The most recent version was released in October 2017, and it increased the type of data that could be collected. For example, FPDS-NG now collects more detailed information on women-owned business types, inherently governmental services, and legislative mandates. A previous update in 2009 standardized how FPDS-NG tracks and reports competition data. Despite these changes, FPDS-NG has limitations in the type of acquisition data it can track. For example, in November 2017, we reported that agencies were unable to use FPDS-NG to track and report specific contract award data elements in accordance with OMB guidance because the required data had no corresponding data-entry field in FPDS-NG. We recommended that OMB take steps to improve how agencies collect certain procurement data.
OMB generally agreed, but has not yet addressed the recommendation. Similarly, in 2014 we found limitations in FPDS-NG with regard to tracking small business subcontractors. Specifically, we found that FPDS-NG did not contain data on subcontracts, and was not designed to identify the type of subcontracting plan used or to link small business subcontractors to particular prime contracts. In fiscal year 2020, GSA plans to fully integrate FPDS-NG with nine other legacy systems operated by the agency’s Integrated Award Environment (IAE). IAE was initiated in 2001 to bring together 10 different acquisition data systems into a unified system. GSA, DOD, and OMB staff expect that the IAE will contribute to improved FPDS-NG data reliability and better system governance. Integration with other systems will reduce the need to input the same data multiple times, which creates opportunities for errors. DOD and OMB staff also stated that FPDS-NG is currently managed through the IAE governance model, which offers a clear governance structure, including strategic planning, conflict resolution, and decision-making. OMB’s IT Dashboard Enhances Transparency and Oversight, but Accuracy and Reliability Issues Persist In 2009, OMB deployed a public website, known as the IT Dashboard, to provide detailed information on major IT acquisitions at 26 agencies, including ratings of the IT acquisitions’ performance against cost and schedule targets. Among other things, agencies are to submit investment risk ratings from their CIOs. For more than 6 years, we have issued a series of reports about the IT Dashboard, noting the significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT acquisitions. We have also reported concerns about the accuracy and reliability of IT Dashboard data. We have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of this data and to increase its availability. 
As of March 2018, 19 of the recommendations remain open, including recommendations that agencies factor active risks into their IT Dashboard ratings, and ensure that major IT investments are included on the Dashboard.

Key Issue Area 6: Small Business Participation

Issue Area Context

The federal government has a long-standing policy to maximize contracting opportunities for small businesses. Congress has established, and the Small Business Administration (SBA) maintains, goals for small business participation in federal contracting. SBA also manages several programs targeted at increasing participation by particular business types, including: Small Disadvantaged Businesses, Service-Disabled Veteran-Owned Small Businesses, Women-Owned Small Businesses, and those in Historically Underutilized Business Zones (HUBZone). Agency-specific goals are established through negotiation between SBA and the respective agency. In 2007, the Panel found a number of challenges hindering agencies’ efforts to achieve small business participation goals. In particular, the Panel made recommendations focused on a lack of parity across small business types (identifying that some statutes appeared to prioritize certain small business programs over others), consolidation or bundling of contract requirements, and how small businesses are prioritized under multiple award contracts (contracts awarded to two or more contractors under a single solicitation). We found that small business participation in government contracting has increased over the past few years, but small business advocates report emerging concerns, and agencies struggle with policy compliance. Executive branch agencies have increased small business participation over time. Small business advocates have expressed concerns that category management will reduce the number of small businesses eligible for a given opportunity; the executive branch has taken some steps to address such concerns.
Most agencies have not demonstrated that they are in full compliance with requirements involving their small business offices. SBA has improved how it assesses firms’ eligibility for small business programs, but we found it should do more to oversee its women-owned small business program and its HUBZone program. We elaborate on these points below.

Agencies Have Met More Small Business Goals Over Time

Section 809 Panel
Federal agencies continue to address challenges related to small business participation. For example, the Department of Defense (DOD) did not meet all of its small business goals in 2017. In its January 2018 report, the Section 809 Panel recommended that DOD refocus its small business policies and programs to prioritize the department’s mission, among other things.

Since the Panel issued its report in 2007, Congress and executive branch agencies have continued efforts to encourage small business participation, with improved results over time. In the 2010 Small Business Jobs Act, Congress addressed the three primary small business issues raised by the Panel. These issues included taking action on issues of parity, requiring justifications and reporting for contract bundling, and addressing small business concerns about multiple award contracts, among other things. Meanwhile, executive branch agencies have also taken steps to encourage small business participation. For example: GSA strongly supports small business participation in its Federal Supply Schedules program. The schedule program provides federal agencies a simplified method of purchasing commercial products and services at prices associated with volume buying. GSA set aside some specific schedule categories—such as photographic services and library furniture—for small businesses. GSA also developed a forecasting tool in 2016, intended to give small businesses a preview of upcoming federal contracting opportunities.
In a 2013 rule, SBA clarified how contracting officers should assign small business codes under multiple award contracts. North American Industry Classification System (NAICS) codes are the basis for SBA’s size standards; therefore, the NAICS code that a contracting officer assigns determines whether a firm is eligible for small business set-asides. In its rule, SBA observed that when NAICS codes are assigned to a multiple award contract solicitation, a business concern may be small for one or some of the NAICS codes, but not all. In that situation, an agency could receive small business credit on an order for an award to a “small business” where a firm qualifies as small for any NAICS code assigned to the contract, even though the business is not small for the NAICS code that was assigned or that should have been assigned to that particular order. SBA’s rule stated that, to ensure small businesses receive the awards that are intended for them, contracting officers should assign NAICS codes to discrete components of a contract in certain circumstances. The contracting officers we interviewed stated that assigning a NAICS code can be challenging when more than one code could apply to a contract, and we noted that SBA’s rule may further clarify code assignment for these officials. However, updates to the FAR are required to fully implement SBA’s final rule. This FAR rule-making process is ongoing. In fiscal year 2017, the federal government met three of its five government-wide small business participation goals. This is progress compared to fiscal year 2007, when the government met just one of its five small business goals. While individual agencies’ success varied, there was significant improvement in the number of agencies meeting service-disabled veteran-owned and women-owned small business goals. Additionally, the number of agencies meeting all of their small business goals increased from two to seven.
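The situation SBA's rule addresses (a firm qualifying as small under one NAICS code assigned to a multiple-award contract, but not under the code applicable to a particular order) can be sketched in a few lines. The size standards and receipts figures below are hypothetical, not SBA's actual standards.

```python
# Hypothetical size standards: NAICS code -> annual-receipts ceiling ($M).
# The codes and ceilings are illustrative stand-ins, not SBA's table.
SIZE_STANDARDS = {
    "541511": 30.0,   # hypothetical standard for one contract code
    "541611": 16.5,   # hypothetical standard for the order's code
}

def is_small(firm_receipts_millions, naics_code):
    """True if the firm is under the size standard for the given code."""
    return firm_receipts_millions <= SIZE_STANDARDS[naics_code]

firm_receipts = 20.0  # firm's average annual receipts, $M (hypothetical)

# Small under one code assigned to the contract...
assert is_small(firm_receipts, "541511")
# ...but not under the code applicable to a particular order.
assert not is_small(firm_receipts, "541611")
```

This is why assigning NAICS codes to discrete components of a contract matters: small business credit on an order depends on which code governs that order.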
Meanwhile, HUBZone goals have remained unmet for a majority of agencies. See figure 6.

Small Business Advocates Have Concerns About Category Management

According to OMB guidance, under category management the federal government should “buy as one.” Specifically, agencies are expected to move away from making numerous individual procurements to purchasing through a broader aggregate approach. Small business advocates we spoke with reported a number of concerns about the government-wide category management effort. Because category management includes streamlining the number of available contracts, small business advocates—including officials at DOD and SBA—have told us that they worry the initiative will reduce the number of small businesses eligible for a given opportunity, and that the number of small businesses awarded federal contracts may fall. The executive branch has taken some steps to provide small businesses with contracting opportunities through category management. For example, the category management effort includes a set of cross-agency priority goals that include small business utilization. Another element of category management identifies best-in-class contracting vehicles that are recommended for agency use. Some best-in-class vehicles under category management focus on small business providers, including GSA’s Alliant Small Business vehicle that provides IT solutions. Additionally, in 2015, we found that the Department of Homeland Security’s (DHS) “on-ramp/off-ramp” mechanisms offered an option to help maintain a pool of eligible small businesses by reopening an indefinite-delivery, indefinite-quantity vehicle’s solicitation to new small business vendors after participating businesses outgrew their small size status and left the program. GSA recently reported that two of its small business interagency contracts—OASIS Small Business and 8(a) Stars II—used on-ramp procedures in 2017 and 2018.
However, in 2014 we analyzed small business participation in strategic sourcing, a predecessor to category management, and found that agencies had not implemented OMB requirements to develop performance measures to determine how strategic sourcing initiatives had affected small business participation. As of June 2018, four of the six contracting agencies we reviewed had implemented our recommendation to do so.

Most Agencies Did Not Demonstrate Full Compliance With Small Business Office Requirements

In the Small Business Act, Congress required certain agencies to create and appropriately staff Offices of Small and Disadvantaged Business Utilization (OSDBUs) to advocate for small businesses. Over the years, Congress has amended the requirements on multiple occasions, generally expanding the areas in which an OSDBU must be involved and providing details on how the OSDBU office should function. However, we have found that many agencies have not demonstrated full compliance with a number of requirements related to the functions and duties of these offices, such as establishing a direct reporting relationship between the OSDBU director and the agency head or deputy head, and specifying that the director must have supervisory authority over staff performing certain duties. As we reported in August 2017, noncompliance with these legislative requirements may limit the extent to which an office can advocate for small businesses, and we made recommendations to 19 agencies to come into full compliance with these OSDBU requirements or report to Congress on why they have not. Most agencies that provided comments agreed or partially agreed with the recommendations. As of June 2018, two of the 19 agencies—the National Aeronautics and Space Administration and the U.S. Agency for International Development—had implemented our recommendations.
SBA Has Improved How It Assesses Firms’ Eligibility for Small Business Programs, but Work Remains

Over the past decade, we have identified a number of weaknesses in the processes SBA uses to certify and recertify businesses as being eligible to participate in its selected programs—specifically the HUBZone and women-owned programs, and the 8(a) program for small disadvantaged businesses—and made recommendations to SBA to address them. SBA has taken steps to address these weaknesses, but some remain. In March 2010, we made six recommendations to improve how SBA assesses the continuing eligibility of firms to participate in the 8(a) program, and we have closed all six recommendations as implemented. In 2014, we made two recommendations to improve SBA’s oversight of firms’ participation in its women-owned small business program. We had found that SBA had not yet developed procedures that provided reasonable assurance that only eligible businesses obtained set-aside contracts. Then in 2015, we made two recommendations to improve SBA’s oversight of firms’ participation in the HUBZone program. We had found that SBA lacked an effective way to communicate program changes to small businesses as well as key oversight controls over the process that small businesses used to recertify that they are eligible to participate. The four recommendations in these two reports remained open as of May 2018.

Agency Comments and Third Party Views

We provided a draft of this report to OMB, DOD, GSA, and SBA for review and comment. We received written comments from DOD, which are reprinted in appendix II, and one technical comment via e-mail. OMB and GSA provided technical comments via e-mail. We addressed OMB’s, DOD’s, and GSA’s comments as appropriate. SBA told us that it had no comments on the draft report. We also offered three third-party organizations—two industry groups and the Section 809 Panel—the opportunity to provide their views on sections of the report that relate to them.
They confirmed these sections of the report are accurate. We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, the Secretary of Defense, the Administrator of General Services, the Administrator of the Small Business Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objective, Scope, and Methodology

This report identifies actions the federal government has taken to address the key issues the Acquisition Advisory Panel (the Panel) raised in its 2007 report, and some of the acquisition challenges that remain. To frame the key issues the Panel identified in its 2007 report, we worked with internal subject matter experts and officials from the Office of Management and Budget’s (OMB) Office of Federal Procurement Policy (OFPP), Department of Defense (DOD), General Services Administration (GSA), and Small Business Administration (SBA) to categorize the Panel’s 89 recommendations into six higher-level issue areas, including competition and pricing, acquisition workforce, federal procurement data, and small business participation. To identify progress made and challenges that remain in each of these issue areas, we reviewed relevant GAO reports and testimonies; key legislation such as the Weapon Systems Acquisition Reform Act of 2009 and the Small Business Jobs Act of 2010; acquisition guidance issued by OMB, DOD, GSA, and SBA; and interim reports from the Section 809 Panel, which is addressing acquisition challenges at DOD and plans to issue its final report in January 2019.
We also interviewed officials from OMB, DOD, GSA, and SBA, as well as Section 809 Panel staff. Further, we collected input from members of the Chief Acquisition Officers Council and two industry groups: the Professional Services Council and the Coalition for Government Procurement. The GAO reports cited throughout this report include detailed information on the scope and methodology from our prior reviews. For findings based on analyses of data from the Federal Procurement Data System-Next Generation (FPDS-NG) in our prior work, we updated the previous analyses to include the most recent years available. We reviewed current documentation for FPDS-NG in order to identify any changes that might impact our analyses. We determined that the FPDS-NG data were sufficiently reliable for the purpose of updating previous analyses. We conducted this performance audit from July 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Nathan Tranquilli (Assistant Director), Betsy Gregory-Hosler (Analyst-in-Charge), Holly Williams, George Bustamante, and Brandon Voss made key contributions to this report. Ted Alexander, Cheryl Andrew, Peter Del Toro, Brenna Derritt, Alexandra Dew Silva, Tim DiNapoli, Jennifer Dougherty, Kathleen Drennan, Lorraine Ettaro, Stephanie Gustafson, Dave Hinchman, Javier Irizarry, Justin Jaynes, Julia Kennon, Sherrice Kerns, Emily Kuhn, Heather B. Miller, Angie Nichols-Friedman, Shannin O’Neill, Miranda Riemer, William Russell, Bill Shear, Roxanna T. Sun, and Katherine Trimble also made contributions to the report.
Why GAO Did This Study

In fiscal year 2017, federal agencies obligated more than $500 billion to acquire products and services. These products and services included military aircraft, information technology software, and maintenance services. Amid this large spending, the federal government has taken steps to reform federal acquisitions, increase efficiencies, and improve results. For example, in the Services Acquisition Reform Act of 2003, Congress established the Acquisition Advisory Panel to review federal acquisition laws, regulations, and policies, and identify opportunities for improvement. The Panel issued its final report in 2007, addressing topics that span all three phases of the contracting life cycle identified by GAO: pre-contract award, contract award, and post-contract award. GAO was asked to follow up on the Panel's report and identify progress made since 2007. This report identifies the actions the federal government has taken to address key issues raised in the Panel's report, and the challenges that remain. GAO reviewed documentation and interviewed personnel from federal agencies and the private sector. These personnel included staff from OMB who are responsible for federal procurement policy, as well as staff supporting a panel addressing DOD's acquisition regulations and processes, known as the Section 809 Panel. GAO also leveraged its large body of work on federal acquisitions.

What GAO Found

Congress and the executive branch have taken numerous actions to address key issues the Acquisition Advisory Panel (Panel) identified in its 2007 report, but these actions have not eliminated some enduring challenges. The figure below presents the key issues the Panel addressed in relation to the life cycle of a typical contract as identified by GAO.
Three of the key issues, and the corresponding challenges, align with specific phases in the contracting life cycle:

Requirements Definition: The Panel found that fully identifying requirements before a contract is awarded is key to achieving the benefits of competition. GAO has found that unrealistic requirements have contributed to poor program outcomes at the Department of Defense (DOD), and that the Army's requirements development workforce decreased by 22 percent from 2008 to 2017.

Competition and Pricing: The Panel said that competition can help reduce prices. GAO's work shows that competition rates have remained steady government-wide, and declined at DOD. See figure below. GAO has also found that agencies are sometimes using bridge contracts—which GAO has generally defined as either extensions to existing contracts or new short-term, sole-source contracts—to avoid a lapse in service caused by delay of a follow-on contract award. In some instances, bridge contract awards delay opportunities for competition and can place the government at risk of paying higher prices for multiple years. The figure below depicts how an Army bridge contract for computer support services planned for 12 months was extended to 42 months.

Contractor Oversight: The Panel raised questions about the capacity of federal agencies to oversee contractors. GAO has found that agencies continue to award contracts warranting increased management attention at a steady rate, such as contracts for management support services. With contracts like those for management support services, there is an increased risk that contractors may perform tasks reserved for the government. Additionally, GAO found that heavy workloads at the Department of Veterans Affairs have made it difficult for officials who oversee contractors to ensure contractors adhere to contract terms.
Three of the key issues, and the corresponding challenges, cut across all the phases of the contracting life cycle:

Acquisition Workforce: The Panel found that the federal acquisition workforce faces workload and training challenges. GAO's work has shown that DOD has enhanced its workforce, but some workforce gaps endure at DOD and across agencies.

Federal Procurement Data: The Panel found that the government's primary repository for acquisition data contained some unreliable data. Also, GAO has found that the system has demonstrated limitations. For example, guidance from the Office of Management and Budget (OMB) required that agencies collect specific contract award data, but the system did not have the capability to do so.

Small Business Participation: The Panel found a number of challenges hindering agencies' efforts to meet small business goals. GAO has found small business participation has increased, but many agencies are not in full compliance with requirements governing Offices of Small and Disadvantaged Business Utilization (OSDBUs). For example, the directors of these offices should report directly to agency heads or their deputies, but not all agencies have established this type of direct reporting relationship.

What GAO Recommends

GAO is not making any new recommendations in this report, but it has made numerous recommendations in the past. The agencies have agreed with many of GAO's recommendations, and have implemented some of them but not others. For example, GAO has made the following recommendations.

The Army should assess the resources needed for the requirements development process. The Army agreed, but it has not yet done so.

OMB should provide guidance for agencies to manage bridge contracts. OMB agreed and has drafted management guidance but has not yet finalized it.

Certain federal agencies should take steps to document how they conduct market research. The agencies agreed and did so.
The Department of Veterans Affairs should develop tools to help oversee contracts. The department agreed and did so.

DOD should have issued an updated acquisition workforce plan in fiscal year 2016. DOD agreed and issued the plan.

OMB should take steps to improve how agencies collect certain procurement data. OMB generally agreed, but has not yet addressed the recommendation.

Certain federal agencies should take steps to comply with OSDBU-related requirements. Most agencies that provided comments agreed or partially agreed. Two agencies—the National Aeronautics and Space Administration, and the U.S. Agency for International Development—have addressed the recommendations.

GAO continues to believe the agencies should implement all of these recommendations.
Background

Countering the proliferation of nuclear weapons and other weapons of mass destruction (WMD) remains a U.S. national security priority. According to the 2017 National Security Strategy, terrorist groups continue to pursue WMD-related materials, which pose a grave danger to the United States. As also stated in the 2017 National Security Strategy, Russia’s nuclear arsenal remains the most existential threat to the United States, China’s nuclear arsenal is growing and diversifying, Iran has the potential to renew its nuclear program, and North Korea has pursued nuclear weapons despite international commitments. As the DSB report noted, U.S. monitoring abilities are increasingly challenged by evolving risks in 1) the capability of existing nuclear states and 2) the number of state and nonstate actors possessing or attempting to possess nuclear weapons. U.S. nonproliferation activities are conducted and coordinated across multiple government agencies and organizations, as well as the intelligence community. In addition, these efforts are coordinated with international entities, national laboratories, industry, and academia. U.S. nuclear nonproliferation verification and monitoring efforts are guided by, among other things, U.S. obligations under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and U.S. support for the Preparatory Commission for the Comprehensive Nuclear Test-Ban Treaty Organization (CTBTO). The NPT lays out the respective responsibilities of nuclear-weapon and nonnuclear-weapon states with regard to the transfer, acquisition, possession, control, and manufacture of nuclear weapons. All nonnuclear-weapon states are required to have a comprehensive safeguards agreement with the International Atomic Energy Agency (IAEA) to facilitate IAEA’s safeguards activities.
IAEA safeguards are a set of technical measures and activities by which IAEA seeks to verify that nuclear material subject to safeguards is not diverted to nuclear weapons or other proscribed purposes. Under the Comprehensive Nuclear Test-Ban Treaty (CTBT), which has yet to enter into force, parties agree not to carry out any nuclear explosions. The United States supports the work of the CTBTO to build up a verification regime in preparation for the treaty’s entry into force. The Administration’s fiscal year 2018 plan for verification and monitoring described ongoing interagency efforts to support nuclear proliferation verification and monitoring and includes information about relevant national priorities, capability gaps, R&D initiatives, and roles and responsibilities. The 2018 plan (40 pages) is longer and more detailed than the 2015 plan (2 pages) or the 2017 update (4 pages). The bulk of the 2018 plan is contained in two chapters—one chapter broadly describes U.S. and international efforts and roles and responsibilities, and the other chapter describes ongoing U.S. R&D efforts.

The Administration’s 2018 Plan Generally Addressed the Reporting Requirements but Did Not Identify Costs and Funding Needs

We found the Administration’s 2018 plan provided details on each of the four major reporting requirements called for in the fiscal year 2018 NDAA with the exception of future costs and funding needs (see table 1).

Plan and Roadmap

The first reporting requirement called for a plan and roadmap for verification, detection, and monitoring with respect to policy, operations, and research, development, testing, and evaluation, including—

identifying requirements for verification, detection, and monitoring;

identifying and integrating roles, responsibilities, and planning for verification, detection, and monitoring activities; and

the costs and funding requirements over 10 years for these activities.
We found that the 2018 plan provided detail on verification, detection, and monitoring requirements and roles and responsibilities, but did not provide details on future costs and funding needed to support the activities in the plan. We found that the plan identified requirements for verification, detection, and monitoring as required. To identify these requirements, the plan notes that interagency partners first identified a set of verification and monitoring priorities. From these priorities, they identified a number of technical gaps. The plan then described dozens of examples of R&D efforts and non-technical activities to address those technical gaps. For example, for one gap the plan identifies eight current efforts to address this gap, including continued Department of Energy and NNSA investment in sensor capabilities that are small, light, and able to operate at low power. We found that the plan provided details on the requirement to identify and integrate roles and responsibilities and planning. The plan includes details of the roles and responsibilities of interagency partners and international bodies that cooperate in the nonproliferation realm. For example, the plan describes how the Department of Defense is to support U.S. verification activities under the CTBT, including the installation, operation, and maintenance of U.S. International Monitoring Systems. We found that the plan did not identify costs and funding needs over a 10-year period. NNSA officials stated that they believed providing funding information over a 10-year period is unrealistic for several reasons. First, according to NNSA officials, it is not feasible to achieve agreement on actual or implied budgets outside of the existing President’s budget process. Second, according to NNSA officials, agencies have little influence over the funding priorities of other departments outside of existing budget efforts.
Third, according to NNSA officials, long-term funding estimates are infeasible because the President’s budget only identifies funding levels five years into the future. However, the 2018 NDAA did not ask for budget information. Instead, the NDAA reporting requirement called for long-term costs and funding information necessary to support the verification and monitoring activities in the plan. Finally, NNSA officials told us that they and officials from other agencies briefed the appropriate congressional committees prior to the release of the 2018 plan, and discussed the challenges with providing cost and funding data. According to NNSA officials, they verified with the congressional committees that providing such information in the plan would be impractical. We have previously reported that providing estimates of future costs and funding needs can help congressional decisionmakers prioritize projects and identify long-term funding needs. NNSA as well as other agencies within the federal government already develop plans with long-term funding priorities and cost estimates. For example, in June 2014, we reported on 10-year estimates for sustaining and modernizing U.S. nuclear weapons capabilities. As we found in this and other reports, even when budgets are preliminary or not yet known, plans that include a range of potential estimates help Congress prioritize projects and funding. Because the plan does not include any information on interagency costs and funding needs, it limits 1) congressional understanding of the long-term affordability of the nation’s verification and monitoring efforts and 2) Congress’s ability to make necessary funding and policy decisions. By including in its plan estimates of future costs and funding needed to support the activities in the plan, NNSA could help provide assurance that agencies are allocating appropriate resources to the verification and monitoring effort. 
In addition, including estimates of future costs and funding needs in the plan can help ensure that interagency partners understand the amount of resources necessary to support verification and monitoring efforts, and determine if these resources align with agency activities. We have previously reported on the importance of identifying resources among collaborating agencies; we noted that without information on resource contributions from partners in a collaborative effort, there is less assurance that agency contributions are appropriate to successfully sustain the effort. Similarly, providing information on future costs and funding needs is important to help interagency partners coordinate and develop long-term strategic plans that align with future interagency efforts. We have found that for strategic planning to be done well, plans should demonstrate alignment between activities, core processes, and resources that support mission outcome. By including in its plan estimates of future costs and funding needed to support the activities in the plan, NNSA could help provide assurance that agencies are allocating appropriate resources for interagency efforts and that these resources are aligned with future activities and processes.

International Engagement Plan

The second reporting requirement called for an international engagement plan for building cooperation and transparency—including bilateral and multilateral efforts—to improve inspections, detection, and monitoring activities. We found that the 2018 plan provided detail on this requirement. The 2018 plan reiterates the nation’s commitment to the NPT and includes information on IAEA’s safeguards programs and U.S. support for those programs. For example, under the plan, interagency partners are to continue to encourage countries through diplomatic outreach to conclude Additional Protocol agreements with IAEA.
Research and Development Plan

The third reporting requirement called for the plan to describe current and planned R&D efforts toward improving monitoring, detection, and in-field inspection and analysis capabilities, including persistent surveillance, remote monitoring, and rapid analysis of large data sets; and measures to coordinate technical and operational requirements early in the process. We found that the 2018 plan provided detail on this requirement. The plan includes detail on a wide range of R&D efforts and non-technical efforts that agencies are pursuing. For example, the plan reports that the Defense Advanced Research Projects Agency is starting a program that models millions of nodes and billions of connections to support the detection of WMD proliferation activities. In addition, the plan describes interagency groups involved in coordinating R&D requirements, such as the National Science and Technology Council Subcommittee on Nuclear Defense Research and Development.

Interagency Engagement

The fourth reporting requirement called for the plan to describe the engagement of relevant federal departments and agencies; the military departments; national laboratories; industry; and academia. We found that the 2018 plan provided detail on this requirement. The plan includes detail on the roles and responsibilities for interagency partners, as well as information on interagency organizations and working groups to coordinate efforts and reduce duplication. For example, the plan discusses the Department of State’s efforts to lead the interagency policy process on nonproliferation and manage global U.S. security policy, and the Department of Defense’s support of U.S. diplomatic efforts, including agreements with other defense departments, R&D cooperation, and multinational exercises.

Conclusion

This 2018 plan represents the third effort by Administrations to address the nation’s nuclear proliferation verification and monitoring efforts.
The 2018 plan provides more detail on these efforts than the 2015 plan and 2017 update. However, the plan does not include estimates of future costs and funding needs as required by the fiscal year 2018 NDAA. Costs and funding information can help congressional decisionmakers prioritize projects and identify potential long-term funding needs. Similarly, costs and funding information helps interagency partners understand what resources they are expected to contribute in the future and helps to ensure long-term strategic plans reflect an alignment between resources and interagency activities. By including in its plan estimates of future costs and funding needed to support the activities in the plan, NNSA could help provide assurance that agencies are allocating appropriate resources to the verification and monitoring effort and interagency activities, and that these resources are aligned with future activities and processes.

Recommendation for Executive Action

We are making the following recommendation to NNSA: The Administrator of NNSA should include in its plan for verification and monitoring estimates of future costs and funding needed to support the activities in the plan. (Recommendation 1)

Agency Comments and Our Evaluation

We provided NNSA with a draft of this report for review and comment. NNSA provided written comments, which are summarized below and reproduced in appendix I; the agency neither agreed nor disagreed with our recommendation to include estimates of future costs and funding needed to support the activities in its plan for nuclear proliferation verification and monitoring. However, NNSA stated that it planned no further action with regard to costs and funding data. NNSA also provided technical comments, which we incorporated as appropriate. NNSA stated that it appreciated our recognition of improvements in the 2018 plan for verification and monitoring over the 2015 plan and the 2017 update.
In its written comments, NNSA acknowledged that it did not include interagency cost and funding requirements in the 2018 plan over 10 years as required in the NDAA. The agency stated that it briefed the appropriate congressional committees before the release of the plan about the challenges and feasibility of providing the cost and funding data and received no objections to the omission of the data from the plan. NNSA also stated that it informed us of the briefings. We have added clarification in our report that NNSA officials believed they received agreement from congressional staff to exclude funding and cost estimates from its plan. NNSA stated that the NDAA did not prioritize the relative importance of the reporting requirements, and that we disproportionately weighted the one omission in our assessment, effectively overstating the importance of providing cost and funding information. In addition, NNSA identified challenges to the feasibility of providing interagency out-year cost and funding estimates, including the difficulty of quantifying the level of R&D and associated funding required to achieve specific outcomes and the inability of departments and agencies to commit to aligning 10-year funding estimates with individual agencies’ timelines and internal processes for planning, programming, budgeting, and execution. NNSA’s statement suggests that it views nuclear proliferation verification and monitoring programs as being unique and different from other federal programs and that they should therefore be exempt from estimating their potential long-term resource burden on the federal budget. We disagree. Developing future cost and funding estimates for programs is central to effective interagency planning efforts. The efforts described in NNSA’s 2018 nuclear verification and monitoring plan span a diverse range of activities that are implemented across multiple agencies.
The absence of cost and funding estimates for these efforts in NNSA’s plan raises questions as to whether there is an effective interagency process to coordinate these efforts and whether the process is taking adequate account of resource constraints and making realistic assessments of program resource needs. In addition, information on future cost and funding estimates of federal programs provides Congress with a better understanding of the potential long-term funding needs and costs of the diverse efforts supporting the proliferation verification and monitoring mission. We believe this big-picture view is important given the multiple congressional committees of jurisdiction—including appropriations, authorization, and oversight committees—for the efforts identified in NNSA’s plan. Regarding the feasibility of providing 10-year cost and funding estimates, we recognize the difficulty and uncertainty agencies face in estimating future funding needs. However, we do not believe developing such estimates is impossible. As we reported, the Department of Defense (DOD) and the Department of Energy (DOE) prepare an annual plan with 10-year cost and funding estimates for their ongoing nuclear sustainment and modernization efforts, including R&D efforts. NNSA also provided general technical comments addressing our findings on the cost and funding estimates that were not included in the plan, including comments on NNSA’s authority to obtain 10-year estimates from other agencies, and on the examples we cited of other interagency plans that include similar estimates. NNSA stated that it did not have authority to require other agencies to submit 10-year budget estimates for their efforts that are included in the plan. We noted in our report that Congress directed the President to include this element in the nuclear proliferation verification and monitoring plan. However, responsibility to prepare and submit the plan was delegated by the President to DOE.
NNSA commented that the joint DOD-DOE annual nuclear sustainment and modernization plan is not comparable to the NNSA plan because the former primarily addresses capital projects and other material products, while the latter primarily addresses R&D activities. The reporting requirements for NNSA’s nuclear proliferation verification and monitoring plan were not limited to R&D efforts, but included cost and funding estimates for related activities and capabilities, including policy, operations, testing, and evaluation. NNSA’s comment focuses only on the difficulty of addressing cost and funding estimates for one aspect (R&D) of nuclear proliferation verification and monitoring and ignores the possibility that estimates for non-R&D efforts may be more feasible and less difficult to report. Moreover, we have reported that the joint DOD-DOE plan on nuclear modernization includes 10-year DOD and DOE estimates for R&D, as well as estimates for related modernization efforts, including infrastructure, nuclear weapon life extension programs, delivery systems, nuclear command, control, and communications systems, and other related activities. We are sending copies of this report to the appropriate congressional committees, the Administrator of NNSA, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Energy

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, William Hoehn (Assistant Director), Dave Messman (Analyst-in-Charge), Alisa Carrigan, Antoinette Capaccio, Ben Licht, Steven Putansu, and Gwen Kirby made significant contributions to this report.
Why GAO Did This Study

Countering the proliferation of nuclear weapons is a national security priority that is challenged by weapons advances from existing nuclear states and other actors possessing or attempting to possess nuclear weapons. To help address these issues, Congress directed the Administration in 2015 and 2017 to develop a plan for verification and monitoring relating to the potential proliferation of nuclear weapons, components of such weapons, and fissile material. GAO reviewed the first plan submitted to Congress in 2015, and an update submitted in 2017. GAO reported in March 2018 that this plan and update generally did not address the congressionally mandated reporting requirements. In the fiscal year 2018 NDAA, Congress directed the Administration to develop another plan and included a provision for GAO to review the plan. This report assesses whether the Administration's new plan provided details on the reporting requirements included in the NDAA. To determine whether the plan provided details on the reporting requirements, GAO reviewed the fiscal year 2018 plan and assessed whether the plan included details for each of the elements as required by the NDAA.

What GAO Found

GAO found that the 2018 plan provided details on most of the reporting requirements in the National Defense Authorization Act (NDAA) for Fiscal Year 2018, but did not include information on future costs and funding needs (see table below). In the NDAA, Congress directed the President to produce a plan that would address four reporting requirements: (1) a plan and roadmap on verification, detection and monitoring efforts, including details on costs and funding needs over 10 years, (2) an international engagement plan, (3) a research and development plan, and (4) a description of interagency engagement. The National Nuclear Security Administration (NNSA), a separately organized agency within the Department of Energy, developed the plan and submitted it to Congress in April 2018.
According to NNSA officials, NNSA did not include long-term costs and funding needs in the plan because identifying these needs is unrealistic for several reasons, including because agencies have little influence over the spending priorities of other departments outside of the President's budget process. However, NNSA and other agencies already develop plans with long-term funding priorities and cost estimates for other programs. Because the plan does not include any estimates on future costs and funding needs, it limits congressional understanding of the long-term affordability of the nation's verification and monitoring efforts and Congress's ability to make necessary funding and policy decisions. GAO has previously reported that providing estimates of future costs and funding needs can help congressional decisionmakers prioritize projects and identify long-term funding needs. By including in its plan estimates of future costs and funding needed to support the activities in the plan, NNSA could help provide assurance that agencies are allocating appropriate resources to the verification and monitoring effort and that these resources are aligned with future activities and processes.

What GAO Recommends

GAO recommends that the Administrator of NNSA include in its plan estimates of future costs and funding needed to support the activities in the plan. NNSA neither agreed nor disagreed with the recommendation, but said it planned no further action. GAO maintains that the recommendation is valid.
Background

Placements for Children Entering and Exiting Foster Care

Children enter foster care when they have been removed from their parents or guardians and placed under the responsibility of a child welfare agency. Reasons for a child’s removal can vary, though 61 percent of nearly 275,000 removals during fiscal year 2016 involved neglect and 34 percent involved drug abuse by the parent(s), according to the most recent available HHS data. Child welfare agencies most commonly place children with unrelated foster parents, with relatives, or in congregate care settings. Coordinating placement and support services for these children, such as physical and mental health services, education, child care, and transportation, is typically the responsibility of child welfare agency caseworkers. Caseworkers may also coordinate placements for children exiting foster care, which most commonly include reunifications with the child’s parents or permanent placements through adoption, legal guardianship, or other living arrangements with a relative. Children who age out of the foster care system without a permanent placement with a family may receive transitional supports, such as housing and job search services. Children placed in foster families—including unrelated foster parents, relatives, and fictive kin (e.g., close family friends who are not relatives)—live in the family’s home and are typically incorporated into an existing family structure. For example, these families may include biological children and other children in foster care. Families may receive a payment from the child welfare agency to help cover the costs of a child’s care, as determined by each state. Families who are trained to provide therapeutic foster care services are supervised and supported by qualified program staff to care for children who need a higher level of care.
Therapeutic foster care families may have fewer or no other children in the home, and parents in these families may be required to provide a higher level of care and supervision for the child. In addition, the payment provided to these families may be higher.

Responsibilities for Recruiting and Retaining Foster Families

States are primarily responsible for administering their child welfare programs, consistent with applicable federal laws and regulations. Their responsibilities include recruiting and retaining foster families and finding other appropriate placements for children. In recruiting foster families, states generally require that families undergo a licensing process that includes a home study to assess the suitability of the prospective parents, including their health, finances, and criminal history, and take pre-service training on topics such as the effects of trauma on a child’s behavior. In retaining foster families, states may provide support to families, such as through ongoing training classes and regular visits from child welfare agency caseworkers if a child is placed in their home. State and county child welfare agencies may work with private foster care providers, commonly through contracts, to help them administer child welfare services. Private providers can include non-profit and for-profit organizations that provide a range of public and private services in addition to foster care, such as residential treatment, mental health, and adoption services. For foster care, private providers may be responsible for recruiting foster families, which may involve identifying prospective foster parents, providing information on and helping with the licensing process, and conducting home studies and training.
If the child welfare agency places a child with a foster family working with a private provider, the private provider may also be responsible for activities that can help retain foster families, such as conducting regular visits with the family (in addition to visits from child welfare agency caseworkers) and helping them access needed services. Child welfare agencies may pay these providers based on the number of children placed. This payment may include an administrative payment to the private provider, as well as a payment that the private provider passes on to the foster family to help cover the costs of a child’s care. Child welfare agencies and private providers may also work with other entities to recruit and retain foster families. For example, they may collaborate with community partners, such as faith-based organizations and schools, to share information about foster care and recruit families. Child welfare agencies and private providers may also work with direct service providers, such as hospitals and community-based mental health clinics, to obtain services to support children in foster care and their foster families, which can help retain these families.

Federal Supports Related to Recruiting and Retaining Foster Families

HHS’s Administration for Children and Families (ACF) administers several federal funding sources that states can use to recruit and retain foster families, in addition to state, local, and other funds. For example, funding appropriated for title IV-E of the Social Security Act makes up the large majority of federal funding provided for child welfare, comprising about 89 percent of federal child welfare appropriations in fiscal year 2017 (approximately $7 billion of nearly $7.9 billion), according to ACF. These funds are available to states to help cover the costs of operating their foster care, adoption, and guardianship assistance programs.
For example, in their foster care programs, states may use these funds for payments to foster families to help cover the costs of care for eligible children (e.g., food, clothing, and shelter) and for certain administrative expenses, including recruiting and training prospective foster parents. Title IV-E funds appropriated specifically for foster care programs totaled about $4.3 billion in fiscal year 2017, comprising about 61 percent of title IV-E funding, according to ACF. In addition, title IV-B of the Social Security Act is the primary source of federal child welfare funding available for child welfare services. States may use these funds for family support and family preservation services to help keep families together and reduce the need to recruit and retain foster families. Such services can include crisis intervention, family counseling, parent support groups, and mentoring. States may also use title IV-B funds to support activities to recruit and retain foster families. Federal appropriations for title IV-B comprised about 8 percent of federal child welfare appropriations (approximately $650 million of nearly $7.9 billion) in fiscal year 2017, according to ACF. ACF is responsible for monitoring states’ implementation of these programs. For example, ACF monitors state compliance with title IV-B plan requirements through its review of states’ 5-year Child and Family Services Plans and Annual Progress and Services Reports. Child and Family Services Plans set forth a state’s vision, goals, and objectives to strengthen its child welfare system, and Annual Progress and Services Reports provide annual updates on the progress made by states toward those goals and objectives. Child and Family Services Plans are required for a state to receive federal funding under title IV-B, and document the state’s compliance with federal program requirements. 
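The appropriation shares cited above follow directly from the dollar figures ACF reported; a minimal sketch of the arithmetic, using the rounded amounts from the report:

```python
# Federal child welfare appropriations, fiscal year 2017 (rounded figures per ACF)
total_child_welfare = 7.9e9   # nearly $7.9 billion in total federal child welfare appropriations
title_iv_e = 7.0e9            # approximately $7 billion appropriated for title IV-E
title_iv_e_foster = 4.3e9     # about $4.3 billion of title IV-E appropriated for foster care
title_iv_b = 650e6            # approximately $650 million appropriated for title IV-B

share_iv_e = title_iv_e / total_child_welfare        # ~0.89, i.e., about 89 percent
share_foster = title_iv_e_foster / title_iv_e        # ~0.61, i.e., about 61 percent
share_iv_b = title_iv_b / total_child_welfare        # ~0.08, i.e., about 8 percent

print(f"title IV-E share of child welfare appropriations: {share_iv_e:.0%}")
print(f"foster care share of title IV-E appropriations:   {share_foster:.0%}")
print(f"title IV-B share of child welfare appropriations: {share_iv_b:.0%}")
```

Because the inputs are rounded to the nearest $0.1 billion, the computed shares match the report's percentages only approximately.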
One requirement is that states must describe in their plans how they will “provide for the diligent recruitment of potential foster and adoptive families that reflect the ethnic and racial diversity of children in the State for whom foster and adoptive homes are needed.” In addition, ACF conducts Child and Family Services Reviews, generally every 5 years, to assess states’ conformity with requirements under these federal programs. These reviews involve case file reviews and stakeholder interviews, and are structured to help states identify strengths and areas needing improvement within their agencies and programs. States found not to be in substantial conformity with federal requirements must develop a program improvement plan and undergo more frequent review.

States Employ a Range of Strategies to Recruit Foster Families and Nearly All Use Private Providers to Recruit, Particularly for Therapeutic Foster Care

States Recruit Foster Families by Searching for Relatives, Conducting Outreach, Targeting Certain Populations, and Obtaining Referrals

Searching for Relatives or Fictive Kin

In addition to the diligent recruitment requirements under title IV-B of the Social Security Act, states receiving federal foster care funds under title IV-E are generally required to search for relatives when a child enters foster care. In the three selected states—California, Georgia, and Indiana—child welfare officials said their first priority is to recruit relatives or fictive kin to care for children entering foster care, when appropriate. Officials in California and Georgia discussed recent initiatives to expand the search for relatives and fictive kin for children already in foster care. For example, county child welfare officials in California said they contracted with a private provider, which they also use to recruit and retain foster families, to conduct these searches.
This particular private provider told us that they can access the child welfare agency’s case management system to review information about each child to determine which relatives or fictive kin have already been contacted. The private provider said they may contact these relatives or fictive kin to see whether circumstances have changed such that they would now be able to care for the child. In addition, the private provider said they may use existing contacts, social media, and an identity search program to locate additional relatives or fictive kin for a child. This private provider reported that from July to September 2017, their searches yielded 36 additional relatives or fictive kin, on average, for each of the 23 children in one county for whom the private provider conducted a search. In addition, officials in Georgia said they initiated pilot projects in two regional offices to train staff on how to search for relatives and fictive kin.

Community Outreach

Community outreach to a broad population of prospective foster families is a moderately or very useful recruitment strategy, according to 36 states that responded to our survey. In addition, child welfare officials and 11 of the 14 private providers in the three selected states said they engage in community outreach events to recruit prospective foster families. For example, they said they attend local events (e.g., state fairs) or visit local organizations (e.g., faith-based organizations or schools) to provide information about becoming a foster parent. One private provider said they attend local markets and summer festivals to talk with prospective families and provide them with informational materials. Another private provider said they hold meetings for prospective foster parents to answer questions and provide additional information about foster care and the role of the private provider.
In addition, 20 states reported in our survey that marketing campaigns, such as mailings and media advertisements, are a moderately or very useful recruitment strategy. In the three selected states, child welfare officials and 12 of the 14 private providers said they use different forms of media, such as newspapers, radio, television, billboards, social media, or printed advertisements, to solicit foster families. Child welfare officials we interviewed in Georgia and Indiana said they have implemented statewide media campaigns that incorporate both traditional and digital media. Officials in Georgia told us the campaigns have successfully increased inquiries through the agency’s website and toll-free phone line. A private provider in one county said they worked with a marketing firm to create advertisements that were shown in movie theaters, which also resulted in additional inquiries from prospective families. With regard to therapeutic foster care services, private providers we spoke with in both of our discussion groups said they use strategies such as yard signs, television commercials, and social media to recruit therapeutic foster care families.

Targeted Recruitment

In our survey, nearly all states reported having targeted recruitment strategies as part of their recruitment plans or practices, such as strategies that focus on certain populations of prospective foster parents (e.g., those in faith-based communities or of a certain race), families for certain populations of children in foster care (e.g., teenagers and sibling groups), and families living in specific geographic locations. To help inform their recruitment strategies, 39 states reported in our survey that they collect and use information on children awaiting placement, such as their backgrounds and service needs, and 31 states reported that they collect and use information on available foster families, such as their preferences for placements and where they are located.
In the three selected states, child welfare officials and 8 of the 14 private providers we interviewed said they use targeted recruitment to identify prospective foster families. In addition, child welfare officials and five private providers said they collect or use demographic data on children needing placement and available foster families to inform their efforts. For example, child welfare officials in one county said they use data to target recruitment efforts in the neighborhoods where children entered foster care. Similarly, one private provider told us they use data on the demographics of successful foster families to target recruitment efforts toward those types of families, such as social workers and parents whose children have grown up and left home (i.e., “empty nesters”). Targeted recruitment can be a particularly useful strategy to identify families who can provide therapeutic foster care services for children who need a higher level of care, such as those who have severe mental health conditions or who are medically fragile. In the three selected states, child welfare officials and four private providers said they use targeted recruitment strategies to search for families who can provide therapeutic foster care services. For example, child welfare officials in one state said they focus on recruiting individuals with specific skillsets, such as doctors and nurses who have experience working with children who need more care. Private providers in both of our discussion groups also said they use targeted recruitment strategies for these purposes.

Referrals from Current Foster Families

When asked in our survey about the usefulness of various recruitment strategies, states most often cited referrals from current foster families as a moderately or very useful recruitment strategy.
In the three selected states, child welfare officials and all 14 private providers said they use referrals from current foster families to recruit new families, and the majority of these officials and private providers said such referrals are the most effective recruitment strategy. One private provider emphasized that current foster families are better recruiters than private providers because these families can speak from first-hand experience about the potential benefits and difficulties of caring for a child in foster care. Another private provider said that referrals occur through regular interactions in the community or through information meetings and events facilitated by private providers, such as movie nights. To encourage referrals, 6 of the 14 private providers in the three states said they offer financial incentives to current foster families who help recruit new families. For example, three of these private providers said they offer incentives ranging from $100 to $500. In regard to therapeutic foster care services, private providers in both of our discussion groups said referrals are the most effective recruitment strategy. Private providers in one group said they offer financial incentives ranging from $200 to $300, which generally are paid after a new family becomes licensed to provide therapeutic foster care services and a child has been placed in their home. Eight of the 14 private providers in the three selected states said they try, in general, to employ multiple types of recruitment strategies. Further, many of these private providers explained that prospective foster parents typically hear about foster care through multiple mediums before applying to become a parent. For example, a prospective parent might hear a radio advertisement, then see a billboard, and later talk to a private provider at a state fair before deciding to apply. 
Foster parents we spoke with in the three states, as well as in discussion groups on therapeutic foster care services, discussed a number of reasons why they became foster parents, including knowing others who had provided foster care, having the desire to give back, and wanting to expand their family by fostering with the intention to adopt a child (see text box).

Almost All States Reported Using Private Providers to Recruit Foster Families, Particularly for Therapeutic Foster Care

In our survey, 49 states reported using private providers to recruit foster families, including 44 that use private providers to recruit families who can provide therapeutic foster care services for children who need a higher level of care. Specifically, 30 states reported that they use private providers to recruit both traditional and therapeutic foster care families, 14 reported that they use private providers to recruit therapeutic foster care families exclusively, and the remaining 5 reported that they use private providers to recruit traditional foster families exclusively. In the three selected states, child welfare officials said they initially developed agreements with private providers to recruit families who can provide therapeutic foster care services. However, as state caseloads have risen, these officials said they have also referred children who do not need therapeutic foster care services to private providers. Child welfare officials and private providers in the three selected states said that private providers in their states are responsible for both recruiting and retaining foster families. They said responsibilities of private providers can include helping families become licensed, suggesting possible matches between children and available families, and providing support to help families access services needed to care for children in foster care (see fig. 1).
Child welfare officials and private providers in the three selected states described ways they have collaborated to recruit foster families, and discussed the benefits of using private providers to recruit and retain these families. For example, child welfare officials in one county said they collaborated with private providers to create common marketing materials that included information about the child welfare agency and each private provider, which helps prospective foster families decide which entity they want to work with. Officials and private providers in this county said collaborative recruitment efforts are an efficient use of resources and reduce competition in recruiting from the same pool of prospective foster families. Nearly all of the 14 private providers we interviewed in the three selected states said they can help child welfare agencies support foster families, particularly those who care for children who need more care than others, because they can maintain lower caseloads and be more accessible to families than child welfare agencies. These private providers explained that they accept placements for children only when they have available foster families and staff, whereas child welfare agencies cannot choose how many children they have in their caseloads. Specifically, four private providers noted that private providers typically maintain small caseloads, such as 10 children per private provider caseworker. In contrast, seven private providers said child welfare agencies manage larger caseloads—as high as 40 children per caseworker—which can strain their ability to support foster families. In addition, eight private providers said families can contact them 24 hours a day, which may not be the case with child welfare agency caseworkers. All of the 49 states that reported using private providers in our survey also reported having various oversight mechanisms to monitor them. 
These mechanisms include periodic audits and site visits, regular calls for information sharing, periodic check-ins with foster families working with private providers, and requirements for providers to develop recruitment plans. Child welfare officials in the three selected states provided detail on a range of oversight activities. For example, child welfare officials in Georgia said their agency conducts comprehensive audits of private providers annually, which include an examination of the facility, case file reviews, and staff interviews. In addition, county child welfare officials in California said their agency requires private providers to attend monthly meetings with agency staff and submit quarterly outcome reports.

States Reported Various Recruitment and Retention Challenges, Including Difficulties Prioritizing Recruitment Efforts and Supporting Foster Families

In Recruiting Foster Families, States Reported Challenges with Prioritizing Efforts, Extensive Licensing Processes, and Finding Families Who Can Meet the Needs of Children

Difficulties Prioritizing Recruitment Efforts

In response to our survey, 34 states reported that limited resources to focus on foster family recruitment made their recruitment efforts moderately or very challenging. In the three selected states, child welfare officials raised concerns about their ability to prioritize foster family recruitment efforts, given large increases in their foster care caseloads and other demands for resources. Nationwide, caseloads increased by over 10 percent from fiscal years 2012 through 2016, according to HHS data. In addition, 8 of the 14 private providers in the three states told us that a lack of dedicated funding for recruitment from child welfare agencies made recruitment efforts challenging. One private provider said they have recently put recruitment efforts on hold to focus on serving children in existing placements.
States also reported in our survey that eligibility requirements for federal foster care funding have affected their ability to prioritize resources for recruitment. Specifically, of the 34 states that provided a response on this issue, almost half reported that requirements that tie eligibility for receiving federal funds under title IV-E of the Social Security Act to income eligibility standards under the discontinued Aid to Families with Dependent Children program have affected their recruitment efforts to a moderate or great extent. States may use title IV-E funds to assist with the costs of operating their foster care programs, and are generally entitled to receive these funds based on the number of eligible children they have in their programs. To be eligible for title IV-E foster care funds, a child must have been removed from a home that meets income eligibility standards under the Aid to Families with Dependent Children program as of July 1996, among other criteria. The Aid to Families with Dependent Children program was replaced by the Temporary Assistance for Needy Families program beginning in 1996, and the income eligibility standards for title IV-E foster care funding have not been changed since then. We reported in 2013 that a family of four had to have an annual income below $15,911 to meet the income eligibility threshold in 1996. If adjusted for inflation, the threshold would have been $23,550 in 2013. Due, in part, to fewer families meeting these income eligibility standards, we found that the number of children who currently meet title IV-E eligibility requirements has declined. As a result, we reported that states have received less federal funding under title IV-E and have paid an increasingly larger share of funds for their foster care programs. 
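The inflation adjustment behind the $15,911 and $23,550 figures is a standard CPI calculation; a minimal sketch is below. The CPI-U index values are approximate annual averages supplied here as assumptions for illustration, so the result only roughly matches the figure GAO reported in 2013:

```python
# Adjust the 1996 title IV-E income eligibility threshold into 2013 dollars
# using the Consumer Price Index for All Urban Consumers (CPI-U).
# Index values below are approximate annual averages (assumed for illustration).
cpi_1996 = 156.9
cpi_2013 = 233.0

threshold_1996 = 15_911  # annual income threshold for a family of four in 1996

# Scale the 1996 dollar amount by the ratio of price levels.
threshold_2013 = threshold_1996 * (cpi_2013 / cpi_1996)

print(f"1996 threshold expressed in 2013 dollars: ${threshold_2013:,.0f}")
```

The computed value lands near GAO's reported $23,550; small differences reflect rounding of the index values and the specific CPI series and months used.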
The percentage of children eligible for title IV-E foster care funds decreased from about 54 percent in fiscal year 1996 to nearly 39 percent in fiscal year 2015, according to data published by the Congressional Research Service (see fig. 2). Given fiscal constraints, child welfare agencies, like other state agencies, may need to make difficult choices about how to allocate their limited resources.

Extensive Licensing Processes

The process for licensing foster families can help ensure that children are placed in safe and stable environments that meet their needs. However, 35 states reported in our survey that lengthy licensing processes made it moderately or very challenging to recruit new foster families. In the three selected states, child welfare officials and 7 of the 14 private providers discussed extensive state licensing processes that may discourage prospective foster families, including delays in getting fingerprints, completing background checks, or reviewing applications. Some private providers said delays are likely caused by competing priorities at state licensing agencies or limited staff in child welfare agencies. One private provider told us that families may wait several months for approval after completing an application. Another private provider told us that in the past year, approval time frames for licenses have, in some cases, increased from 1 to 2 weeks to 3 to 6 months. In regard to therapeutic foster care services, private providers in both discussion groups raised similar concerns (see text box). Child welfare officials in California told us they are in the process of restructuring their licensing process to improve efficiencies and reduce burden for foster families.
In addition, county child welfare officials in the state told us they are offering families additional support to help them through the licensing process, such as assigning staff to prospective foster families as soon as they initiate the licensing process to help them complete required paperwork and schedule pre-service training.

Finding Families Who Can Meet the Needs of Children and Other Challenges

In response to our survey, states reported difficulties finding families who can meet the needs of children, particularly for therapeutic foster care services. Specifically, 37 states reported that the needs of children entering foster care have increased, and 35 reported that there are not enough foster families willing to care for the types of children needing placement. For example, nearly all states cited difficulties finding families for children with aggressive behaviors and severe mental health needs, as well as for teenagers and sibling groups. Consequently, 36 states reported difficulties appropriately matching children with families, and 30 reported having moderately or significantly too few therapeutic foster care families (see text box). In the three selected states, child welfare officials and 7 of 14 private providers discussed similar challenges finding appropriate families for children needing placement. For example, officials in one state said the increased demand for both traditional and therapeutic foster care families has caused them to place children in the first available home rather than match them with families based on the family’s preferences and ability to provide care. One private provider told us that due to the increasing number of referrals for placements, they are not able to be as selective during the matching process as they have been in the past.
Another private provider said child welfare agencies may be so pressed to find placements for children that they may call foster families working with the private provider directly, which can put pressure on the family to agree to the placement even when the family does not believe the child is a good fit. One private provider told us that a foster family accepted a child who had been sleeping in the child welfare agency caseworker’s office, but the placement was not a good fit and was eventually disrupted, which was traumatic for both the child and the foster family. Private providers in both of our discussion groups said finding families willing to provide therapeutic foster care services to children can be difficult. They noted that parents may be required to take on more documentation and supervision responsibilities for a child who requires a higher level of care and complete more intensive training, which may be difficult for working parents. In addition to challenges finding appropriate families for children, 34 states reported in our survey that a negative perception of foster care made it moderately or very challenging to recruit new families. Child welfare officials in two states and 5 of the 14 private providers we interviewed raised similar concerns. For example, child welfare officials in one county told us that they recruit foster families in an environment where media reports have highlighted challenges with overburdened caseworkers and turnover of agency directors. These officials also said foster parents may share negative experiences with family and friends, leading to an unfavorable impression of child welfare agencies within the community. In addition, child welfare officials in one state and four private providers said some families who provide foster care services have faced false allegations of child abuse and subsequent investigations. 
Some private providers said these investigations can be emotionally draining or disruptive to the family, and some said that fear of such allegations and investigations may deter prospective families from becoming a foster family. Other recruitment challenges cited by several child welfare officials, private providers, and foster parents we interviewed included concerns by prospective foster families about caring for children who have high needs or who are certain ages, or that providing foster care will disrupt their nuclear family. While many child welfare officials and private providers we spoke with acknowledged these negative perceptions and fears, parents in all eight foster parent groups we interviewed in the three states also discussed how being a foster family can be a positive experience. For example, several foster parents said providing foster care to different types of children has enhanced their family. Private providers and foster parents also said it is important to share personal experiences to bring understanding about what it is like to be a foster family. For example, one foster parent told us about a blog she writes to describe normal family activities that include children in foster care, such as taking family trips.

In Retaining Foster Families, States Reported Challenges with Inadequate Support for Families and Limited Access to Services for Children

Inadequate Support for Foster Families

In response to our survey, 29 states reported that inadequate support for foster families from the child welfare agency made it moderately or very challenging to retain these families. In the three selected states, all 14 private providers we interviewed and foster parents in all eight of the foster parent groups we spoke with emphasized the importance of supporting families in order to retain them.
All 14 private providers discussed concerns about communication with child welfare agencies, which they said can affect the quality of services they provide to foster families. For example, 10 of the private providers said they have difficulty contacting or receiving a response from child welfare agency caseworkers when they try to obtain information needed to comply with child welfare agency requirements. One private provider explained that they are required to develop a service plan for each child they place with a family, and the plan must be signed by the child welfare agency caseworker within 5 days of placement. However, this private provider said they often cannot reach the caseworker to have plans reviewed and approved within the required time frame. Seven private providers told us that there often is confusion on the part of child welfare agency caseworkers about the role of private providers. For example, these private providers said child welfare agency caseworkers may not know which tasks the private providers are responsible for or may be unfamiliar with the paperwork they need to give to the private provider. Similarly, foster parents in five groups expressed dissatisfaction with the level of support they have received from child welfare agency caseworkers. These foster parents described instances in which they were unable to reach their caseworker during emergencies, such as when they needed permission to administer medications to their foster child. One foster parent told us she had waited approximately 8 weeks for her caseworker to approve her child’s medication. This parent said she worked with her private provider to email the child welfare agency caseworker on a daily basis, but received no response. Foster parents in our discussion group raised similar concerns (see text box). Reasons why child welfare agency caseworkers may be limited in their ability to support foster families can include high caseloads and caseworker turnover. 
For example, 33 states reported in our survey that having too few staff and inadequate funding made it moderately or very challenging to retain foster families. In the three selected states, child welfare officials, 9 of 14 private providers, and foster parents in five of the eight foster parent groups noted that high caseloads contribute to a lack of support for foster families. Child welfare officials in one state said although their regulations stipulate a maximum caseload of 12 to 17, many caseworkers have caseloads that exceed those levels. In addition, a private provider in this state told us that child welfare agency caseworkers typically carry about 35 cases. Other private providers explained that the demands on child welfare caseworkers to meet basic paperwork and case planning requirements and conduct visits for a large caseload may prevent them from responding to requests or returning phone calls in a timely manner. Child welfare officials in two states, 11 private providers, and foster parents in three foster parent groups also explained that frequent caseworker turnover can affect the level of support foster families receive, particularly when new caseworkers are unfamiliar with a child’s history and needs. One foster parent told us that she had worked with eight different child welfare agency caseworkers in a 19-month period. Another foster parent said she maintains all of her foster children’s records, since in the past, documents have been lost in transfers between child welfare agency caseworkers. Child welfare officials in the three selected states acknowledged difficulties supporting foster families due to high caseloads or caseworker turnover. Officials in one state said they recently requested additional state funds to add 500 caseworker positions, and officials in another state said they have made efforts to revisit staffing levels following reductions during the economic recession in 2008. 
In addition, many private providers and foster parents we interviewed noted limitations with other supports for foster families. For example, 10 of 14 private providers and foster parents in three of the eight foster parent groups in the three states discussed their concerns about low payment rates for foster families, which some said may not adequately cover the costs of caring for a child. A 2012 study on payment rates for foster families found that basic payment rates (e.g., for traditional foster care services) in the majority of states fell below estimated costs of caring for a child, based on data from the U.S. Department of Agriculture. Five private providers and foster parents in five foster parent groups also discussed a lack of access to respite care services or a lack of “voice” for foster parents in contributing to decisions regarding children in their care. These private providers and foster parents said these circumstances can be frustrating and cause parents to leave the system.

Limited Access to Services for Children and Other Challenges

In response to our survey, 31 states reported that inadequate access to services, such as child care and transportation, made it moderately or very challenging to retain foster families. In the three selected states, child welfare officials, 9 of 14 private providers, and foster parents in six of eight foster parent groups discussed similar difficulties. For example, they discussed difficulties accessing child care services, which some said are particularly needed because of the increasing number of opioid-affected infants coming into care. Some officials, private providers, and foster parents said their state may offer child care subsidies, but waitlists can be long, and foster families may have difficulties finding an approved child care center, particularly for children who need a higher level of care.
Further, child welfare officials, private providers, and foster parents discussed challenges accessing transportation services. For example, child welfare officials said children are sometimes moved to homes outside their original community due to a lack of available homes, which places a burden on foster families to transport children to physical and mental health appointments, regular visits with their biological families, and school. A private provider we interviewed said many parents who provide transportation to these various appointments also must go through a burdensome process to claim mileage reimbursement from the child welfare agency, so many parents do not submit a claim. In addition, child welfare officials, private providers, and foster parents discussed challenges accessing mental health services. For example, one private provider said they have been unable to find a qualified mental health provider who accepts Medicaid to deliver needed services to an autistic child. Further, child welfare officials we interviewed in one county discussed difficulties connecting children with therapists who have an understanding of childhood trauma. In addition to these challenges, child welfare officials and private providers we interviewed said many foster families leave the foster care system due to family or life changes, including adoptions of children in their care, retirements, health issues, and relocation to a different state.

HHS Supports States’ Recruitment and Retention Efforts with Technical Assistance, Guidance, and Funding, though Private Providers Were Unaware of Some Supports

HHS’s Administration for Children and Families (ACF) provides a number of supports to help state child welfare agencies in their efforts to recruit and retain foster families, according to ACF officials we interviewed and agency documents we reviewed. These supports include technical assistance, guidance and information, and funding.

Technical assistance.
ACF provided technical assistance through its National Resource Center for Diligent Recruitment (the Center), and subsequently, the Child Welfare Capacity Building Collaborative. The Center provided several types of technical assistance to achieve its aim of helping states develop and implement diligent recruitment programs to achieve outcomes such as improving permanency and placement stability for children in foster care. The Center provided on- and off-site coaching to states in a number of areas, such as developing a mix of general and targeted recruitment strategies, using existing data to target recruitment efforts, and developing a recruitment plan. Staff who worked at the Center reported providing direct technical assistance and training to 30 states. The Center also provided toolkits that guide states through the process of developing a comprehensive diligent recruitment plan to meet federal requirements. For example, the toolkits include discussion questions about the goals states have for their plans, suggestions on which stakeholders to include, and worksheets to help states analyze existing data. ACF officials told us that they also review states’ diligent recruitment plans and may provide feedback to states. In addition, ACF provides technical assistance to states through its Child and Family Services Reviews. These reviews are generally conducted every 5 years and examine a number of factors in states’ foster care programs to assess conformity with federal requirements, including factors related to recruiting and retaining foster families. In its reviews of 24 states in fiscal years 2015 and 2016, ACF reported deficiencies for 18. ACF officials said these deficiencies included a lack of adequate state recruitment plans and data used for recruitment efforts. In addition, they said they will be working with states to address identified deficiencies in subsequent program improvement plans, which are to be developed in consultation with ACF. 
Guidance and information.

ACF provides a wide range of guidance and information to states to support their recruitment and retention efforts. For example, the Center distributed free monthly electronic newsletters that provided information on new tools, resources, and webinars related to foster family recruitment and retention. The Center also developed or provided links to publications on topics such as using data to inform recruitment efforts, taking a customer service approach in working with current and prospective foster families, and lessons learned from related projects funded by ACF. The Center facilitated information sharing among states by holding webinars, such as one on the benefits of implementing a comprehensive diligent recruitment program, and peer-to-peer networking events on topics such as recruiting, developing, and supporting therapeutic foster care families. In addition, ACF’s Child Welfare Information Gateway is a website that provides access to a broad array of electronic publications, websites, databases, and online learning tools for improving child welfare practice. For example, its resources related to recruiting and retaining foster families include publications on strategies and tools, as well as examples from state and local child welfare agencies on promising practices.

Funding.

HHS administers a number of federal funding sources that states said they used for their foster family recruitment and retention efforts. For example, in our survey, states most often cited using child welfare funds under title IV-E and IV-B of the Social Security Act for these purposes in fiscal year 2016 (see fig. 3). ACF also provided a number of discretionary grants to support state efforts to recruit and retain foster families through the Adoption Opportunities program, which funds projects designed to eliminate barriers to adoption and help find permanent families for children, particularly older children, minority children, and those with special needs.
Specifically, ACF awarded cooperative agreements to 22 states, localities, and non-profit organizations in fiscal years 2008 through 2013 for 5-year projects that aim to enhance recruitment efforts and improve permanency outcomes for children, among other things. For example, ACF awarded a cooperative agreement in 2010 to the county child welfare agency in Los Angeles, California to launch a project that targeted recruitment efforts to prospective foster families in African American, Latino, LGBT, and deaf communities to increase permanency outcomes for their foster care population. In addition, it awarded a cooperative agreement in 2013 to Oregon’s state child welfare agency to implement a project that focused on developing customer service concepts in working with foster families, increasing community partnerships, and using data to inform recruitment efforts and outcome measures. ACF also awarded two cooperative agreements to Spaulding for Children to develop training for prospective and current foster and adoptive families. The first, awarded in fiscal year 2016, was for a 3-year project to develop a foster and adoptive parent training program to prepare families who can care for children who have high needs, such as children needing therapeutic foster care services. The second, awarded in fiscal year 2017, was for a 5-year project to develop a foster and adoptive parent training program for all individuals interested in becoming a foster family or adopting a child from foster care or internationally.

In response to our survey, many states reported that they found these federal supports helpful to their recruitment and retention efforts. For example, guidance and information, such as the electronic newsletters, publications, and webinars provided by the Center, were cited most often by states as being moderately or very helpful (31 states).
Over half the states reported that networking opportunities, such as peer-to-peer networking events facilitated by the Center, and technical assistance provided by the Center were moderately or very helpful to their efforts (28 and 27 states, respectively).

However, echoing the communication concerns raised by all 14 private providers in the three selected states, several private providers told us they have not received guidance or information from child welfare agencies about recruiting and retaining foster families, and most were unaware of some of the supports provided by ACF. Specifically, 11 of the 14 private providers said they were unaware of the National Resource Center for Diligent Recruitment, and 7 told us that the information offered by the Center would have been useful to their recruitment efforts had they known about it. For example, one private provider told us they have been trying to use data to more effectively recruit foster families, and the Center’s resources on recruitment strategies and tools would have been helpful in these efforts. Another private provider said each private provider in their area conducts recruitment activities based on its own ideas and experiences, and the Center’s resources would have been helpful in ensuring that they use the most effective strategies.

ACF officials said they encourage states to involve all relevant stakeholders in their efforts to recruit and retain foster families. They acknowledged that ACF has not provided specific guidance and information to states on working with private providers, but noted that some supports, such as online publications and webinars, are available to private providers working in the public sector. ACF officials explained that their efforts have focused on child welfare agencies because these are the entities that receive federal funds.
However, federal internal control standards state that agencies should communicate necessary information, both internally and externally, to achieve their objectives. The mission statement for ACF’s Children’s Bureau is to partner with federal, state, tribal, and local agencies to improve the overall health and well-being of the nation’s children and families. According to its website, the Children’s Bureau carries out a variety of projects to achieve its goals, such as providing guidance on federal law, policy, and program regulations; offering training and technical assistance to improve child welfare service delivery; and sharing research to help child welfare professionals improve their services. Given that almost all states use private providers to help them recruit foster families, and that private providers may be responsible for providing supports to help retain these families, it is important for HHS to determine whether additional information on working more effectively with private providers would be useful to states. This could help HHS better achieve its goals in supporting states’ efforts to recruit and retain foster families.

Conclusions

States face challenges recruiting and retaining foster families, and almost all states rely on private providers to help them meet the demand for appropriate foster families, particularly those who can provide therapeutic foster care services. However, private providers used by child welfare agencies in the three states where we conducted interviews raised concerns about the level of communication they have with these agencies. Such communication issues can affect the quality of services provided to support foster families, as well as the level of guidance and information private providers receive from child welfare agencies.
Although HHS has provided various supports that states have found useful in their efforts to recruit and retain foster families, many of the private providers we spoke with were unaware of some supports that they said could have helped them. Given the important role private providers play in recruiting and retaining foster families, state feedback to HHS on whether child welfare agencies could benefit from information on how to work more effectively with private providers could help HHS determine whether it needs to take action to better support states’ use of private providers.

Recommendation for Executive Action

GAO recommends that the Secretary of Health and Human Services seek feedback from states on whether information on effective ways to work with private providers to recruit and retain foster families would be useful and, if so, provide such information. For example, HHS can seek feedback from states through technical assistance and peer-to-peer networking activities. If states determine that information would be useful, examples of HHS actions could include facilitating information sharing among states on successful partnerships between states and private providers and encouraging states to share existing federal guidance and information. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to the Secretary of HHS for review and comment. HHS agreed with our recommendation and said it will explore with states whether additional materials specific to private providers would be useful. While HHS noted that it has no authority over private providers, it provided examples of ways the agency has supported states’ efforts to recruit and retain foster families and encouraged them to involve private providers in these efforts. We believe that seeking feedback from states on whether they would like information on effective ways to work with private providers would be a useful first step.
With that information, HHS could then determine if additional supports are needed to help states meet the demand for appropriate foster families. A letter conveying HHS’s formal comments is reproduced in appendix II.

We are sending copies to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, and other interested parties. The report will also be available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Overview

This report examines (1) how state child welfare agencies recruit foster families, including those who provide therapeutic foster care services, (2) challenges, if any, to recruiting and retaining families, and (3) the extent to which the U.S. Department of Health and Human Services (HHS) provides support to child welfare agencies in their efforts to recruit and retain foster families. To address our objectives, we administered a web-based survey of state child welfare agencies in the 50 states and the District of Columbia to obtain national information. To obtain more in-depth information, we interviewed child welfare officials, private providers, and foster parents in three selected states (California, Georgia, and Indiana). To obtain perspectives on providing therapeutic foster care services specifically, we conducted three discussion groups with private providers and foster parents at a national foster care conference.
To develop our methodologies, we conducted a literature search related to foster care recruitment and retention, including for therapeutic foster care services, and we interviewed experts with a range of related research, policy, and direct service experience. To examine how HHS supports child welfare agencies in their efforts to recruit and retain foster families, we interviewed officials from HHS’s Administration for Children and Families (ACF), Centers for Medicare & Medicaid Services, Office of the Assistant Secretary for Planning and Evaluation, and Substance Abuse and Mental Health Services Administration. We reviewed relevant documents obtained in these interviews and other information available on HHS’s website, such as from the National Resource Center for Diligent Recruitment and the Child Welfare Information Gateway. We focused on HHS efforts from fiscal years 2012 through 2016. We also reviewed relevant federal laws, regulations, and HHS policies, as well as federal internal control standards.

Survey of State Child Welfare Agencies

To obtain nationwide information on our objectives, we surveyed officials from state child welfare agencies in the 50 states and the District of Columbia. The survey was administered in September 2017, and we obtained a 100 percent response rate. The survey used a self-administered, web-based questionnaire, and state respondents received unique usernames and passwords. To develop the survey, we performed a number of steps to ensure the accuracy and completeness of the information collected, including an internal peer review by an independent GAO survey expert, a review by an external foster care expert, and pre-testing of the survey instrument. Pre-tests were conducted over the phone with child welfare officials in four states to check the clarity of the question and answer options, as well as the flow and layout of the survey.
The states that participated in pre-testing were selected based on recommendations from foster care experts and variation in child welfare administration systems (i.e., state- versus county-administered) and use of private providers. We revised the survey based on the reviews and pre-tests. The survey was designed to gather information from state child welfare agencies rather than county-level child welfare agencies or private providers. As such, we included questions in the survey to ensure that respondents were knowledgeable about foster family recruitment and retention efforts if the state child welfare agency was not directly involved. Our survey included a range of fixed-choice and open-ended questions related to recruiting and retaining foster families, including those who provide therapeutic foster care services. These questions were grouped into six subsections that covered (1) the states’ administrative structure for recruiting and retaining foster families, including the use of private providers; (2) information on states’ recruitment and retention plans and the usefulness of various strategies in recruiting and retaining foster families; (3) challenges states face in their efforts; (4) perspectives on various federal supports in this area and any additional supports needed; (5) data collected and used in recruitment and retention efforts; and (6) oversight of county child welfare agencies and private providers, if applicable. To obtain our 100 percent response rate, we made multiple follow-up contacts by email and phone in September 2017 with child welfare officials who had not yet completed the survey. While all surveyed officials affirmatively checked “completed” at the end of the web-based survey, not all state child welfare agencies responded to every question or the sub-parts of every question. We conducted additional follow-up with a small number of state child welfare agencies to verify key responses.
Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, unwanted variability can result from differences in how a particular question is interpreted, the sources of information available to respondents, or how data from respondents are processed and analyzed. We tried to minimize these factors through our reviews, pre-tests, and follow-up efforts. In addition, the web-based survey allowed state child welfare agencies to enter their responses directly into an electronic instrument, which created an automatic record for each state in a data file. By using the electronic instrument, we eliminated the errors associated with a manual data entry process. Lastly, data processing and programming for the analysis of survey results was independently verified to avoid any processing errors and to ensure the accuracy of this work.

Interviews in Selected States

To gather more in-depth information representing a variety of perspectives on our objectives, we interviewed officials from three state and three county child welfare agencies, representatives from 14 private foster care providers working with these agencies, and foster parents working with 8 of these private providers in the three selected states (California, Georgia, and Indiana). The states were selected based on factors such as recent changes in foster care and congregate care caseloads, opioid abuse rates estimated by HHS in June 2016, variation in child welfare administration systems (i.e., state- versus county-administered), and geographic location. Interviews were conducted during in-person site visits in California and Indiana and via phone in Georgia.
We used semi-structured interview protocols for child welfare agencies, private providers, and foster parents that included open-ended questions on the strategies and challenges in recruiting and retaining foster families and federal supports in this area, among other topics. We interviewed officials from state-level child welfare agencies in each of these states. In California, the only selected state with a county-administered child welfare system, we selected three counties— Los Angeles, Sacramento, and Sonoma—and conducted interviews with officials from the respective county-level child welfare agency. These counties were selected based on factors similar to those mentioned above as well as variation in population density (i.e., rural versus urban). In addition, we interviewed 14 private providers in the three selected states, including 3 private providers in California (1 in each county we visited), 4 in Georgia, and 7 in Indiana. Private providers were chosen for interviews from a list of all private providers working with state child welfare agencies to recruit foster families. This list was provided by child welfare officials from each selected state. We considered factors such as the number of foster families private providers worked with, their involvement in recruiting families who provide therapeutic foster care services, and geographic location. We interviewed foster parents working with 8 of the private providers mentioned above, including 2 groups of foster parents in California, 1 group in Georgia, and 5 groups in Indiana. Each of these groups included between one and three sets of foster parents (e.g., one foster parent or a couple). Due to the sensitivity of the topics discussed, we worked with private providers to identify foster parents who were able and willing to participate in interviews. 
We discussed several considerations for selecting foster parents, such as gathering parents with a range of experience providing foster care services to children in both traditional and therapeutic foster care settings. Because foster parents we interviewed self-selected to participate and were all working with private providers we interviewed, their views do not represent the views of all foster parents, such as those working directly with child welfare agencies. We also reviewed relevant documents that corroborated the information obtained in our interviews with child welfare agencies and private providers, such as recruitment plans, marketing materials, and child placement reports. Because we conducted interviews with a non-generalizable sample of child welfare officials, private providers, and foster parents, the information gathered in the three selected states is not generalizable. Although not generalizable, our selection methodologies provide illustrative examples to support our findings.

Discussion Groups

To obtain information specifically about efforts to recruit and retain families who provide therapeutic foster care services, we conducted three discussion groups at a conference hosted by the Family Focused Treatment Association, a non-profit organization that aims to develop, promote, and support therapeutic foster care services. The conference was held in July 2017 in Chicago, Illinois. We held two discussion groups with representatives from 17 private providers and one discussion group with eight sets of foster parents. To solicit participants, we used email to invite all individuals who registered for the conference to participate in our discussion groups. These emails explained our objectives and potential discussion topics related to recruiting and retaining therapeutic foster care families. Participants who volunteered were sorted into the three groups.
Discussion groups for private providers and foster parents were guided by a GAO moderator using semi-structured interview protocols. These protocols included open-ended questions that encouraged participants to share their thoughts and experiences on recruiting and retaining therapeutic foster care families, including strategies and challenges in these efforts, as well as differences in providing therapeutic versus traditional foster care services. Discussion groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for participants’ attitudes on specific topics and to offer insights into their concerns about and support for an issue. For these reasons, and because discussion group participants were self-selected volunteers, the results of our discussion groups are not generalizable.

We conducted this performance audit from January 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact name above, the following staff members made key contributions to this report: Elizabeth Morrison (Assistant Director); Nhi Nguyen (Analyst-in-Charge); Luqman Abdullah; Laura Gibbons; and Elizabeth Hartjes.
Also contributing to this report were Sarah Cornetto; Tiffany Johnson Lapuebla; Cheryl Jones; Kirsten Lauber; Serena Lo; Hannah Locke; Mimi Nguyen; Samuel Portnow; Ronni Schwartz; Almeta Spencer; and Kathleen van Gelder.
Why GAO Did This Study

Foster care caseloads have increased in recent years due, in part, to the national opioid epidemic. States have struggled to find foster families for children who can no longer live with their parents, including those who need TFC services. States may use private providers, such as non-profit and for-profit organizations, to help recruit and retain foster families. States may also use federal funds provided by HHS for these efforts.

GAO was asked to review states' efforts to recruit and retain foster families. This report examines: (1) how state child welfare agencies recruit foster families, including those who provide TFC services, (2) any challenges in recruiting and retaining foster families, and (3) the extent to which HHS provides support to child welfare agencies in these efforts. GAO reviewed relevant federal laws, regulations, and guidance; interviewed HHS officials; surveyed child welfare agencies in all states and the District of Columbia; held discussion groups with private providers and foster parents who provide TFC services; and conducted interviews with officials in California, Georgia, and Indiana, which were selected for factors such as changes in foster care caseloads, opioid abuse rates, and geographic location.

What GAO Found

States employ a range of strategies to recruit foster families, and nearly all use private providers to recruit, particularly for therapeutic foster care (TFC) services, in which parents receive training and support to care for children who need a higher level of care. Recruitment strategies include searching for relatives, conducting outreach to the community, targeting certain populations, and obtaining referrals from current foster families. In response to GAO's national survey, 49 states reported using private providers to recruit foster families.
In the three selected states where GAO conducted interviews, private providers were responsible for both recruiting and retaining foster families, such as helping families become licensed and providing them with support (see fig.). States reported various challenges with recruiting and retaining foster families in response to GAO's survey. In recruiting families, over two-thirds of states reported challenges such as limited funding and staff, which can make prioritizing recruitment efforts difficult; extensive licensing processes; and difficulties finding families willing to care for certain children, such as those with high needs. In retaining families, 29 states reported concerns about inadequate support for foster families, which can include difficulties contacting child welfare agency caseworkers. In addition, 31 states reported limited access to services needed to care for children, such as child care. The U.S. Department of Health and Human Services (HHS) provides a number of supports to help states recruit and retain foster families, including technical assistance with their recruitment programs, guidance and information, and funding. Most states GAO surveyed found HHS's supports moderately or very helpful. However, several private providers GAO interviewed in three selected states said they have not received guidance or information from child welfare agencies about recruiting and retaining foster families. In addition, 11 of the 14 providers said they were unaware of related HHS supports and all of them described concerns about communication with child welfare agencies. HHS officials said they encourage states to involve all relevant stakeholders in their efforts, though HHS has focused on supporting child welfare agencies. 
Consistent with internal control standards on communication, determining whether information on working with private providers would be useful to states could help HHS better support states' use of private providers in efforts to recruit and retain foster families. What GAO Recommends GAO recommends HHS seek feedback from states on whether information on effective ways to work with private providers to recruit and retain foster families would be useful and if so, provide such information. HHS agreed with GAO's recommendation.
gao_GAO-19-102
Background Composition of the MHS Total Workforce The MHS has a dual mission of maintaining the skills of the medical force and providing beneficiary medical care in the United States and overseas. It accomplishes this in part by providing (1) operational medical care via deployable health care platforms in an operational environment, such as forward surgical teams and combat support hospitals, and (2) beneficiary medical care in its MTFs in the United States and around the world. DOD’s total workforce supporting this dual mission comprises three main components: military personnel (including active and reserve personnel), federal civilian personnel, and private sector contractor personnel. Active duty medical personnel simultaneously support operational medical care and the delivery of beneficiary health care to patients across the globe. Reserve component medical personnel generally provide health care to deployed military personnel, but may also support MTFs when active duty personnel are deployed or otherwise unavailable. Federal civilians and contractors generally provide beneficiary care within MTFs. Figure 1 shows the number of active and reserve component military personnel, federal civilians, and estimated contractor full-time equivalents (FTEs) that comprised DOD’s total medical workforce in fiscal year 2017. DOD’s Total Workforce Provides Operational Medical Care at Four Levels DOD has established four levels of operational medical care provided to servicemembers and other eligible persons. The levels of care extend from the forward edge of the battle area to the United States, with each level providing progressively more intensive treatment. Level 4 care facilities are MTFs that also provide beneficiary medical care.
In addition to the four levels of medical care, en-route care to transport patients is also provided via casualty evacuation, medical evacuation, and/or aeromedical evacuation from the point of patient injury, illness, or wounding. Figure 2 illustrates the different levels of care. The four levels of care are:

Level 1—First responder care. This level provides immediate medical care and stabilization in preparation for evacuation to the next level, and treatment of common acute minor illnesses. Care can be provided by the wounded soldiers, medics or corpsmen, or battalion aid stations.

Level 2—Forward resuscitative care. This level provides advanced emergency medical treatment as close to the point of injury as possible to attain stabilization of the patient. In addition, it can provide postsurgical inpatient services, such as critical care nursing and temporary holding. Examples of level 2 units include forward surgical teams, shock trauma platoons, area support medical companies, and combat stress control units.

Level 3—Theater hospital care. This level provides the most advanced medical care available in Iraq and Afghanistan. Level 3 facilities provide significant preventative and curative health care. Examples include Army combat support hospitals, Air Force theater hospitals, and Navy expeditionary medical facilities.

Level 4—U.S. and overseas definitive care. This level provides the full range of preventative, curative, acute, convalescent, restorative and rehabilitative care. Examples of level 4 facilities include MTFs such as Brooke Army Medical Center at Joint Base San Antonio, Texas and Naval Medical Center Portsmouth at Portsmouth, Virginia.
DOD Provides Beneficiary Medical Care in the United States and around the World DOD’s MHS workforce provides beneficiary medical care to 9.4 million eligible individuals, including active duty personnel and their dependents (i.e., spouse, children), medically eligible Reserve and National Guard personnel and their dependents, and retirees and their dependents and survivors. Located in the United States and around the world and ranging from small clinics to major hospitals, DOD’s MTFs serve as training platforms for active duty medical personnel to maintain their skills and play a key role in the military departments’ Graduate Medical Education programs for training medical professionals. In addition to the direct provision of health care in its own hospitals and clinics, DOD maintains its TRICARE purchased care system that is used to augment the direct care system when needed. Through regional contracts, TRICARE administers the purchased care system, which comprises a civilian network of hospitals and providers. Retirees who qualify for care under Department of Veterans Affairs’ rules may also be eligible to receive health care within the Veterans Health Administration system of hospitals and clinics. Legislation, Policies, and Processes Governing the MHS Workforce Mix DOD’s management of its workforce is governed by several workforce management statutes of title 10 of the United States Code, including:

Section 129a directs the Secretary of Defense to establish policies and procedures for determining the most appropriate and cost-efficient mix of military, civilian, and contracted services to perform the mission of the department.
Section 2463 directs the Under Secretary of Defense for Personnel and Readiness to devise and implement guidelines and procedures to ensure that consideration is given to using, on a regular basis, DOD civilian employees to perform new functions and functions performed by contractors that could be performed by DOD civilian employees.

Section 2461 directs that no DOD function performed by civilian employees may be converted, in whole or in part, to performance by a contractor unless the conversion is based on the results of a public-private competition that formally compares the cost of performance by civilian employees with the cost of contractors, among other considerations. There is currently a government-wide moratorium on performing such public-private competitions.

DOD’s total workforce management policy generally emphasizes the need for agencies to utilize the least costly mix of personnel while ensuring the workforce is sufficiently sized and comprised of the appropriate mix of personnel to carry out the mission of DOD. The departments use DOD guidance to assess the use of military, federal civilian, and contractor personnel, which includes the consideration of two key factors: (1) the risk to the military mission, and (2) the cost of the workforce. To help assess risk, the departments determine what work should be performed by military, federal civilian, or contractor personnel. For example, work that is inherently governmental must be performed only by military or civilian personnel, while work that is commercial in nature could be performed by any personnel type. To make this determination, DOD Instruction 1100.22 directs components to: use the manpower mix criteria outlined in the instruction to identify inherently governmental and commercial activities; and review the annual inventory of commercial and inherently governmental activities.
In addition, DOD and the departments have established policies and procedures to assess the costs and benefits of different workforce mix options. DOD Instruction 1100.22 directs components to conduct a cost comparison of personnel when considering outsourcing new requirements that are not required to be performed by government personnel, or when considering in-sourcing functions that are currently performed by private sector contractors. Roles and Responsibilities for Managing the MHS Workforce Several officials have responsibility for governing DOD’s management of its total workforce, including:

The Under Secretary of Defense for Personnel and Readiness (USD(P&R)). This official has overall responsibility for issuing guidance on total workforce management to be used by the DOD components, providing guidance on manpower levels of the components, and developing manpower mix criteria and other information to be used by the components to determine their workforce mix.

The Under Secretary of Defense (Comptroller). This official is responsible for ensuring that the budget for DOD is consistent with the total workforce management policies and procedures.

The Secretaries of the military departments and heads of the defense agencies. These officials have overall responsibility for the requirements determination, planning, programming, and budgeting execution for total workforce management policies and procedures, as well as having numerous responsibilities related to total workforce management as detailed in DOD guidance.

The Assistant Secretary of Defense for Health Affairs (ASD(HA)). This official serves as the principal advisor for all DOD health-related policies, programs, and activities. The ASD(HA) has the authority to: develop policies, conduct analyses, provide advice, and make recommendations to the USD(P&R), the Secretary of Defense, and others; issue guidance; and provide oversight to the DOD Components on matters pertaining to the MHS.
Further, the ASD(HA) prepares and submits a DOD unified medical program budget, which includes, among other things, the defense health program budget to provide resources for the DOD MHS.

The Director of the Defense Health Agency (DHA). This official, among other things, manages the execution of policies issued by the ASD(HA) and manages and executes the Defense Health Program appropriation, which partially funds the MHS.

Recent MHS Personnel Reform Efforts The National Defense Authorization Act for Fiscal Year 2017 directed the transfer of administrative responsibility for MTFs from the military departments to the DHA. Specifically, the Director of the DHA shall be responsible for the administration of each MTF, including budgetary matters, information technology, health care administration and management, administrative policy and procedure, military medical construction, and any other matters the Secretary of Defense determines appropriate. Since 2016, DHA’s responsibilities in the administration of MTFs have been further articulated in DOD memoranda and in statute. In 2018, DOD directed that the DHA shall be responsible for: (1) the planning, programming, budgeting, and execution processes for the MTFs; (2) clinical and health delivery services in each MTF; and (3) for these services, the hiring and management of federal civilians and contract staffing. Further, in 2018, Congress amended the law to specify that at each MTF, the Director of the DHA has the authority to determine total workforce requirements, direct joint manning, and address personnel staffing shortages, among other things. Also in December 2016, Congress enacted legislation allowing the lifting of the prohibition, in place since 2008, on converting military medical and dental positions to federal civilian positions. This change is contingent upon DOD satisfying a reporting requirement on the size and composition of its operational medical force.
Specifically, Congress directed DOD to report on the process established to define the military medical and dental requirements necessary to meet operational medical force readiness requirements, and provide a list of those military medical and dental requirements. Department Planning Processes for Operational Medical Personnel Requirements Do Not Include an Assessment of All Medical Personnel or the Full Cost of Military Personnel The military departments each have their own process to determine their operational medical personnel requirements. After determining the number of medical personnel needed to support operational needs, the military departments generally consider only military personnel when conducting their planning processes to meet these requirements, and have not formally assessed the extent to which federal civilians and contractor personnel could be utilized. Further, the departments do not generally consider the full cost of active and reserve component medical personnel when determining their balance of active and reserve component medical personnel, and they have not developed such information to use in their assessment of active and reserve balance. Each Military Department Has Its Own Process to Plan for Operational Medical Personnel Requirements, Including the Balance of Active and Reserve Component Personnel Each military department has its own process to plan for operational medical personnel requirements. The departments’ operational medical personnel requirements are based on their analysis of DOD’s Defense Planning Guidance and Defense Planning Scenarios. Specifically, possible casualty streams are estimated based on the scenarios, and the required medical support is determined in conjunction with department-specific medical planning factors, such as rotation policy, the population at risk, and evacuation policy, among others. Each military department incorporates these factors to estimate the number of medical personnel needed.
The Army integrates medical planning into its general process for estimating all operational requirements, whereas the Navy and Air Force have separate, medical-specific processes. The following represents an overview of each military department’s approach:

Army. The Army uses its Total Army Analysis model to determine the number and type of support units across the Army, including medical forces, which will be needed to support the Army’s combat forces in operational settings.

Navy. The Navy uses a medical-specific model, called the Medical Manpower All Corps Requirements Estimator, to estimate its total medical personnel readiness requirements. The Navy readiness mission is to support all Navy and Marine Corps operational missions, including contingency operations (such as hospital ships and expeditionary medical facilities) and day-to-day operations (such as ships, submarines, and Special Forces).

Air Force. The Air Force uses a medical-specific sizing model named the Critical Operational Readiness Requirements tool to project its minimum military personnel requirements. This tool identifies the number of military medical personnel needed to meet requirements, including requirements for en-route casualty support, theater hospitals, and critical care air transport teams.

According to military department officials, the decision to apportion medical personnel requirements among the active and reserve components is based on an assessment of risk across a range of factors. In a 2013 DOD report issued in response to section 1080A of the National Defense Authorization Act for Fiscal Year 2012, DOD noted that there are several important factors in active component and reserve component mix decisions, including, among others, the timing, duration, and skills required for anticipated missions.
Moreover, the report notes that active components are best suited for unpredictable and frequent deployments, dealing with complex operational environments, and unexpected contingencies, while the reserve components are best suited for predictable and infrequent deployments. As noted in the report, active component personnel typically mobilize and deploy to theater the fastest. The sum of these considerations results in a different mix of active and reserve component medical personnel within each military department. Specifically, reserve personnel (as a percentage of the total workforce) varied by military department in fiscal year 2017, with reservists representing 41 percent of medical personnel of the Army, 17 percent of the Navy, and 34 percent of the Air Force, as shown in figure 3. DOD Has Not Assessed Using Federal Civilians or Contractors to Meet Operational Medical Personnel Requirements The military departments have not assessed the extent to which federal civilians and contractor personnel can be used to meet identified operational medical personnel requirements. Specifically, after the military departments have determined their operational medical personnel requirements, they generally have designated all such positions as “military-essential” (i.e., the activity must be performed by a military servicemember) and have not formally assessed the extent to which civilians or contractors could be utilized to fill these positions, according to officials. Army, Navy, and Air Force officials stated that they have historically relied on active and reserve component military personnel when planning for operational medical requirements, with a few exceptions. For example, according to Navy officials, the few federal civilians that are planned to fill operational medical requirements are technical representatives who do not travel on ships for extended periods of time.
In interviews, military department officials cited key reasons for not incorporating federal civilians and contractors into their planning for operational medical care. Specifically, officials said they did not believe that federal civilians or contractors were viable workforce alternatives to military servicemembers for operational medical care roles and functions due to the unique nature of such assignments (e.g., providing medical care in a deployed setting). Moreover, officials noted that federal civilians and contractors supporting operational medical requirements are generally considered to be a temporary solution. Officials also expressed concern regarding their military department’s ability to identify and recruit federal civilians and contractors for such positions. Officials stated that while there is currently no guidance outlining the potential role of federal civilians and contractors providing medical care in operational settings, DOD workforce mix guidance includes a provision that highlights the military-essential nature of medical personnel embedded in non-medical units engaged in hostile action. However, this instruction does not otherwise address the role of federal civilians and contractors in providing medical care, including whether they can serve in medical-specific operational platforms, such as combat support hospitals providing level 3 care. To ensure that its federal civilian employees will deploy to combat zones and perform operational roles such as critical combat support functions in theater, DOD established the emergency-essential civilian program in 1985. Under this program, DOD designates as “emergency-essential” those federal civilian employees whose positions are required to ensure the success of combat operations or the availability of combat-essential systems. DOD’s emergency-essential workforce is now governed under the Expeditionary Civilian Workforce program.
DOD can deploy emergency-essential federal civilian employees either on a voluntary or involuntary basis to accomplish the DOD mission. In certain DOD functional communities, federal civilians and contractors play a critical role in combat support. For example, as we previously reported, DOD relies on the federal civilian personnel it deploys to support a range of essential missions, including logistics support and maintenance, intelligence collection, criminal investigations, and weapons system acquisition and maintenance. Further, as we have previously reported, DOD has long used contractors to provide supplies and services to deployed forces. Since the early 1990s, much of this support has come from logistics support contracts—contracts that are awarded prior to the beginning of contingencies and are available to support the troops as needed. Although federal civilians and contractors are generally not part of the military departments’ planning processes and, according to officials, there is no guidance dedicated to delineating their role in providing care in deployed operational settings, these personnel have deployed within the past 5 years. Based on our analysis of DOD federal civilian deployment data—for fiscal years 2013 through 2017—about 120 DOD federal civilians, including nurses, physicians, and technicians, were deployed to provide medical services. U.S. Central Command officials stated that they have used federal civilians minimally, and U.S. Africa Command officials stated they have not used federal civilians. In addition, based on our analysis of DOD contractor deployment data for deployments from fiscal years 2013 through 2017, there were more than 1,900 deployed contractors providing medical services. U.S. Central Command officials told us that they have not used contractors to provide care to military personnel.
Officials noted that the deployed contractors were not contracted by DOD for purposes of providing medical care and instead provided medical care to other contractors as they were part of a larger contract for other services, such as security services or logistics support. U.S. Africa Command officials told us that they have used contractors to provide medical care to support casualty evacuation and personnel recovery requirements, which include providing medical care to military personnel and other eligible persons. Officials with the Joint Staff Surgeon’s Office and the Surgeon’s offices at U.S. Central Command and U.S. Africa Command agreed with the possibility of using federal civilians and contractors for certain operational medical personnel requirements. Specifically, officials stated that federal civilians and contractors likely represent an acceptable workforce alternative if they are medically ready to deploy and appropriately trained for the unique environment at a fixed facility in theater, such as a level 3 fixed expeditionary medical facility or theater hospital. While agreeing that the use of federal civilians and contractors for certain operational medical personnel requirements may be acceptable, officials also expressed concerns with this approach. A senior official with the U.S. Central Command Surgeon’s office noted concerns regarding the pre-deployment training provided to contractors. Specifically, the official stressed the importance of such training to operating effectively in the unique operational environment of a deployed medical team and that such training is only required to be completed by military personnel and DOD expeditionary civilians. U.S. Africa Command officials expressed concerns regarding challenges in obtaining clinical privileging rights (i.e., the right for a physician to perform specific health care services) for contractors supporting small teams in an operational setting.
Further, OASD(HA) officials noted that a key factor in determining if federal civilians or contractors should be used to provide operational medical care is whether or not using those workforces would achieve any cost savings. Moreover, officials with the Defense Civilian Personnel Advisory Service noted that they have had limited success with using DOD’s Expeditionary Civilian Workforce program for the provision of medical administrative support and medical advising functions. A senior official from the U.S. Central Command Surgeon’s office noted this was due to relatively few qualified federal civilians within the program with medical skills. Defense Civilian Personnel Advisory Service officials noted that the fiscal year 2019 force pool that defines the number and types of federal civilian requirements needed for the program included 7 medical-related positions and none of these were for medical care; 1 was administrative and 6 were medical advisors. Defense Civilian Personnel Advisory Service officials stated that the DHA has a responsibility to build 1 or 2 of the medical advisor positions in the force pool into its planning as a continuing requirement, and noted that DHA has made some recent progress with 1 medical advisor scheduled to deploy in fiscal year 2019. While there may be challenges with utilizing federal civilian personnel to fulfill operational medical requirements, DOD also faces challenges with regard to military personnel. In 2018, we reported that DOD has experienced gaps between its military physician authorizations (i.e., funded positions) and end strengths (i.e., number of physicians), and that it did not have targeted and coordinated strategies to address key physician shortages. DOD has issued several documents to guide total workforce and personnel planning. DOD Directive 1100.4 states that authorities should consider all available sources when determining workforce mix, including federal civilians and contractors.
Moreover, DOD’s 2017 Workforce Rationalization Plan recognizes DOD’s federal civilians as an essential enabler of its mission capabilities and operational readiness and notes that there are numerous opportunities for the military departments, combatant commands, and others to make well-reasoned adjustments to workforce mix. Further, DOD’s National Defense Business Operations Plan for Fiscal Years 2018 to 2022 states that workforce rationalization strategies include, among other things, reassessing military manpower allocations for military essentiality, determining whether workload requires deployments and whether traditional military performance is necessary, and identifying functions and positions that are commercial in nature that may be appropriately or efficiently delivered via private sector support. Federal civilians and contractors are not incorporated into the military departments’ planning to meet operational medical requirements because DOD has not performed an assessment of the suitability of federal civilian or contractor personnel to provide operational medical care. Such an assessment could assist in developing policy for use by medical planners in determining when, where, and how federal civilians or contractors may serve in operational roles. For example, an assessment may include what level(s) of care would be appropriate for federal civilians and contractors to support, if any, and factors to take into consideration in making such decisions, such as exposure to danger and cost. By conducting such an assessment and incorporating the results into relevant policies, DOD can have greater certainty that it is planning for the most appropriate and cost-effective mix of personnel to meet the mission, and, depending on the outcome of the assessment, more options to meet its operational medical personnel requirements.
The Military Departments Do Not Consider the Full Cost of Active and Reserve Component Medical Personnel When Planning for Operational Requirements and Do Not Have Full Cost Information The military departments’ planning to meet DOD’s operational personnel requirements generally does not consider the full cost of active and reserve component personnel when determining the balance of active and reserve component medical forces. Officials from Army and Navy medical headquarters stated that cost generally does not inform their decisions about the balance of active and reserve personnel. Army officials noted they consider the cost of a unit when making tradeoffs within the reserve component; however, cost was not cited by Army officials as a factor when determining between the active and reserve components. Navy officials noted that while the Navy uses certain cost information when preparing the President’s budget submission, cost is not explicitly considered when determining the balance of the active and reserve components. The Air Force is the only military department that has performed an assessment of the cost effectiveness of using active or reserve component medical personnel, although the assessment had some limitations and did not affect the Air Force’s active and reserve component mix decisions. Army, Navy, and Air Force officials cited other key factors which they consider in determining the balance of active and reserve component personnel, such as the availability of forces to deploy quickly, length of time needed in theater, capability needed, and frequency of deployments. Moreover, the military departments have not developed full cost information of medical personnel to use in their assessment of active and reserve balance. Army and Navy officials stated that they do not maintain full cost information on their active component and reserve component medical personnel.
The Navy provided programming costs for the reserve component, but these rates were averages across the reserve component and not specific to medical personnel. The Air Force’s 2016 High Velocity Analysis attempted to assess the cost of active and reserve medical personnel and identify potential efficiencies within its medical workforce. However, this study was limited because it did not include the full cost of active and reserve component medical personnel. Specifically, the Air Force analysis considered only compensation and did not consider other benefits, such as medical education costs, and used average pay for officers and enlisted personnel regardless of the specialty or skill level. However, the full costs for certain medical personnel, such as officers, are generally higher than average military pay, as they are eligible for a significant number of special pays and benefits, such as graduate medical education and training. In fiscal year 2017, DOD obligated $788 million for special pays for active duty medical personnel, representing approximately 24 percent of the $3.3 billion obligated for all special pays across DOD, and $707 million for medical education. While the Air Force had full cost data for active component personnel, according to officials, they did not include it in their analysis because they did not have comparable cost data for the reserve component. Reserve medical personnel, when not mobilized, receive a fraction of what active duty personnel receive, and typically do not incur significant education and training costs as reserve medical personnel generally are recruited as fully trained medical professionals. We have previously reported that when the reserve forces can successfully meet deployment and operational requirements, individual reserve-component units are generally less costly than similar active-component units. However, the full cost of medical personnel can vary based on a number of factors.
Specifically, more than one reserve-component unit may be needed to achieve the same output as a single active-component unit. For example, the Army has a policy that states reserve-component physicians, dentists, and nurse anesthetists shall not be deployed for longer than 90 days. Thus, the Army would need to deploy four different reserve component physicians for successive 90-day rotations to fill a single 1-year active component requirement. Therefore, in some cases, using reserve units to achieve the same operational capacity over time may be more costly than using active units. However, the lack of full cost information on active and reserve component medical personnel is a barrier to an analytically based determination of the balance between active and reserve component medical personnel. In 2013, we reported limitations with the DOD-wide software tool developed by Cost Assessment and Program Evaluation—the Full Cost of Manpower—which, among other things, is used to identify the full cost of active duty military personnel. Specifically, we reported that this tool has limitations in determining costs for certain cost elements. For example, instead of determining training cost by specialty, it estimates such costs by dividing total funding for such cost estimates by the number of military personnel. We recommended, among other things, that DOD, in order to improve its estimates and comparisons of the full cost of its workforces, develop guidance for cost elements that users have identified as challenging to calculate, such as general and administrative, overhead, advertising and recruiting, and training. DOD partially concurred with this recommendation but has not implemented it. We continue to believe that developing such cost information is needed, especially for the medical community since training and education costs can be higher than in other communities.
Moreover, in that report we also found that DOD did not include Reserve and National Guard personnel in its methodology for estimating and comparing the full cost to the taxpayer of work performed. We recommended that DOD, among other things, develop business rules for estimating the full cost of National Guard and Reserve personnel. DOD partially concurred with this recommendation but has not implemented it, noting that a cost estimating function for reserve component personnel would be more complex than for active component personnel and DOD federal civilians. While we agree that developing cost estimates for the reserve component could be more complex, we continue to believe it is advisable for DOD to implement our recommendation. In a 2013 report, DOD identified the cost of unit manning, training, and equipping as one of five factors that play a key role in decisions concerning the mix of active and reserve component forces. According to the report, cost is often outweighed by other factors when making active component and reserve component mix decisions, but should always be considered in those decisions. Further, DOD policy states that workforce decisions must be made with an awareness of the full costs of personnel to DOD and more broadly to the federal government, and highlights that the full cost of active duty personnel extends beyond cash compensation and also includes other costs, such as education and training. The military departments do not assess the full cost of personnel when determining the balance of active and reserve component medical forces because there is no DOD requirement to do so. Although DOD guidance states that cost is one of several factors that should be considered in active and reserve component balance decisions, the military departments have not conducted assessments of the full cost of active and reserve component personnel to inform their decision making.
Further, DOD and the military departments are unable to conduct any such assessments because they have not developed full cost information for active and reserve component medical personnel. Without developing full cost information for active and reserve component medical personnel and using that information in determinations regarding the correct balance of such personnel, decision makers will not have complete information to make cost-effective choices about the balance of active and reserve component medical personnel.

The Military Departments Have Established a Process to Assess the Appropriate Workforce Mix for Beneficiary Care within MTFs, but Face Challenges in Executing Their Plans

The military departments have taken actions, such as establishing policies and procedures, to aid the execution of the appropriate workforce mix for providing beneficiary health care within MTFs. However, the military departments face challenges in executing their plans in several areas, including lengthy hiring and contracting processes and uncompetitive salaries and compensation. Further, the transfer of administrative responsibility for MTFs from the military departments to the DHA may present challenges to the management of military medical personnel.

The Military Departments Have Established Policies and Procedures to Evaluate the Risks, Costs, and Benefits of the Use of Personnel within Their Military Treatment Facilities

The military departments manage the workforce within their MTFs by using various policies and procedures to determine their workforce needs and help assess the risks, costs, and benefits of using military, federal civilian, and contractor personnel to carry out their missions. Currently, each military department is responsible for determining its MTF personnel requirements: that is, the number of personnel needed to operate its MTFs based on predicted demand for health care from its military and beneficiary populations.
To determine MTF personnel requirements, the military departments use their respective suites of manpower models or standards based on a number of factors, including historical medical workload information and the size of the population eligible for care. According to Army and Navy medical command officials, the Army and Navy suites of models respectively include at least 36 and 46 medical specialties, and generally express historical medical workload in relative value units, a metric of the professional time, skill, training, and intensity required to provide a given clinical service. In contrast, according to Air Force medical agency officials, the Air Force suite of standards includes 11 medical specialties and expresses workload in patient encounters. According to military department officials, when considering how to meet their MTF personnel requirements given available resources, the number of military personnel is fixed and must be preserved because the operational medical personnel requirements support the readiness mission. The military departments therefore prioritize the distribution of military personnel across MTFs, and then consider how to fill the remaining authorizations with federal civilian personnel or by contracting for medical services as appropriate. To make these decisions, the military departments use DOD workforce guidance, which requires a balance of risk and cost, but states that risk mitigation shall take precedence over cost-related concerns when necessary. DOD total workforce policies and procedures are outlined in: (1) DOD Directive 1100.4, which establishes guidance for total workforce management; and (2) DOD Instruction 1100.22, which outlines policies and procedures for determining the appropriate mix of personnel. In 2018, we reported that a DOD study found that the cost of federal civilian and contractor full-time equivalents varied by organization, location, and function being performed.
According to Army, Navy, and Air Force officials, any changes to funded positions are made through formal processes and require an evaluation of the cost of the personnel options and the approval of the military departments’ respective medical commands or agencies. The military departments’ collective decisions determine their workforce mix. Figure 4 shows the number and percentage of each personnel type that provided or supported care in DOD-owned and operated MTFs for fiscal year 2017, in the United States and overseas.

Military Departments Face Challenges in Executing Workforce Mixes at Military Treatment Facilities, and DHA Does Not Plan to Develop a Strategy to Address These Challenges

The military departments face challenges to implementing their workforce mix of military, federal civilian, and contractor personnel. Our review, including interviews with military department officials responsible for medical personnel management and with the senior leadership of six MTFs, highlighted, as discussed below, the following distinct challenges: (1) the length of the federal civilian hiring and contracting processes, (2) uncompetitive federal civilian salaries and contractor compensation, and (3) FTE targets and hiring freezes.

Length of Federal Civilian Hiring and Contracting Process

Federal civilian hiring process. Senior officials at each of the six MTFs we spoke with stated that the federal civilian hiring process, including its length and restrictions imposed by statute or policy, impedes their ability to hire desirable federal civilian candidates. Officials primarily attributed delays to the extended time for human resources offices to post a position and to process and refer applicants for interviews. For federal civilian personnel in DOD medical locations in fiscal year 2018, DOD officials reported average hiring times of 121 days for the Army, 157 days for the Navy, and 134 days for the Air Force.
Legal restrictions can also extend the hiring process and hinder hiring desirable federal civilian candidates. For example, senior officials at five of six MTFs cited a statute requiring a 180-day waiting period before retired military personnel can be hired as DOD federal civilians, and noted that valuable candidates with military-specific subject matter expertise will instead seek employment in the private sector. Senior officials from one Air Force MTF stated they successfully submitted waivers to bypass the 180-day waiting period, but senior officials from one Army and one Navy MTF stated that the waiver process often takes as long as the waiting period itself. Senior officials from each of the six MTFs stated that hiring authorities, such as direct or expedited hiring authority, can help address challenges, but officials at four of six MTFs also expressed concerns about the adequacy of such flexibilities. Direct-hire authority allows agencies to fill occupations that have a severe candidate shortage or a critical hiring need, and is meant to expedite hiring. DOD designated a number of health care occupations as shortage category positions or critical need occupations in accordance with this expedited hiring authority. In 2017, DOD reported that it used expedited hiring authority in approximately 30 percent of hiring actions for its medical employees. Officials from one Navy MTF stated that they have direct hiring authority, but their human resources office extends the process by requiring that a position have been announced within the last 90 days, or else be re-announced, before the authority can be used. Army officials from one MTF expressed interest in expanding the list of medical specialties granted direct hiring authority. Air Force officials from one MTF stated that direct hire authority can help obtain qualified candidates but does not necessarily shorten the hiring process. Challenges in the federal hiring process are a longstanding issue.
In 2003, we reported on the need to improve executive agencies’ hiring processes, with the majority of federal agencies included in our review reporting that it takes too long to hire quality employees. Our 2016 review of the extent to which federal hiring authorities were meeting agency needs found that the Office of Personnel Management (OPM) and other agencies do not know whether the authorities are meeting their intended purposes. In 2018, we reported that DOD’s review of selected sites, including two MTFs, found varying use of hiring authorities, unfamiliarity among managers with all available authorities, and a belief among managers that expanded use of some authorities is needed to produce more quality hires. Finally, our 2018 review of DOD laboratories’ use of hiring authorities found that officials used hiring authorities but identified challenges such as delays in processing personnel actions and the overall length of the hiring process.

Contracting process. Senior officials at five of six MTFs stated there are challenges in obtaining contractor services, including the processing time before personnel are available to perform work and restrictions imposed by statute. Senior officials from two Air Force MTFs stated that after a contract is awarded, contractors may have up to 60 days to present a candidate; officials from one MTF stated that if the MTF rejects the candidate, the vendor has another 30 to 60 days to find a replacement. According to officials at one Air Force MTF, at times they have to consider whether to accept a subpar candidate or leave a position vacant. Further, senior Air Force officials stated that controls on contract spending limit their flexibility in hiring. To help fill temporary contract positions, which are less attractive to candidates, officials stated the Air Force pays higher rates to the vendor that include the salaries of the personnel and the vendor’s overhead costs.
In 2018, we reported that DOD’s negotiated price of a contract includes direct costs, such as labor and non-labor costs; indirect costs, such as overhead; and service contractor profit. Senior officials from the two Army MTFs stated that the moratorium on public-private competitions is a challenge because they cannot outsource federal civilian functions to contracted services when there are shortages of military or federal civilian personnel, even when doing so is the optimal choice. For example, according to officials, contractors cannot perform the functions of a civilian position when that position is vacated.

Uncompetitive Federal Civilian Salaries and Contractor Compensation

Federal civilian employee salaries. Senior officials at each of the six MTFs stated it is a challenge to fill federal civilian medical positions because of lower salaries compared to the private sector. In 2017, DOD reported difficulty hiring and retaining health care workers due to competition from the private sector, among other things. We have previously reported on challenges related to the ability to provide competitive salaries for some DOD health care providers. Specifically, in 2015 we reported that officials from all three military departments stated that their inability to create compensation packages for federal civilian mental health providers that compete with the private sector affected their recruiting and retention of providers. In 2018, we noted similar concerns in recruiting military physicians. Senior officials from each of the six MTFs we spoke with stated that the ability to utilize hiring flexibilities, such as special salary rates, helps mitigate this challenge, but officials at four of six MTFs also expressed concerns about their adequacy.
To provide higher pay for some occupations, OPM may establish a higher salary rate for an occupation or group of occupations in one or more geographic areas to address existing or likely significant handicaps in recruiting or retaining well-qualified employees. Senior officials from four of six MTFs stated special salary rates are helpful but not sufficient. Officials at one Navy MTF noted that two primary care providers left within the last year for better pay in the private sector, negatively affecting access to care. Officials at one Army MTF noted that the application for special salary rates can take 2 years or more, and therefore may not address short-term hiring needs. Further, officials from one Navy MTF stated they continue to face difficulty hiring for positions allowed special salary rates, such as pharmacist and registered nurse positions. Our 2017 review of federal agency use of special payment authorities approved by OPM—such as special salary rates—found that agencies reported that access to authorities had positive effects—such as on staff retention and applicant quality—but had few documented effectiveness assessments. DOD is also authorized to offer DOD health care personnel a number of salary rates established for Veterans Health Administration (VHA) personnel. For example, DOD established a civilian physicians and dentists pay plan using this authority. However, officials stated concerns about the rates’ usefulness. Senior officials from one Air Force hospital noted that although the VHA salary levels are higher than the General Schedule levels that DOD typically offers, they may not be competitive with the private sector. Moreover, senior officials from one Army MTF expressed an interest in accessing VHA salary rates for additional occupations because Army personnel often leave to work at a nearby Veterans Affairs hospital for higher pay. 
In 2017, we reported on VHA physician recruitment and retention strategies, and officials from the six VA medical centers in our review stated that physician salaries were often below those offered by local private sector, academic, and some state government employers.

Contractor compensation. Senior officials from five of six MTFs stated that private sector contract vendors face the same challenges as the government regarding uncompetitive salaries. As a result, some contracts have low fill rates or go unfilled. For example, senior officials at one Navy MTF said one of its vendors has not been able to fill a clinical pharmacy position for more than a year. Additionally, senior officials at the other Navy MTF we spoke with stated that a vendor was not meeting its local needs because the fill rate at their MTF is lower than the average fill rate across all Navy MTFs, which is what the vendor is required to meet. Further, senior officials at two of six MTFs—one Navy and one Air Force—stated some of their vendors have attempted to fill positions by sending multiple providers on a part-time basis to fill the equivalent of one full-time position; they noted the part-time assignments are undesirable and can affect the quality of care.

FTE Targets and Hiring Freezes

Federal civilian FTE targets. Headquarters officials from each of the military departments stated that federal civilian FTE targets are a barrier to effective workforce mix management because they reduce flexibility in utilizing the most efficient personnel type to accomplish the beneficiary mission of the MHS. From fiscal years 2012 to 2017, OSD guidance directed the military departments to manage to a federal civilian FTE target. These targets were intended to prevent growth in the federal civilian workforce, and applied even in cases where having federal civilians perform the work would be most cost-effective.
For example, Air Force headquarters officials stated that due to the federal civilian FTE target, they generally default to hiring contractor personnel when new personnel needs arise. Further, Air Force headquarters officials stated they have not pursued in-sourcing of some contracted functions even though such actions might result in cost savings. The federal civilian FTE targets had varying effects on the operations of the six MTFs we spoke with. Senior officials at two of six MTFs—one Navy and one Air Force—stated that they have not been adversely affected by the federal civilian FTE targets because the relatively high number of vacancies in their funded federal civilian positions means that they never exceed their target. Conversely, officials at one Air Force MTF stated they have considered hiring additional private sector contractor services when they reach their allowed federal civilian FTEs. During the course of our review, DOD issued its National Defense Business Operations Plan for Fiscal Years 2018 to 2022, which states that it would discontinue the use of federal civilian FTE targets because they acted as artificial and arbitrary constraints on the workforce, and encouraged the military departments to utilize hiring flexibilities to identify the most appropriate and economical personnel type to achieve their mission. In 2002, we reported that federal hiring policies should, among other things, avoid arbitrary full-time equivalent or other arbitrary numerical goals.

Federal civilian hiring freezes. Senior officials at five of six MTFs stated that federal civilian hiring freezes adversely affect MTF operations. As part of planning for sequestration in fiscal year 2013, DOD imposed hiring freezes on federal civilian personnel. Further, there was a federal civilian hiring freeze from January 2017 to April 2017.
Senior officials from three of six MTFs reported that hiring freezes lower morale and elongate the already lengthy hiring process, even when they are granted waivers to continue to hire. Further, senior officials from one Army MTF stated that hiring freezes limit their ability to shape their workforce, and often result in higher costs when they increase the size of their contracted workforce in accordance with their needs. We reported in 2018 that defense laboratory officials we surveyed identified government-wide hiring freezes as a challenge to hiring candidates, stating that candidates accepted other offers due to delays created by the freeze and that hiring efforts continue to be adversely affected even after a freeze is lifted. These three key hiring challenges limit the military departments’ ability to strategically consider the advantages of converting one source of support to another, and limit their ability to hire the appropriate personnel type or for contract vendors to fill positions. According to senior MTF officials, these key hiring challenges and low fill rates in some areas can result in personnel gaps that can adversely affect the operations of MTFs. When personnel gaps arise, officials stated, military personnel often must work additional hours or must be borrowed from other facilities. Senior officials from one Navy MTF cited the example of about $16,000 in travel expenses for the temporary transfer of an active duty nurse stationed in Japan to work at an MTF in the United States for 3 months because the MTF was not able to fill the position by other means. Additionally, senior officials from one Air Force MTF noted that morale of their military staff is negatively affected by the extra hours and additional responsibilities placed on them to ensure continued operations. Further, officials stated that personnel gaps can negatively affect care. Due to concerns about patient safety, MTFs may decide to discontinue some services.
Senior officials from five of six MTFs reported discontinuing some services as a result of these challenges and referred patients to the TRICARE network or to Veterans Affairs facilities. Referring patients to the private sector can have secondary effects on MTF operations, such as on hospital accreditations. Senior officials from one Navy MTF noted that in the past fiscal year they had to refer patients to private sector care after two hematology-oncology physicians resigned, which may affect their hematology-oncology program’s accreditation. Senior officials at the other Navy MTF stated that in the last fiscal year they could not meet the minimum staffing standards for labor and delivery staff and therefore sent patients to the TRICARE network. They noted they are also having difficulty filling key administrative positions related to quality control of laboratory services and are concerned about maintaining their pathology program accreditation. Senior officials from MTFs reported varying fill rates for military and civilian personnel, and for the contractor personnel provided by private sector vendors. However, officials from the MTFs we spoke with stated that fill rates may not illustrate the availability of personnel. For example, officials stated that authorizations for military personnel are counted as filled even when a servicemember is deployed and therefore not working at the MTF. In addition, MTF officials stated that any on-board civilians without corresponding authorizations inflate the civilian fill rate, resulting in a fill rate of greater than 100 percent. In addition, DOD officials noted that DOD pays for contracted services and does not directly employ contractor personnel. 
Therefore, the fill rate for contractors represents either the number of authorized FTEs in the individual contract or positions filled by contractors noted on the MTF’s force planning document, which could also result in fill rates of greater than 100 percent, even as other positions remain unfilled. The MTFs that we spoke with reported the following fill rates:

Two Navy MTFs. The fill rates for military personnel, federal civilian personnel, and funded positions designated for contracted services were 79 percent, 81 percent, and 94 percent, respectively, at one Navy MTF and 93 percent, 53 percent, and 62 percent, respectively, at the other MTF.

Two Air Force MTFs. The fill rates for military personnel, federal civilian personnel, and funded positions designated for contracted services were 98 percent, 86 percent, and 91 percent, respectively, at one Air Force MTF and 94 percent, 74 percent, and 90 percent, respectively, at the other MTF.

Two Army MTFs. The fill rates for military personnel, federal civilian personnel, and funded positions designated for contracted services were 91 percent, 118 percent, and 87 percent, respectively, at one Army MTF. At the other MTF, the fill rate for military personnel was 94 percent and for federal civilian personnel was 107 percent, but MTF officials did not provide fill rate information for positions designated for contracted services because there are no corresponding authorizations on their force planning document.

DOD has been taking some steps to attempt to address these key hiring challenges. Specifically, DOD’s 2016 Strategic Workforce Plan included steps DOD was taking to address personnel gaps, such as a targeted recruitment program for critical skills, including 27 harder-to-fill medical occupations.
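The fill-rate arithmetic described above is straightforward, but the caveats matter. The sketch below uses hypothetical counts (not figures from any specific MTF) to show how on-board personnel without corresponding authorizations push a fill rate above 100 percent:

```python
def fill_rate(on_board: int, authorized: int) -> float:
    """Fill rate: on-board personnel as a percentage of authorized positions."""
    return on_board / authorized * 100

# Hypothetical counts: 50 authorized civilian positions but 59 civilians
# on board, because some on-board civilians lack corresponding authorizations.
civilian_rate = fill_rate(on_board=59, authorized=50)
print(f"Civilian fill rate: {civilian_rate:.0f}%")  # exceeds 100 percent

# A deployed servicemember still counts against a filled authorization, so a
# 100 percent military fill rate can overstate on-site availability.
military_rate = fill_rate(on_board=100, authorized=100)
print(f"Military fill rate: {military_rate:.0f}%")
```

As the report notes, both effects mean a high fill rate does not necessarily reflect the personnel actually available to see patients at the MTF.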
In 2018, DOD published a Human Capital Operating Plan which states that it replaces the previously required Strategic Workforce Plan, but DOD does not yet have a plan of action specific to the medical professions. Further, DOD officials stated that components are encouraged to consider developing their own human capital operating plans. With regard to contracting, in response to a requirement in the National Defense Authorization Act for Fiscal Year 2017, DOD issued a status report in January 2018 on the development of its acquisition strategy for health care services at MTFs. The report notes that contracting for health care services is fragmented, and the report outlines DOD’s plan to move toward a single contract vehicle for health care services and to establish metrics for the strategy, such as measurement of contract fill rates. While these steps represent efforts to address these challenges, responsibility for management of the federal civilian and contractor workforces within the MHS will soon see significant changes. Specifically, in December 2016, Congress directed the transfer of administrative responsibility for MTFs from the military departments to the DHA. Further, Congress amended the law in 2018 to specify that the transfer should be completed by September 30, 2021. The law also states that at each MTF, the Director of the DHA has the authority to determine total workforce requirements, direct joint manning, and address personnel staffing shortages, among other things. Although the DHA will soon begin to assume these responsibilities and the challenges associated with them, a senior OASD(HA) official responsible for human capital issues stated that the DHA currently has no strategic total workforce plan, or similar document, to help ensure execution of an appropriate workforce mix at its MTFs. 
According to GAO’s key questions to assess agency reform efforts, strategic workforce planning should precede any staff realignments or downsizing, so that changed staff levels do not inadvertently produce skills gaps or other adverse effects that could result in increased use of overtime and contracting. GAO’s key principles for effective strategic workforce planning and applicable federal regulations have shown that addressing a critical human capital challenge—such as closing or reducing personnel gaps—requires tailored human capital strategies and tools and metrics by which to monitor and evaluate progress toward reducing gaps. Although many hiring challenges are longstanding government-wide issues, GAO’s model of strategic human capital management states that agencies need not wait for comprehensive civil service reform to modernize their human capital approaches. In addition, according to OPM’s standards for strategic workforce planning, human capital strategies should be integrated with acquisition plans, among other things, such as DOD’s acquisition strategy for health care services at MTFs. As the DHA finalizes its plans for assuming administrative control of MTFs, senior leaders may find that they face the same challenges reported by the military departments in executing an appropriate workforce mix. DHA could mitigate these challenges to executing the appropriate workforce mix in the MTFs by engaging in strategic workforce planning, including tailored human capital strategies, tools, and metrics by which to monitor and evaluate progress toward reducing gaps, and integrating this planning with DOD’s acquisition strategy for health care services at MTFs. 
The Military Departments and DHA Have Not Decided How Military Personnel Will Meet Operational and Beneficiary Missions after the Transfer of Administrative Responsibility for MTFs to DHA

The planned transfer of administrative responsibility for MTFs from the military departments to the DHA may present challenges to DOD’s management of military personnel. Specifically, the military departments and DHA have not determined how military personnel will meet both the operational and beneficiary missions of the MHS after the transfer of administrative responsibility for MTFs to the DHA. Historically, each military department has been responsible for managing its military personnel to ensure it meets its operational mission and appropriately staffs its MTFs, and the challenge of balancing these missions was the responsibility of each respective military department. However, the transfer of administrative responsibility for MTFs to the DHA will separate these missions—the operational mission will be the responsibility of the military departments, and the beneficiary mission will be the responsibility of the DHA, with military personnel used to support both missions. The plan for transfer of administrative responsibility for MTFs to the DHA states that the military departments will retain ultimate control over military personnel, who will work within the MTFs on a day-to-day basis to maintain their readiness to provide operational medical care, while the DHA will eventually assume responsibility for federal civilian and contractor personnel and all other aspects of MTF management. DOD officials stated that the planned transfer will allow the military departments to focus their attention on readiness to provide operational medical care, while the DHA will focus its attention on efficient management of beneficiary health care operations.
As a result of this separation of missions, challenges in the management of military personnel could be exacerbated by transfer of responsibility for achieving these missions to separate organizations in the following three ways. First, DHA and the military departments have not clearly identified how they will manage the assignment of military personnel to MTFs. The implementation plan for transfer of administrative responsibility for MTFs to the DHA states that the departments will continue to be responsible for assignment of military personnel to MTFs. However, DOD’s stated desire to place greater emphasis on the readiness mission may affect current MTF staffing practices. For example, military department officials told us that it is common practice to assign military personnel to locations that face challenges in hiring federal civilian and contractor medical personnel to maintain access to medical care in these locations. However, the transfer implementation plan states that the departments will provide military personnel to the MTFs only to the extent that the MTFs can provide sufficient workload to maintain providers’ military medical Knowledge, Skills, and Abilities (KSAs). KSAs are a metric for military operational readiness that DOD has not yet finalized. Officials responsible for planning the transfer of administrative responsibility for MTFs to the DHA stated that the emphasis on fulfilling KSAs in the future may result in concentrating military providers in larger MTFs, which can provide opportunities for providers to fulfill KSAs. However, this change could create a disadvantage for smaller facilities, which may not be able to provide military providers with as much practice and already face challenges in hiring federal civilian and contractor personnel. Second, DHA and the military departments have not clearly identified how they will mitigate the effect of deployments of military medical personnel on MTF operations. 
When medical personnel are deployed out of MTFs to provide operational care, their absence can create a gap or reduction in capability at the affected MTF, according to military department officials. The military departments, prior to the transfer, manage deployments and are responsible for ensuring appropriate staffing at the MTFs in the absence of deployed personnel. Officials at all six of the MTFs we visited cited challenges with mitigating the effect of deployments on MTF operations. DOD has stated that after the transition, there will be no barriers to the military departments’ access to personnel for deployment, and has highlighted options for addressing staffing gaps, such as using borrowed military personnel, contractors, or referral to the TRICARE network. However, officials at all six of the MTFs we spoke with stated that contracting for medical services was not sufficiently timely or effective, and officials at one MTF noted that referral to the TRICARE network was difficult in their area. According to officials within the MTFs of the National Capital Region (NCR), which is directly managed by the DHA rather than the military departments, management of deployments and their adverse effect on hospital staffing has been a challenge. For example, officials cited a period in the summer of 2017 when, due to overlapping deployments across military departments, 8 of 9 general surgeons at Fort Belvoir Community Hospital in Virginia were simultaneously deployed, and patients had to be referred to private providers within the TRICARE network or sent to Walter Reed National Military Medical Center in Maryland. Although the military departments and the DHA have executed a Memorandum of Agreement concerning coordination for service personnel to fill scheduled deployments, this does not always prevent gaps in medical specialties. For example, officials noted that requests for volunteer deployments are not always vetted through NCR management.
Further, addressing these gaps can be challenging. Specifically, officials cited difficulties in successfully contracting for medical services and reported that requests for backfill support from the reserve components have associated costs and are difficult to execute. Third, DHA and the military departments have not clearly identified how they will manage changes to the size or composition of the active duty medical workforce that affect workforce balance within MTFs. Since 2008, the military departments have been prohibited from converting medical positions designated for military personnel to positions that can be filled by federal civilians—even when such conversions would result in cost savings. Air Force headquarters officials noted that they have identified more than 4,000 medical positions to review for possible conversion to achieve cost savings, particularly in medical specialties with excess military personnel, such as family practice and pharmacy. Air Force officials previously identified 4,724 positions for conversion beginning in fiscal year 2005, of which 1,449 were completed before the prohibition was enacted. The Army planned to convert 4,340 military positions from fiscal year 2006 through fiscal year 2011, of which 1,459 were completed before the prohibition was enacted. The Army restored 165 of its planned conversions for fiscal year 2007 and reversed or offset the remainder through growth in the active duty medical force after the prohibition was enacted. The National Defense Authorization Act for Fiscal Year 2017 allows for the prohibition on such conversions to be lifted after DOD submits a report that defines the military medical and dental requirements necessary to meet operational medical force readiness requirements, and lists the positions necessary to meet such requirements. However, decisions on conversions taken by the departments could affect MTF operations.
Specifically, existing challenges with hiring federal civilian personnel could create challenges with military-to-civilian conversions. For example, DOD has stated that during the previous round of military to federal civilian conversions, changes in local market conditions affected the ability of the military departments to fill converted positions with civilians in a timely fashion. Medical headquarters officials from the Army stated that they currently have no intention to use conversions if the prohibition is lifted; Navy officials stated they currently do not plan to use conversions since their military personnel requirements exceed their authorizations. Senior officials from one Navy MTF we spoke with stated that if conversions occurred, recruitment and retention challenges related to hiring federal civilian employees would need to be addressed to ensure such positions are filled. In addition, military department policies can affect workforce balance within MTFs. Specifically, in its modeling for operational medical personnel requirements, the Air Force includes a preference for uniformed personnel to receive primary care from uniformed medical personnel. Officials told us that the Air Force takes this approach, known as the Critical Home Station, because Air Force leadership believes that performance of this function by military personnel provides increased accountability for medical readiness. For example, senior officials from one Air Force MTF stated they believe the policy is important for the Air Force to maintain access to information about health factors that could render a servicemember not medically qualified to deploy. Air Force medical headquarters officials estimate that the policy results in 2,000 positions reserved for military personnel that could be designated for federal civilian or contractor performance.
Leading practices for results-oriented government state that cooperating federal agencies need to sustain and enhance their collaboration in several ways, including the development of policies and procedures to operate across agency boundaries and agreement on their respective roles and responsibilities. However, planning for the transition by the DHA and the military departments has not yet included development of policies and procedures for management of military personnel and agreement on specific roles and responsibilities for the military departments and the DHA in this process. The MHS process for collaborating across agency boundaries, known as MHS Governance, emphasizes collaborative work in the management of the MHS. This forum could provide an opportunity for the military departments and the DHA to develop policies and procedures for management of military personnel and agree on specific roles and responsibilities for the military departments and the DHA in this process. Until DHA and the military departments develop such policies and procedures and agree on roles and responsibilities, the MHS may continue to face a number of challenges related to the transfer of administrative responsibility for MTFs to the DHA. Conclusions Given the size of the MHS, its central importance to the success of DOD’s mission, and its cost, having the right mix of military, federal civilian, and contractor personnel providing medical care within MTFs and in deployed operational settings should be a key priority for DOD leadership. While the military departments have policies and procedures in place to assess medical workforce mix in both settings, the shortcomings we have highlighted present barriers to achieving an appropriate workforce mix. Recently, for example in the 2018 National Defense Business Operations Plan, DOD has emphasized the need to reassess who can most efficiently perform all aspects of DOD’s mission.
However, the military departments’ planning processes for operational medical personnel requirements continue to rely solely on military personnel, despite the use of federal civilians and contractors in operational settings, and the military departments have not developed full information on the cost of their medical forces or incorporated such information into decision-making processes about the mix of active and reserve component personnel. Similarly, the transfer of administrative responsibility for MTFs to the DHA represents an opportunity to reassess workforce mix at the MTFs. However, long-standing challenges in the management of federal civilian and contractor personnel, coupled with challenges related to the management of medical personnel after the transfer, could overshadow and cast doubt on the success of that reform. Without addressing the concerns we have highlighted, DOD may miss the opportunity presented by current transformation efforts in the MHS to ensure it has in place the most cost-effective mix of personnel in its workforce to accomplish its medical mission. Recommendations for Executive Action We are making five recommendations to the Department of Defense. The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the military departments, perform an assessment of the suitability of federal civilian and contractor personnel to provide operational medical care and incorporate the results of the assessment into relevant policies, if warranted. (Recommendation 1) The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness require consideration of cost when making determinations regarding the mix of active and reserve component medical personnel.
(Recommendation 2) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in collaboration with the Director of Cost Assessment and Program Evaluation and the military departments, develop full cost information for active and reserve component medical personnel, and the military departments use that information in their determinations regarding the mix of active and reserve component medical personnel. (Recommendation 3) The Secretary of Defense should ensure that the Director of the Defense Health Agency develop a strategic total workforce plan that includes, among other things: (1) tailored human capital strategies, tools, and metrics by which to monitor and evaluate progress toward reducing personnel gaps; and (2) integration of human capital strategies with acquisition plans, such as DOD’s acquisition strategy for health care services at DOD’s military treatment facilities. (Recommendation 4) The Secretary of Defense and the Secretaries of the Army, the Navy, and the Air Force, respectively, should ensure that, accompanying the transfer of administrative responsibility for military treatment facilities to the Defense Health Agency, the Defense Health Agency and the military departments develop policies and procedures for management of military personnel, including agreement on specific roles and responsibilities for the military departments and the Defense Health Agency in this process. (Recommendation 5) Agency Comments In written comments on a draft of this report, DOD concurred with our five recommendations concerning additional assessments needed to better ensure an efficient MHS total workforce. DOD’s comments are reprinted in appendix II.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense for Health Affairs, the Director of Cost Assessment and Program Evaluation, the Director of the Defense Health Agency, and the Secretaries of the Army, the Navy, and the Air Force. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Scope and Methodology To address the extent to which the military departments’ planning processes for operational medical personnel requirements have assessed the mix of federal civilian, contractor, active and reserve medical personnel (i.e. workforce mix), we compared the military departments’ efforts in planning for operational medical personnel requirements to the Department of Defense (DOD) and department-level policies and guidance on workforce mix determination and identifying the full cost of its military medical personnel. DOD Directive 1100.4 states that authorities should consider all available sources when determining workforce mix. DOD Instruction 1100.22 directs the steps that workforce planning authorities must take in planning for personnel requirements and emphasizes consideration of all potential workforce sources and an accurate understanding of personnel costs. We also reviewed related DOD documentation on identifying military essential positions and the use of alternative workforces.
Specifically, DOD’s National Defense Business Operations Plan for fiscal years 2018 through 2022 states that workforce rationalization strategies include, among other things, reassessing military manpower allocations for military essentiality and identifying functions and positions that are commercial in nature that may be appropriately or efficiently delivered via private sector support. Moreover, DOD’s 2017 Workforce Rationalization Plan recognizes DOD’s civilians as an essential enabler of its mission capabilities and operational readiness and noted that there are numerous opportunities for the military departments, combatant commands, and others to make well-reasoned adjustments to workforce mix. To determine the extent to which federal civilians and contractors were deployed to provide medical care, we reviewed federal civilian and contractor deployment data from fiscal years 2013 through 2017. We analyzed data for this timeframe to enable us to identify deployments over the last 5 years, and fiscal year 2017 was the most recent full fiscal year of available data at the time of our review. To assess the reliability of these data, we electronically tested the data to identify obvious problems with completeness or accuracy and interviewed knowledgeable agency officials about the data. We found these data to be limited in that they may not be sufficiently reliable for identifying the universe of deployments. However, we found the data to be sufficiently reliable for the purposes of reporting that federal civilians and contractors have been deployed to provide medical care.
Further, we interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness (USD(P&R)), Office of the Assistant Secretary of Defense for Health Affairs (OASD(HA)), Defense Civilian Personnel Advisory Service, the military departments, and selected combatant commands to identify considerations and any challenges of using different personnel categories as workforce alternatives for meeting operational medical requirements. To determine the appropriate use of the active and reserve components for DOD’s operational medical personnel military requirements, we compared the military departments’ efforts in assessing their active and reserve balance to DOD and department-level policies and guidance. Specifically, in a 2013 DOD report issued in response to section 1080A of the National Defense Authorization Act for Fiscal Year 2012, DOD established five factors that play a key role in active and reserve component balance decisions, including the cost of unit manning, training, and equipping. According to the report, cost is often outweighed by other factors when making active component and reserve component mix decisions, but should always be considered in active component and reserve component mix decisions. DOD Instruction 7041.04 provides guidance for the military departments to identify the full cost of their active component, federal civilian, and contractor workforces. Moreover, we interviewed officials from the military departments to discuss: (1) how they determine their operational medical requirements and if they identified the full cost of active and reserve component medical personnel, and (2) the use of the active and reserve components for operational requirements and any efforts to assess the balance of active and reserve component medical personnel. To determine the mix of active and reserve component medical personnel, we analyzed authorization data from the Health Manpower and Personnel Data System for fiscal year 2017.
We analyzed data for fiscal year 2017 because this was the most recent year of available data at the time of our review. To assess the reliability of these data, we electronically tested the data to identify obvious problems with completeness or accuracy and interviewed knowledgeable agency officials about the data. We found the data to be sufficiently reliable for reporting on the allocation of authorizations for active and reserve component medical personnel. To address how the military departments determine the most appropriate workforce mix at military treatment facilities (MTFs) and any challenges in executing an appropriate workforce mix, we reviewed DOD and department-level policies and guidance on workforce mix determination. We also reviewed the military departments’ efforts in planning, staffing, and filling MTF requirements. We spoke with knowledgeable officials from the Office of the USD(P&R), OASD(HA), DHA, and the military departments and requested documentation related to how they oversee or implement legal or policy requirements, such as DOD Instruction 1100.22’s manpower mix criteria, and the annual inventory of inherently governmental and commercial activity. To determine the proportion of reported military, federal civilian, and contractor personnel providing or supporting care in MTFs, we obtained budgetary data for fiscal year 2017, which was the most recent full fiscal year of available data at the time of our review. To assess the reliability of these data, we compared them to the information reported in the fiscal year 2017 Defense Health Program justification estimates published in February 2018 to identify key differences and interviewed knowledgeable agency officials about the data. We found the data to be sufficiently reliable for the purposes of describing workforce mix of military, federal civilian, and contractor personnel within MTFs. 
To understand how policies and procedures to determine and execute an appropriate workforce mix are implemented at MTFs, we interviewed military department medical command or agency officials responsible for implementing DOD total force policy. To better understand policy and procedure implementation at MTFs we selected six MTFs - two each from the Army, Navy, and Air Force - to allow a cross-section of views concerning the management of the military departments’ workforce mix at the MTFs and hiring conditions in different types of labor markets. The two MTFs from each military department were selected based on consideration of average daily patient load and MTF bed size, which we obtained from the Defense Health Agency. For each MTF, we interviewed officials responsible for the leadership and management of MTF personnel and operations and requested and reviewed relevant documentation. We reviewed their responses, which highlighted some challenges related to achieving an appropriate workforce mix, and DOD’s plans for addressing these challenges. We compared these to GAO’s key questions to assess agency reform efforts, which note that strategic workforce planning should precede any staff realignments or downsizing, and GAO’s key principles for effective strategic workforce planning, which state that addressing a critical human capital challenge—such as closing or reducing personnel gaps—requires tailored human capital strategies and tools and metrics by which to monitor and evaluate progress toward reducing gaps. We also reviewed these plans in light of OPM’s standards for strategic workforce planning, which note that human capital strategies should be integrated with acquisition plans, among other things, such as DOD’s acquisition strategy for health care services at MTFs. 
Finally, we requested from officials at each MTF information on personnel inventory and authorizations to understand their ability to fill military and civilian positions, and the contract vendors’ ability to fill positions designated for contracted services. We also reviewed how the planned transfer of administrative responsibility for MTFs from the military departments to the DHA might affect DOD management of military personnel within the MHS. To identify (1) responsibilities of the military departments that may be transferred to the DHA, and (2) challenges that may continue under the new organizational structure, we reviewed relevant documentation and interviewed knowledgeable officials. To understand potential challenges related to the assignment of military personnel to MTFs, we interviewed military department officials responsible for the assignment of military personnel. To identify how deployments affect MTF operations, if at all, we interviewed officials responsible for the leadership and management of MTF personnel and operations. Lastly, to understand how the military departments manage the size and composition of the active duty medical workforce, we requested documentation related to the development of operational personnel requirements and interviewed knowledgeable officials. We also reviewed previous efforts to alter the size or composition of the active duty medical workforce, such as military to civilian conversions. We compared DOD’s efforts to plan for these challenges to leading practices for results-oriented government, which state that cooperating federal agencies need to sustain and enhance their collaboration in several ways, including the development of policies and procedures to operate across agency boundaries and agreement on their respective roles and responsibilities. We conducted this performance audit from September 2017 to November 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Lori Atkinson, Assistant Director; Tracy Barnes; Alexandra Gonzalez; Adam Howell-Smith; Kirsten Leikem; Amie Lesser; Richard Powelson; Clarice Ransom; Stephanie Santoso; Amber Sinclair; and John Van Schaik made key contributions to this report. Related GAO Products Military Personnel: Additional Actions Needed to Address Gaps in Military Physician Specialties. GAO-18-77. Washington, D.C.: February 28, 2018. Defense Health Reform: Steps Taken to Plan the Transfer of the Administration of the Military Treatment Facilities to the Defense Health Agency, but Work Remains to Finalize the Plan. GAO-17-791R. Washington, D.C.: September 29, 2017. Defense Health Care Reform: DOD Needs Further Analysis of the Size, Readiness, and Efficiency of the Medical Force. GAO-16-820. Washington, D.C.: September 21, 2016. Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013. Military Personnel: DOD Addressing Challenges in Iraq and Afghanistan but Opportunities Exist to Enhance the Planning Process for Army Medical Personnel Requirements. GAO-11-163. Washington, D.C.: February 10, 2011. Military Personnel: Enhanced Collaboration and Process Improvements Needed for Determining Military Treatment Facility Medical Personnel Requirements. GAO-10-696. Washington, D.C.: July 29, 2010.
Military Personnel: Status of Accession, Retention, and End Strength for Military Medical Officers and Preliminary Observations Regarding Accession and Retention Challenges. GAO-09-469R. Washington, D.C.: April 16, 2009.
Why GAO Did This Study The MHS includes more than 241,000 active duty, reserve, federal civilian, and contractor personnel who provide (1) operational medical care in support of war and other contingencies and (2) beneficiary medical care within DOD's hospitals and clinics. Senate Report 115-125, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018, contained a provision for GAO to review how DOD determines its mix of military, federal civilian, and contractor personnel. This report examines the military departments' planning processes for determining (1) operational medical personnel requirements, including an assessment of the mix of federal civilian, contractor, and active and reserve medical personnel; and (2) the most appropriate workforce mix at MTFs and any challenges in executing their desired workforce mix. GAO compared MHS staffing practices with DOD policy, and analyzed fiscal year 2017 budgetary data to determine the proportion of military, federal civilian, and contractor personnel. GAO also interviewed senior leaders at six MTFs. What GAO Found The military departments each have their own processes to determine their operational medical personnel requirements; however, their planning processes to meet those requirements do not consider the use of all medical personnel or the full cost of military personnel. Specifically: The Department of Defense (DOD) has not assessed the suitability of federal civilians and contractors to meet operational medical personnel requirements. Federal civilians and contractors play key roles in supporting essential missions, such as providing operational assistance via combat support. Military department officials expressed a preference for using military personnel and cited possible difficulties in securing federal civilian and contractor interest in such positions.
An assessment of the suitability of federal civilians and contractors could provide options for meeting operational medical personnel requirements. When determining the balance of active and reserve component medical personnel, the military departments' processes generally do not consider full personnel costs, including education and benefits. Specifically, officials stated that the Army and the Navy do not consider personnel costs in their assessment of the appropriate balance between active and reserve personnel, and the Air Force's analysis had some limitations. DOD policy states that workforce decisions must be made with an awareness of the full costs. Further, in a 2013 report, DOD identified the cost of unit manning, training, and equipping as one of five factors that play a key role in decisions concerning the mix of active and reserve component forces. By developing full cost information for active and reserve component medical personnel, DOD can better ensure an appropriate and cost-effective mix of personnel. The military departments have taken actions, such as establishing policies and procedures, to assess the appropriate workforce mix for beneficiary care within Military Treatment Facilities (MTFs), but challenges remain. The military departments distribute military personnel across the MTFs and then use policies and procedures to consider risks, costs, and benefits to determine how to fill the remaining positions with federal civilians and contractors. However, a number of challenges, including lengthy hiring and contracting processes and federal civilian hiring freezes affect DOD's ability to use federal civilians and contractors. For example, senior officials at each of the six MTFs that GAO spoke with cited challenges with the federal civilian hiring process, and five of six MTFs cited challenges with the contracting process. 
As a result, senior officials from five of six MTFs reported discontinuing some services and referring patients to DOD's TRICARE network of private sector providers or Veterans Affairs facilities. The Military Health System (MHS) is also preparing for the phased transfer of administrative responsibility for MTFs to the Defense Health Agency (DHA), including management of the MTF workforce. According to GAO's report on agency reform efforts, strategic workforce planning should precede any staff realignments or downsizing. However, according to a senior official, the DHA has not developed a strategic workforce plan. Without developing such a plan, the DHA may continue to face the same challenges experienced by the military departments in executing an appropriate and efficient workforce mix at its MTFs. What GAO Recommends GAO recommends that DOD, among other things, (1) assess the suitability of federal civilians and contractors to provide operational medical care; (2) develop full cost information for active and reserve component medical personnel; and (3) develop a strategic total workforce plan for the DHA to help ensure execution of an appropriate workforce mix at its MTFs. In commenting on a draft of this report, DOD concurred with each of GAO's recommendations.
Background CMS intends for the T-MSIS initiative to provide a national data repository that would support federal and state program management, financial management, and program integrity activities, among other functions. T-MSIS is also intended to benefit states by reducing the number of reports CMS requires them to submit, and by improving program efficiency by allowing states to compare their data with other states’ data in the national repository or with information in other CMS repositories, including Medicare data. For example, CMS intends to use T-MSIS data for reports that states are currently required to submit, such as Early and Periodic Screening, Diagnostic, and Treatment Program reports. T-MSIS is designed to capture significantly more data from states than is the case with MSIS, thereby collecting data not previously reported that should provide CMS and states with information to enhance their oversight efforts. T-MSIS includes the five data files that were collected through MSIS: an eligibility file and four claims files (inpatient, long-term care, pharmacy, and other). The scope of data to be collected from these five previously defined MSIS files has expanded to include more detailed information on enrollees, such as their citizenship, immigration, and disability status; and expanded diagnosis and procedure codes associated with their treatments. Additionally, T-MSIS requires states to report three new data files on (1) providers, (2) third-party liability, and (3) managed care organizations (MCO). The provider file includes a unique identifier for each provider, as well as data fields to show provider specialty and practice locations. Each of these identifiers can assist CMS and state oversight by providing information on provider referrals, Medicaid payments to specific providers, and identifying ineligible providers.
The third-party liability file includes data on whether a beneficiary has any health insurance in addition to Medicaid, or other potential sources of funds that could reduce Medicaid’s expenditures. Medicaid is generally the payer of last resort, meaning if Medicaid enrollees have another source of health care coverage, that source should pay, to the extent of its liability, before Medicaid does. Information on beneficiaries’ other sources of coverage could help ensure that Medicaid pays only those expenditures for which it is liable. The managed care file includes more detailed information on MCOs, such as type and name of managed care plans, covered eligibility groups, service areas, and reimbursement arrangements. In addition to identifying which MCOs are reporting encounter data as required, this file could help CMS’s oversight by allowing the agency to identify excess plan profits and volatility of expenditures for some beneficiary groups across states. In total, T-MSIS includes approximately 1,400 data elements, according to CMS. Many of these elements, however, have content that is used in more than one of the eight T-MSIS files. For example, the element “DATE OF BIRTH” is required in five T-MSIS files—Eligibility, Claim Inpatient, Claim Long-term Care, Claim Prescription, and Claim Other. CMS requires states to report all T-MSIS elements that are applicable to their programs, and has worked closely with states to facilitate their efforts to report these data. For example, before CMS approves a state for reporting T-MSIS data, states must complete a number of activities, including developing detailed work plans and completing a series of data testing phases. For a state to meet CMS’s requirements for submitting T-MSIS data, it must report data for all eight files, but not necessarily all elements within each file. In addition, T-MSIS includes aspects aimed at improving the timeliness and accuracy of data submitted by states. 
For example, CMS requires states to report T-MSIS data monthly, rather than quarterly, as was the case with MSIS. Regarding data accuracy, T-MSIS includes approximately 2,800 automated quality checks that provide states with feedback on data format and consistency, according to CMS; this is in contrast to MSIS, which had relatively few automated checks. Other quality checks are to ensure logical relationships across T-MSIS files. Both we and the HHS-OIG have previously recommended that CMS take steps to address the quality of T-MSIS data. In our January 2017 report, we recommended that CMS take immediate steps to assess and improve T-MSIS data. As part of that effort, we noted that CMS could refine their T-MSIS data priority areas to identify those that are critical for reducing improper payments and expedite efforts to assess and ensure their quality. CMS agreed with our recommendation, but as of September 2017, the agency had not implemented it. More recently, the HHS-OIG reported that CMS and states continue to have concerns regarding the completeness and reliability of T-MSIS data, echoing concerns raised in its 2013 review of CMS’s T-MSIS pilot program. The HHS-OIG noted it was concerned that CMS and states would delay further efforts rather than assign the resources needed to address the outstanding challenges, and reaffirmed its 2013 recommendation that CMS establish a deadline for when T-MSIS data will be available for program analysis and other management functions. Despite Challenges Converting State Data to the T-MSIS Format, Nearly All States are Reporting T-MSIS Data, and CMS Has Shifted Its Focus to Improving Data Quality Despite challenges converting their data to the T-MSIS format, most states were reporting T-MSIS data as of November 2017, representing significant progress over the past year. With most states reporting, CMS has shifted its efforts to working with states to improve the quality of T-MSIS data.
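The automated format and consistency checks described above can be illustrated with a minimal sketch. The specific rules and field names below are hypothetical examples for illustration only; they are not drawn from CMS's actual set of roughly 2,800 T-MSIS checks.

```python
import re

def check_record(record):
    """Return a list of data-quality problems found in one submitted record.

    Both rules are illustrative, not actual T-MSIS checks:
    - a format check (dates must be 8-digit YYYYMMDD strings), and
    - a consistency check (a discharge date cannot precede admission).
    """
    errors = []
    if not re.fullmatch(r"\d{8}", record.get("DATE-OF-BIRTH", "")):
        errors.append("DATE-OF-BIRTH must be in YYYYMMDD format")
    # YYYYMMDD strings compare correctly as plain string comparisons.
    if record.get("DISCHARGE-DATE", "") < record.get("ADMISSION-DATE", ""):
        errors.append("DISCHARGE-DATE precedes ADMISSION-DATE")
    return errors

# A record with a malformed birth date and an out-of-order discharge date
# fails both checks.
print(check_record({
    "DATE-OF-BIRTH": "1980-01-01",
    "ADMISSION-DATE": "20170310",
    "DISCHARGE-DATE": "20170301",
}))
```

Running such rules automatically against every monthly submission is what lets states receive feedback on format and consistency problems before the data reach the national repository.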
Overall, 49 States Are Reporting T-MSIS Data; Selected States Identified Converting Their Data into the T-MSIS Format as a Significant Reporting Challenge As of November 2017, 49 states had begun reporting T-MSIS data, a significant increase from the 18 states that had started reporting these data in October 2016. These reporting states represent over 97 percent of the 2017 Medicaid population nationwide. CMS officials told us that they expect all states to report T-MSIS data by 2018. (See fig. 1.) As of November 2017, all eight of our selected states were reporting T-MSIS data, with seven of them having begun in September 2016 or later. Selected states estimated spending a collective $14.16 million on their efforts to report T-MSIS data from October 2011 through June 2017, ranging from approximately $850,000 in Virginia to $4.42 million in Minnesota. (See table 1.) The age and scope of states' existing Medicaid Management Information Systems (MMIS) were among the factors that affected certain states' spending and timing on this effort. Mapping the data—the process by which states convert their data to the T-MSIS format on an element-by-element basis—was the primary challenge our eight selected states identified in reporting T-MSIS data. In some cases, before converting their data to the T-MSIS format, states had to obtain data they had not previously collected from other state entities, MCOs, or providers. For example, Minnesota had to begin collecting information on denied claims from MCOs, and Utah had to collect third-party liability information from other state agencies. In addition, while some state data elements could be converted to the T-MSIS format fairly easily because the relationships between the state elements and the T-MSIS elements were clear, the conversion of other data elements was more complicated. For example, the T-MSIS data element for male and female is "M" and "F," respectively.
Accordingly, in states that identified gender by a numeric value, "1" for male and "2" for female, the conversion to T-MSIS for this element was a fairly straightforward one-to-one relationship. However, for other data elements, the conversion process was more complex, requiring states to expand or collapse their data to match the T-MSIS format. (See fig. 2.) Selected states shared examples of steps they took to convert state data to the T-MSIS format. Louisiana officials noted that they had to map the state's single durable medical equipment (DME) element to multiple specific T-MSIS DME elements, such as DME pharmacy or DME orthotics. Virginia officials said they had to combine three state ambulance service provider elements into a single T-MSIS element. In addition, individuals who had experience with other states' T-MSIS reporting efforts also noted that states may not have always collapsed categories in the same way. For example, one state collapsed its 109 provider categories to match T-MSIS's 57 provider categories, according to an individual who worked with the state on this effort. This individual noted that there were 32 state provider elements that did not directly match a specific T-MSIS element, so the state grouped them all into the "other" T-MSIS element. Changes in CMS's data reporting requirements further complicated some states' efforts to convert their data to the T-MSIS format, according to officials from our selected states. CMS updated the T-MSIS data dictionary—the document that defines the required T-MSIS elements and their reporting formats—twice in 2013 and again in November 2015. According to CMS officials, they updated the data dictionary to clarify and remove inconsistencies from guidance in response to feedback from states. Some of the selected states reported that the changes included in this update required considerable rework, and in some cases, delayed their T-MSIS reporting.
For example, Washington officials noted that the 2015 update became available just as the state was completing a T-MSIS testing phase. Due to the rework required to comply with the new data specifications, the state's efforts to report T-MSIS data were delayed by nearly one year. Minnesota officials similarly cited rework associated with changes to the 2015 data dictionary, which contributed to delays in their efforts to report T-MSIS data. CMS's Efforts to Support States Have Shifted from Reporting T-MSIS Data to Improving T-MSIS Data Quality Over the past 6 years, CMS has relied on a variety of mechanisms to support states' efforts to report T-MSIS data. CMS assigned technical assistants to help states understand the T-MSIS requirements, prioritize steps to report T-MSIS data, and serve as a resource on technical issues. The majority of selected states had positive comments about the technical assistance they received. For example, Pennsylvania officials said the state's technical assistant regularly met with them, answered any questions they had, and facilitated their efforts to complete T-MSIS testing. CMS began hosting national webinars covering a range of topics, including clarification on specific T-MSIS elements that CMS identified as challenging or subject to error, and updates on the nationwide implementation. The webinars also provided an opportunity for states to ask CMS questions about T-MSIS requirements. CMS established web-based avenues through which the agency could compile and disseminate information, as well as elicit questions from states and contractors. For example, CMS provided an electronic option for states to submit questions regarding policy and technical issues. CMS took additional steps to help states, including creating a SharePoint web site through which states are notified about changes in guidance.
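The element-by-element conversions states described earlier—one-to-one recodes such as numeric gender codes, and many-to-one collapses of provider categories—can be sketched as follows. All element names, codes, and category groupings in this example are hypothetical illustrations, not actual T-MSIS specifications.

```python
# Minimal sketch of element-by-element mapping from a state format to a
# T-MSIS-style format. Names and codes are hypothetical illustrations.

# One-to-one conversion: a state numeric gender code maps directly to the
# T-MSIS "M"/"F" format.
GENDER_MAP = {"1": "M", "2": "F"}

# Many-to-one (collapse): several state provider categories are grouped into
# a single broader category; unmatched state categories fall into an "other"
# bucket, a decision that may differ from state to state.
PROVIDER_MAP = {
    "ambulance-basic": "ambulance",
    "ambulance-advanced": "ambulance",
    "ambulance-air": "ambulance",
    "drug-alcohol-counselor": "other",
}

def convert_record(state_record):
    """Convert one hypothetical state record to a T-MSIS-style record."""
    return {
        "SEX": GENDER_MAP[state_record["gender_code"]],
        "PROV-CLASSIFICATION": PROVIDER_MAP.get(
            state_record["provider_category"], "other"
        ),
    }

record = {"gender_code": "2", "provider_category": "ambulance-air"}
print(convert_record(record))  # {'SEX': 'F', 'PROV-CLASSIFICATION': 'ambulance'}
```

As the report notes, states made such grouping decisions independently, so two states could route the same state-level category into different target buckets—one source of the cross-state comparability concerns discussed later.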
With nearly all states having begun reporting T-MSIS data, CMS has shifted its efforts to improving the quality of the T-MSIS data reported, and these efforts are still evolving. For example, to provide states with immediate feedback on their reported T-MSIS data, CMS created an online "operational dashboard" for each state, which provides specific information on errors in its reported data. Using information on the operational dashboard, states can identify the frequency and cause of certain errors, which facilitates their efforts to resolve them more expeditiously and to improve future submissions. All six of the selected states reporting T-MSIS data had positive comments about the value of the operational dashboard, with some of them noting that the feedback on errors was a significant improvement over their experience with MSIS, in which feedback had a considerable time lag. More recently, according to agency officials, CMS has initiated a pilot study with four states to identify anomalies in their reported data that merit further attention, obtain feedback on automated quality measures, and determine the best approach for ongoing quality review. While work on the pilot is ongoing, CMS officials anticipate using what they learn to expand the agency's quality review to include all states. In addition, CMS has turned to external stakeholders to evaluate the quality of T-MSIS data. Specifically, CMS has shared T-MSIS data with a Technical Expert Panel it formed to obtain feedback on inconsistencies and other quality concerns. According to CMS officials, the Technical Expert Panel focused on a preliminary set of T-MSIS data from a limited number of states. The agency officials noted that Technical Expert Panel members include individuals from HHS's Office of the Actuary, the Congressional Budget Office, and the Medicaid and CHIP Payment and Access Commission, among others.
Panel participants analyzed the T-MSIS data from 11 states on the specific topics in which they have expertise. According to CMS officials, the panel is to provide its results to the agency in a summary report. Data Completeness and Comparability Concerns Hinder CMS's and States' Use of T-MSIS for Oversight Ongoing data concerns raise questions about how soon—and to what extent—T-MSIS data will be sufficient to achieve the goals of improving CMS's and states' ability to use Medicaid data for oversight. For example, none of the six selected states that were reporting T-MSIS data as of August 2017 was reporting complete data at that time. In reviewing selected states' documentation of unreported data elements, we determined that the number of unreported data elements ranged from about 80 elements to 260 elements. Although T-MSIS includes about 1,400 data elements, the number of data elements relevant to each state varies, in part, because certain elements may not be applicable to all states and others may be populated at the state's discretion. In addition, the content of some data elements is present in more than one of the eight T-MSIS files. As a result, the number of unreported elements may overstate the extent of state efforts needed to report complete T-MSIS data. Our selected states provided a range of reasons for not reporting T-MSIS data elements, including that certain elements were contingent on federal or state actions. In other cases, state officials indicated that data elements were too costly to report, so they would not be reporting them. We identified further examples in which certain data elements were not applicable to states' Medicaid programs, and therefore were not required. (See table 2.)
Although CMS requires states to report all T-MSIS data elements applicable to their program, CMS officials said they did not specify a reporting deadline for states, and selected states' documentation to CMS did not always include the reasons they did not report certain elements, or whether or when they planned to report them. Due to the lack of clarity and completeness in selected states' documentation, we were not able to identify the reasons for all unreported data elements. However, among our selected states, Virginia's documentation more clearly specified most—but not all—of the reasons it was not reporting 260 T-MSIS elements.
Virginia identified 167 elements that its MMIS did not capture, and noted that once the state's new Medicaid information system is fully implemented in 2019, the state will be able to report them.
Virginia identified 16 elements as pending other state or related actions.
Virginia identified 18 elements as pending the implementation of HHS efforts.
Virginia identified 53 elements as not applicable to aspects of its Medicaid program.
Without complete information from all states on unreported data elements and their plans to report them, it is unclear when—and to what extent—T-MSIS data will be available to use for oversight, which is inconsistent with federal internal control standards for using quality information to achieve objectives. In some cases, data elements important for program oversight were not reported by two or more of the six selected states reporting T-MSIS data, limiting T-MSIS's usefulness for oversight in these areas. (See table 3.) Another factor affecting the ability of CMS and states to use T-MSIS data for oversight is that not all of the 49 states submitting T-MSIS data are submitting current data. According to CMS officials, before beginning to report T-MSIS data, each state stops reporting MSIS data.
At that point, there is a temporary gap in the state's reporting until it receives CMS's approval to begin reporting T-MSIS data. After a state gets CMS's approval, it must first submit the T-MSIS data that correspond to the date that it stopped submitting MSIS data; the data for previous months are known as "catch-up" data. Once a state reports those data, it then shifts to reporting current T-MSIS data. According to CMS, as of November 2017, 42 of the 49 states reporting T-MSIS data were reporting current data; the remaining 7 states were still reporting catch-up data for previous months. Regarding the comparability of T-MSIS data across states, state officials we interviewed cited concerns that could affect their use of T-MSIS for oversight. Officials from most selected states cited the benefit that a national repository of T-MSIS data could provide by allowing them to compare their Medicaid program data—such as spending or utilization rates—to those of other states, which could potentially improve their oversight. However, concerns about comparability of the data make officials from most selected states hesitant to use the data for this purpose. In particular, officials from six of eight selected states, and other individuals we interviewed, are not confident that the decisions states made when converting their data to the T-MSIS format were consistent across states. An individual who worked with other states on T-MSIS reporting efforts noted that states may have made different decisions about what types of providers to include as part of the "all other" category of providers within T-MSIS. While one state he worked with included a range of provider types, such as licensed drug and alcohol counselors and non-emergency medical transportation providers, in the "all other" T-MSIS provider category, other states may have made different decisions.
Some state officials and individuals working with states noted that states' different decisions may complicate their ability to use the data for cross-state comparisons. Further, officials from some of the selected states noted that they were not familiar with the quality of other states' T-MSIS data. CMS has begun to take steps to address the quality of the T-MSIS data; however, its efforts are still evolving. For example, in May 2017, CMS identified 12 data quality priority areas for states to focus on for improving the accuracy and consistency of T-MSIS data, including accurately categorizing beneficiaries into T-MSIS eligibility groups and ensuring consistency related to MCO reporting. CMS has worked to identify existing guidance or develop new guidance for each of these priority areas, and to compile the guidance in a central location for states' reference. As of August 2017, CMS officials said they had compiled guidance for 11 of the 12 areas, and intended to continue work with states on these priorities. In addition, CMS has not created a mechanism to disseminate information about states' data limitations or states' efforts to improve and use the data, which also affects the data's utility for oversight. Officials from four of the eight selected states said that learning more about other states' T-MSIS data could help allay their concerns about comparability, and two of the four states said it could also help them address their own data quality issues. Additionally, officials from all eight selected states were interested in opportunities to learn more about other states' use of the data. CMS officials acknowledged the benefits of a mechanism to disseminate information about states' data limitations more broadly, and to facilitate information sharing among states. CMS officials told us that they plan to launch a Learning Collaborative with states to facilitate feedback and collaboration. This effort could address a range of data-related topics, including data quality.
CMS officials told us they were taking actions to put the Learning Collaborative in place, and may launch the collaborative in early 2018. The lack of an effort to facilitate information sharing is inconsistent with CMS's goals for T-MSIS and with federal internal control standards for using and communicating quality information to achieve objectives. Absent such an effort, CMS is missing an opportunity to help states understand ways they could improve the quality of their T-MSIS data and facilitate states' use of the data for oversight. CMS is also missing an opportunity to expedite quality improvements that could result from states conducting their own independent analyses. Although CMS has taken steps to begin using T-MSIS data, it has not yet fully articulated a plan for how and when it will use T-MSIS data for its own broader oversight efforts of state Medicaid programs. For example, according to CMS officials, the agency has begun to use T-MSIS data to generate Money Follows the Person reports, and has begun exploring additional uses of T-MSIS data to reduce states' reporting burden. These preliminary efforts are consistent with one of CMS's stated goals for T-MSIS, which is to reduce states' reporting burden by relying on T-MSIS data in place of separate reports that states currently submit, and officials from six of eight selected states indicated that such an effort would reduce their reporting burden. However, as of August 2017, CMS officials acknowledged that they had yet to outline how best to use T-MSIS data for program monitoring, oversight, and management, because they were still largely focused on working with the remaining states to begin reporting T-MSIS data, analyzing the quality and usability of the T-MSIS data, and preparing the data for research purposes.
CMS’s lack of a specific plan and time frames for using T-MSIS data is inconsistent with federal internal control standards related to using and communicating quality information to achieve objectives. Absent a specific plan and time frames, CMS’s ability to use these data to oversee the program, including ensuring proper payments and beneficiaries’ access to services, is limited. Conclusions As part of its efforts to address longstanding concerns about the data available to oversee the Medicaid program, CMS has taken important steps toward developing a reliable national repository for Medicaid data. T-MSIS has the potential to improve CMS’s ability to identify improper payments, help ensure beneficiaries’ access to services, and improve program transparency, among other benefits. By providing more standardized data on various aspects of Medicaid—such as spending or utilization rates—states could be better positioned to compare their programs to other states, thereby improving their ability to identify program inefficiencies or opportunities for improvement. Implementing the T-MSIS initiative has been a significant undertaking. Over the past 6 years, CMS has worked closely with states and has reached a point where nearly all states are reporting T-MSIS data. While recognizing the progress that has been made, more work needs to be done before CMS or states can use these data for program oversight. It remains unclear when all states will report complete and comparable T- MSIS data, and how CMS and states will use them to improve oversight. In the interim, improper Medicaid payments continue to increase, reaching $36.7 billion in fiscal year 2017. Further delays in T-MSIS’s use limit CMS’s ability to reverse that trend in the near term, underscoring the need for CMS to take additional steps to expedite the use of the data. Recommendations for Executive Action We are making the following two recommendations to CMS. 
The Administrator of CMS, in partnership with the states, should take additional steps to expedite the use of T-MSIS data for program oversight. Such steps should include, but are not limited to, efforts to obtain complete information from all states on unreported T-MSIS data elements and their plans to report applicable data elements; identify and share information across states on known T-MSIS data limitations to improve data comparability; and implement mechanisms, such as the Learning Collaborative, by which states can collaborate on an ongoing basis to improve the completeness, comparability, and utility of T-MSIS data. (Recommendation 1) The Administrator of CMS should articulate a specific plan and associated time frames for using T-MSIS data for oversight. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to HHS for comment. In its written comments, HHS concurred with our recommendations, and noted that strong Medicaid data can help the federal government and the states move toward better health outcomes and improve program integrity, performance, and financial management. With most states now reporting T-MSIS data, HHS highlighted efforts it has taken to improve the quality of T-MSIS data. For example, HHS developed a database on data quality findings, which could be used to identify solutions for common problems across states, and has begun to develop a data quality scorecard for T-MSIS users, which aggregates data quality findings in a user-friendly tool. Regarding taking steps to expedite the use of T-MSIS data for program oversight, HHS stated that it will (1) continue to work to obtain complete T-MSIS information from all states; (2) take additional steps to share information across states on T-MSIS data limitations; and (3) implement ways for states to collaborate regarding T-MSIS. HHS also noted that it is in the process of developing a plan for using T-MSIS data for oversight.
HHS emphasized that it is dependent on states—and their available staffing and resources—to address data quality and reporting issues. HHS also provided technical comments, which we incorporated as appropriate. HHS’s comments are reprinted in appendix I. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of HHS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Major contributors to this report are listed in appendix II. Appendix I: Comments from the Department of Health and Human Services Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Carolyn L. Yocom, (202) 512-7114 or [email protected]. Staff Acknowledgments In addition to the contact named above, individuals making key contributions to this report include Susan Anthony (Assistant Director), Manuel Buentello (Analyst-in-Charge), Anna Bonelli, and Robin Burke. Also contributing were Muriel Brown, Drew Long, and Jennifer Rudisill.
Why GAO Did This Study GAO and others have identified insufficiencies in state-reported Medicaid data that affect CMS's ability to oversee the program effectively. Recent increases in improper payments—estimated at $36.7 billion in fiscal year 2017—exacerbate concerns about program oversight. CMS officials identified the T-MSIS initiative, which began in 2011, as its main effort to improve Medicaid data, and cited aspects of T-MSIS aimed at improving the scope and quality of state-reported data. GAO reported in January 2017 that it is unclear when T-MSIS data will be available from all states; how CMS will ensure data quality; or how the data will be used to enhance oversight of Medicaid. GAO was asked to review states' experiences with T-MSIS implementation and planned uses of T-MSIS data. This report examines (1) states' experiences regarding T-MSIS implementation, and (2) challenges to CMS's and states' use of T-MSIS data for oversight. GAO reviewed federal laws, guidance, and internal control standards; reviewed documents and interviewed officials from eight states, selected based on their T-MSIS reporting status, location, program expenditures, and other factors; and interviewed CMS officials, CMS contractors, and individuals involved with other states' T-MSIS efforts. What GAO Found As of November 2017, 49 states had begun reporting Transformed Medicaid Statistical Information System (T-MSIS) data—a significant increase from 18 states reporting these data one year earlier. All eight states GAO reviewed identified converting their data to the T-MSIS format on an element-by-element basis as the main challenge in their reporting efforts. For some data elements, states had to expand or collapse their data to match the T-MSIS format. With the continued implementation of T-MSIS, the Centers for Medicare & Medicaid Services (CMS) has taken an important step toward developing a reliable national repository for Medicaid data. 
However, data challenges have hindered states' and CMS's use of the T-MSIS data for oversight. None of the six selected states reporting T-MSIS data in August 2017 was reporting complete data. These states said that certain unreported elements were contingent on federal or state actions, and others were not applicable to the state's Medicaid program. States did not always specify in their documentation whether they planned to report elements in the future or when they would report complete data. Six of eight selected states expressed concerns about the comparability of T-MSIS data across states. Further, officials from all eight selected states were interested in CMS facilitating information sharing among states. CMS has not compiled and shared information about states' data limitations, which would help states accurately compare their T-MSIS data to other states' T-MSIS data. CMS has taken steps for the initial use of T-MSIS data, but does not have a plan or associated time frames for using these data for oversight. As a result, important CMS goals for T-MSIS, such as reducing states' reporting burden and enhancing program integrity activities, are not being fully realized. What GAO Recommends GAO recommends that CMS (1) improve T-MSIS's completeness and comparability to expedite its use, and (2) articulate a specific oversight plan. The Department of Health and Human Services concurred with GAO's recommendations.
gao_GAO-18-315
Background Definitions of and terms for recovery housing can vary, and recovery housing may differ in the types of services offered and resident requirements. Alcohol- and drug-free housing for individuals recovering from SUD may be referred to as "recovery residences," "sober homes," or by other terms. NARR has defined four levels of recovery housing (I through IV) based on the type and intensity of recovery support and staffing they offer, up to and including residential, or clinical, treatment centers. For the purposes of this report, we use the term "recovery housing" to refer to peer-run, nonclinical living environments for individuals recovering from SUD in general, and "recovery homes" to refer to specific homes. These homes generally are not considered to be residential treatment centers and are not eligible to be licensed providers for the purposes of billing private insurance or public programs—such as Medicaid and Medicare—and residents typically have to pay rent and other housing expenses themselves. Recovery home residents may separately undergo outpatient clinical SUD treatment, which is typically covered by health insurance. In addition, recovery homes may encourage residents to participate in mutual aid or self-help groups (e.g., 12-step programs such as Alcoholics Anonymous) and may require residents to submit to drug screenings to verify their sobriety. Residents may be referred to recovery homes by treatment providers or the criminal justice system, or may voluntarily seek out such living environments. In addition to SAMHSA, two national nonprofit organizations that have missions dedicated to recovery housing are NARR and Oxford House, Inc. NARR promotes standards for recovery housing, provides training and education to recovery housing operators and others, and conducts research and advocacy related to recovery housing to support individuals in recovery from SUD.
As of January 2018, NARR's membership comprised 27 state affiliates that work to promote and support NARR's quality standards for recovery housing and other activities in their states. Of the 27 NARR affiliates, 15 were actively certifying recovery homes. Oxford House, Inc. connects individual Oxford Houses across the United States and in other countries. Individual Oxford Houses, which operate under charters granted by Oxford House, Inc., are democratically run, self-supporting homes. According to the Oxford House manual and related documents, all Oxford Houses are rentals, and residents are responsible for sharing expenses, paying house bills on time, and immediately evicting residents who drink or use illicit drugs while living in the house. Oxford House, Inc. maintains a directory of houses on its website, and individuals can search this directory for vacancies by state. Oxford Houses align with NARR's definition of level I residences; that is, peer-run, self-funded, typically single-family homes where residents have an open-ended length of stay. SAMHSA and other organizations recognize recovery housing as an important step in SUD treatment and recovery. Research has shown positive outcomes of recovery housing on long-term sobriety, such as at 6-, 12-, and 18-month follow-up. However, according to SAMHSA and NARR officials, much of the available research on the effectiveness of recovery housing focuses on the Oxford House population, and research on other types of recovery homes is limited. Nationwide Prevalence of Recovery Housing Is Unknown, but National Organizations Collect Data on the Number and Characteristics of a Subset of Recovery Homes The nationwide prevalence of recovery housing is unknown because there are no comprehensive data regarding the number of recovery homes in the United States, although NARR and Oxford House, Inc. collect data on a subset of recovery homes across the United States.
Specifically, NARR collects data only on recovery homes that seek certification from one of its 15 state affiliates that certify homes. However, NARR-certified homes may represent only a portion of existing recovery homes, as NARR does not know how many such homes are uncertified. As of January 2018, NARR reported that its affiliates had certified almost 2,000 recovery homes, which had the capacity to provide housing to over 25,000 individuals; NARR-certified recovery homes include recovery housing across all four NARR levels, including residential treatment centers that provide clinical services, which are outside the scope of our study. Oxford House, Inc. collects data annually on the prevalence and characteristics of Oxford Houses across the United States. In its 2017 annual report, Oxford House, Inc. reported that there were 2,287 Oxford Houses in 44 states that provided housing to a total of 18,025 individuals. Of the total number of Oxford Houses in 2017, 71 percent served men and 29 percent served women, with the average resident aged 37 years. The Oxford House, Inc. report also provides information on other characteristics of Oxford House residents. For example, of the 18,025 Oxford House residents in 2017, Oxford House, Inc. reported the following: 79 percent were addicted to drugs and alcohol; 21 percent were addicted to alcohol only. 77 percent had been incarcerated. 68 percent had previously experienced homelessness. 12 percent were veterans. 87 percent were employed. 98 percent regularly attended 12-step meetings, such as Alcoholics Anonymous or Narcotics Anonymous. 45 percent attended weekly outpatient counseling in addition to 12-step meetings. The average length of sobriety was 13.4 months. Most States We Reviewed Have Investigated Potential Fraud Related to Recovery Housing and Taken Steps to Enhance Oversight The five states we selected for review have taken actions to investigate and oversee recovery housing.
Four of the five states have conducted law enforcement investigations of recovery homes in their states and some of these investigations have resulted in arrests and changes to public and private insurance policies. In addition to actions taken in response to state investigations, three of the five states in our review have also taken steps to formally enhance their oversight of recovery homes, and the other two states have taken other steps intended to increase consistency, accountability, and quality across recovery homes. Four of Five States Have Conducted Investigations of Recovery Housing Officials from four of the five states we reviewed (Florida, Massachusetts, Ohio, and Utah) told us that since 2007, state agencies have conducted, or are in the process of conducting, law enforcement investigations of unscrupulous behavior and potential insurance fraud related to recovery housing, and outcomes of some of these investigations included criminal charges and changes to health insurance policies. An official from the fifth state, Texas, told us that the state had not conducted any recent law enforcement investigations related to recovery housing. This official, from the Texas Department of Insurance, told us that the department received two fraud reports in 2014 and 2016 related to recovery homes and that the state was unable to sufficiently corroborate the reports to begin investigations. Across the four states, officials told us that potential insurance fraud may have relied on unscrupulous relationships between SUD treatment providers, including laboratories, and recovery housing operators, because recovery homes are not considered eligible providers for the purposes of billing health insurance. For example, treatment providers may form unscrupulous relationships with recovery housing operators who then recruit individuals with SUD in order to refer or require residents to see the specific SUD treatment providers. 
This practice is known as patient brokering, for which recovery housing operators receive kickbacks such as cash or other remuneration from the treatment provider in exchange for patient referrals. The extent of potential fraud differed across the four states, as discussed below. Officials from several state agencies and related entities described investigations into fraud related to recovery housing in southeastern Florida as extensive, although the scope of the fraud within the industry is unknown. In 2016, the state attorney for the 15th judicial circuit (Palm Beach County) convened a task force composed of law enforcement officials tasked with investigating and prosecuting individuals engaged in fraud and abuse in the SUD treatment and recovery housing industries. The task force found that unscrupulous recovery housing operators or associated SUD treatment providers were luring individuals into recovery homes using deceptive marketing tactics. Deceptive marketing practices included online or other materials that willfully misdirected individuals or their family members to recruiters with the goal of sending these individuals to specific treatment providers, in order to receive payments from those treatment providers for patient referrals. According to officials from the Florida state attorney’s office, these individuals, often from out of state, were lured with promises of free airfare, rent, and other amenities to recover in southern Florida’s beach climate. Recruiters brokered these individuals to SUD treatment providers, who then billed their private insurance plans for extensive and medically unnecessary urine drug testing and other services. Officials from the Florida state attorney’s office told us that SUD treatment providers were paying $300 to $500 or more per week to recovery housing operators or their staff members for every patient they referred for treatment. 
In addition, these officials cited one case in which a SUD treatment provider billed a patient’s insurance for close to $700,000 for urine drug testing in a 7-month period. Officials from the state attorney’s office noted that the recovery homes that the task force was investigating were not shared housing in the traditional, supportive sense, such as Oxford Houses, where residents equally share in the rent and division of chores, but rather existed as “warehouses” intended to exploit vulnerable individuals. As a result of these investigations, as of December 2017, law enforcement agencies had charged more than 40 individuals primarily with patient brokering, with at least 13 of those charged being convicted and fined or sentenced to jail time, according to the state attorney’s office. In addition, the state enacted a law that strengthened penalties under Florida’s patient brokering statute and gave the Florida Office of Statewide Prosecution, within the Florida Attorney General’s Office, authority to investigate and prosecute patient brokering. An official from the Massachusetts Medicaid Fraud Control Unit told us that the unit began investigating cases of Medicaid fraud in the state on the part of independent clinical laboratories associated with recovery homes in 2007. The unit found that, in some cases, the laboratories owned recovery homes and were self-referring residents for urine drug testing. In other cases, the laboratories were paying kickbacks to recovery homes for patient referrals for urine drug testing that was not medically necessary. According to the Medicaid Fraud Control Unit official, as a result of these investigations the state settled with nine laboratories between 2007 and 2015 for more than $40 million in restitution. In addition, the state enacted a law in 2014 prohibiting clinical laboratory self-referrals and revised its Medicaid regulations in 2013 to prohibit coverage of urine drug testing for the purposes of residential monitoring. 
Ohio has also begun to investigate an instance of potential insurance fraud related to recovery housing, including patient brokering and excessive billing for urine drug testing. Officials from the Ohio Medicaid Fraud Control Unit told us that the unit began investigating a Medicaid SUD treatment provider for paying kickbacks to recovery homes in exchange for patient referrals, excessive billing for urine drug testing, and billing for services not rendered, based on an allegation the unit received in September 2016. As of January 2018, the investigation was ongoing, and the Ohio Medicaid Fraud Control Unit had not yet taken legal or other action against any providers. Officials from other state agencies and related state entities, such as the state substance abuse agency and the state NARR affiliate, were not aware of any investigations of potential fraud on the part of recovery housing operators or associated treatment providers when we spoke with them and stated that this type of fraud was not widespread across the state. In August 2017, officials from the Utah Insurance Department told us that the department is conducting ongoing investigations of private insurance fraud similar to the activities occurring in Florida, as a result of a large influx of complaints and referrals it received in 2015. These officials told us that the department has received complaints and allegations that SUD treatment providers are paying recruiters to bring individuals with SUD who are being released from jail to treatment facilities or recovery homes; billing private insurance for therapeutic services, such as group or equine therapy, that are not being provided, in addition to billing frequently for urine drug testing; and encouraging patients to use drugs prior to admission to qualify patients and bill their insurance for more intensive treatment. 
In addition, insurance department officials told us that they believed providers are enrolling individuals in private insurance plans without telling them and paying their premiums and copays. According to these officials, when doing so, providers may lie about patients' income status in order to qualify them for more generous plans. Officials found that providers were billing individual patients' insurance $15,000 to $20,000 a month for urine drug testing and other services. Officials noted that they suspect that the alleged fraud was primarily being carried out by SUD treatment providers and treatment facilities that also own recovery homes. Officials told us that the department has not been able to file charges against any treatment providers because it has been unable to collect the necessary evidence to do so. However, according to insurance department officials, the state legislature enacted legislation in 2016 that gives insurers and state regulatory agencies, such as the state insurance department and state licensing office, the authority to review patient records and investigate providers that bill insurers. This authority may help the insurance department and other state regulatory agencies better conduct investigations in the future.

Three States Have Established Oversight Programs, and Two States Are Taking Other Steps to Support Recovery Housing

In addition to actions taken in response to state investigations, three of the five states in our review—Florida, Massachusetts, and Utah—have taken steps to formally increase oversight of recovery housing by establishing state certification or licensure programs. Florida enacted legislation in 2015 and Massachusetts enacted legislation in 2014 that established voluntary certification programs for recovery housing. Florida established a two-part program for both recovery homes and recovery housing administrators (i.e., individuals acting as recovery housing managers or operators).
According to officials from the Florida state attorney’s office and Massachusetts Medicaid Fraud Control Unit, their states established these programs in part as a result of state law enforcement investigations. In 2014, Utah enacted legislation to establish a mandatory licensure program for recovery housing. According to officials from the Utah substance abuse agency and the state licensing office, the state established its licensure program to, in part, protect residents’ safety and prevent their exploitation and abuse. Although state recovery housing programs in Florida and Massachusetts are voluntary and recovery homes and their administrators can operate without being certified, there are incentives for homes to become certified under these states’ programs, as well as incentives to become licensed under Utah’s program. Specifically, all three states require that certain providers refer patients only to recovery homes certified or licensed by their state program. Thus, uncertified and unlicensed homes in Florida, Massachusetts, and Utah would be ineligible to receive patient referrals from certain treatment providers. Further, state officials told us that state agencies are taking steps to ensure providers are making appropriate referrals. For example, according to officials from the Florida substance abuse agency, treatment providers may refer patients to certified recovery homes managed by certified recovery home administrators only and must keep referral records. These officials also told us that the state substance abuse agency can investigate providers to ensure they are referring patients to certified homes and issue fines or revoke providers’ licenses if the program finds providers are referring patients to uncertified homes. Recovery homes may also view certification as a way to demonstrate that they meet quality standards. 
For example, the official from the Massachusetts NARR affiliate told us that some residential treatment centers that are required to be licensed by the state are also seeking certification to demonstrate that they meet the NARR affiliate’s quality standards. To become state-certified or licensed, recovery homes in Florida, Massachusetts, and Utah must meet certain program requirements— including staff training, documentation submissions (such as housing policies and code of ethics), and onsite inspections to demonstrate compliance with program standards—though specific requirements differ across the three states. For example, while all three state programs require recovery housing operators or staff to complete training, the number of hours and training topics differ. In addition, for recovery homes to be considered certified in Florida, they must have a certified recovery housing administrator. Similar to Florida’s certification program for the homes, individuals seeking administrator certification must also meet certain program requirements, such as training in recovery residence operations and administration and legal, professional, and ethical responsibilities. Features of the state-established oversight programs may also differ across the three states, including program type, type of home eligible for certification or licensure, how states administer their programs, and initial fees. See table 1 for additional information on features of state- established oversight programs for recovery housing. State-established oversight programs in Florida, Massachusetts, and Utah also include processes to monitor certified or licensed recovery homes and take action when homes do not comply with program standards. 
For example, an official from the Florida Association of Recovery Residences—the state NARR affiliate and organization that certifies recovery homes in Florida—told us that the entity conducts random inspections to ensure that recovery homes maintain compliance with program standards. State-established oversight programs in the three states also have processes for investigating grievances filed against certified or licensed recovery homes. Further, officials from certifying or licensing bodies in all three states—the Florida Association of Recovery Residences, Massachusetts Alliance for Sober Housing, and the Utah Office of Licensing—told us their organizations may take a range of actions when they receive complaints or identify homes that do not comply with program standards, from issuing recommendations for bringing homes into compliance to revoking certificates or licenses. According to officials from the certifying body in Florida, the entity has revoked certificates of recovery homes that have acted egregiously or have been nonresponsive to corrective action plans. Officials from the certifying and licensing bodies in Massachusetts and Utah told us that these entities had not revoked certificates or licenses when we spoke to them for this review, but may have assisted homes with coming into compliance with certification standards or licensure requirements. Officials from Ohio and Texas told us that their states had not established state oversight programs like those that exist in Florida, Massachusetts, and Utah, but their states had provided technical assistance and other resources to recovery homes that were intended to increase consistency, accountability, and quality:

Officials from the Ohio substance abuse agency told us that since 2013 the state has revised its regulatory code to define recovery housing and minimum requirements for such housing.
Officials also told us that the agency does not have authority to establish a state certification or licensure program for recovery housing. According to these officials, the state legislature wanted to ensure that Ohio's recovery housing community maintained its grassroots efforts and did not want a certification or licensure program to serve as a roadblock to establishing additional homes. However, officials from the Ohio substance abuse agency told us that the agency encourages recovery homes to seek certification by the state NARR affiliate—Ohio Recovery Housing—to demonstrate quality. In addition, these officials told us that the state substance abuse agency also provided start-up funds for Ohio Recovery Housing and has continued to fund the affiliate for it to provide training and technical assistance, as well as to continue certifying recovery homes. According to officials from Ohio Recovery Housing, the NARR affiliate regularly provides the state substance abuse agency with a list of newly-certified recovery homes, as well as updates on previously-certified homes, as part of ongoing efforts to develop a recovery housing locator under its contract with the agency.

Officials from the Texas substance abuse agency noted that establishing a voluntary certification program, such as one that certifies homes according to NARR's quality standards, would be beneficial. However, the state legislature has not enacted any legislation establishing such a program to date. The agency is in the process of developing guidance for providers on where and how to refer their patients to recovery housing, which includes a recommendation to send patients to homes certified by the Texas NARR affiliate, but officials could not tell us when they expected the guidance to be finalized.
Certain SAMHSA Grant Funding Can Be Used for Recovery Housing, and Selected States Have Used SAMHSA and State Funding to Support Recovery Housing

SAMHSA provides some funding for states to establish recovery homes. Of the five states we reviewed, two used SAMHSA funding and four used state funding to help support recovery housing from fiscal year 2013 through fiscal year 2017.

SAMHSA Provides Funding for Recovery Housing and Has Undertaken Other Initiatives to Support Recovery Housing

SAMHSA makes funding available to states for recovery housing through certain grant programs for SUD prevention and treatment. Specifically, under its Substance Abuse Prevention and Treatment block grant, which totaled approximately $1.9 billion in fiscal year 2017, SAMHSA makes at least $100,000 available annually to each state to provide loans for recovery housing. States that choose to use this funding may provide up to $4,000 in loans to each group that requests to establish alcohol- and drug-free housing for individuals recovering from SUD. The loan can be used for start-up costs such as security deposits and must be repaid within 2 years. Loans are to be made only to nonprofit entities that agree to requirements for the operation of the recovery homes outlined in the authorizing statute, namely that (1) the homes must prohibit the use of alcohol and illegal drugs; (2) the homes must expel residents who do not comply with this prohibition; (3) housing costs, such as rent and utilities, are to be paid by the residents; and (4) residents are to democratically establish policies to operate the homes. According to SAMHSA officials, states are prohibited from using block grant funding other than the loan funding for recovery housing. However, the block grant application does not require states to provide a description of whether and how they will use the loan.
SAMHSA has also made funding for recovery housing available under the agency’s State Targeted Response to the Opioid Crisis grant (opioid grant), a 2-year grant program under which SAMHSA anticipated awarding up to $485 million for each of fiscal years 2017 and 2018. The opioid grant is intended to supplement states’ existing opioid prevention, treatment, and recovery support activities, and SAMHSA requires most of states’ funding to be used for opioid use disorder treatment services, such as expanding access to clinically appropriate, evidence-based treatment. States may also use their opioid grant funding for recovery housing and recovery support services—which SAMHSA recognizes as part of the continuum of care—such as establishing recovery homes and providing peer mentoring. (See the next section of this report for information on how states have used SAMHSA funding.) In addition to providing funding, SAMHSA has undertaken other initiatives related to recovery housing, including an assessment of needs for certifying recovery housing in the future. In 2017, SAMHSA held two recovery housing meetings that covered topics including research on emerging best practices in recovery housing, state recovery housing programs, available funding for recovery housing, and challenges that state entities have experienced regulating recovery homes in their states. SAMHSA contracted with NARR at the end of fiscal year 2017 to provide technical assistance and training to recovery housing organizations, managers, and state officials on NARR’s quality standards and certification process, including presentations at three to four national and regional SUD conferences, such as those held by the National Association of State Alcohol and Drug Abuse Directors and other associations. NARR is also required to submit a final report to SAMHSA before the 1-year contract ends with recommendations for future needs for certifying recovery housing and establishing additional NARR state affiliates. 
SAMHSA officials told us that this is the agency's first contract with NARR, and SAMHSA plans to conduct an internal assessment at the end of fiscal year 2018 to determine next steps.

Selected States Have Used SAMHSA and State Funding for Recovery Housing

Two of the five states we reviewed used SAMHSA funding to help support recovery housing in their states from fiscal years 2013 through 2017, according to state officials. Texas was the only state in our review that used the loan funding available under SAMHSA's block grant. Officials from the Texas substance abuse agency told us that from fiscal years 2013 through 2017, the state used at least $150,000 of this funding annually to increase the number of Oxford Houses in the state and hire Oxford House outreach workers. Texas and Ohio also used a portion of their SAMHSA opioid grant funding for recovery housing. For example, in fiscal year 2017, officials from Ohio's substance abuse agency told us that the state used $25,000 of its approximately $26 million in opioid grant funding to support and train recovery housing operators, with the goal of increasing the number of recovery homes that accept individuals who receive medication-assisted treatment. The other states we reviewed—Florida, Massachusetts, and Utah—did not opt to use the loan funding available under the SAMHSA block grant and did not use their SAMHSA opioid grant funding for recovery housing services, according to state officials. Four of the five states in our review—Florida, Massachusetts, Ohio, and Texas—have used state funding to establish and support recovery housing and recovery housing-related activities. For example, officials from the Texas substance abuse agency told us that, since 2013, the state legislature has authorized at least $520,000 annually for recovery housing.
In fiscal years 2015 through 2017, the state used this funding for personnel costs and related expenditures, such as hiring seven Oxford House outreach workers and establishing a state loan fund of $200,000 to supplement the SAMHSA loan funding to support the establishment of an additional 25 new Oxford Houses. Officials from the Massachusetts substance abuse agency told us that the agency has received annual state appropriations in the amount of $500,000 since fiscal year 2015 to contract with the entities that inspect and certify recovery homes for the state certification program and to contract with the state NARR affiliate for technical assistance with developing recovery housing certification standards and supporting the certification process. State substance abuse agency officials from the fifth state, Utah, told us that the state did not use state funding to establish recovery homes during fiscal years 2013 through 2017. See table 2 for states' use of SAMHSA and state funding for recovery housing activities.

Agency Comments

We provided a draft of this report to HHS. HHS did not have any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Katherine M. Iritani, Director, Health Care at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: State Agencies and Related Entities GAO Interviewed

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tom Conahan, Assistant Director; Shana R. Deitch, Analyst-in-Charge; Kristin Ekelund; and Carmen Rivera-Lowitt made key contributions to this report. Also contributing were Lori Achman, Jennie Apter, Colleen Candrl, and Emily Wilson.
Why GAO Did This Study

Substance abuse and illicit drug use, including the use of heroin and the misuse of or dependence on alcohol and prescription opioids, is a growing problem in the United States. Individuals with SUD may face challenges in remaining drug- and alcohol-free. Recovery housing can offer safe, supportive, drug- and alcohol-free housing to help these individuals maintain their sobriety and can be an important resource for individuals recovering from SUD. However, the media has reported allegations about potentially fraudulent practices on the part of some recovery homes in some states. GAO was asked to examine recovery housing in the United States. This report examines (1) what is known about the prevalence and characteristics of recovery housing across the United States; (2) investigations and actions selected states have undertaken to oversee such housing; and (3) SAMHSA funding for recovery housing, and how states have used this or any available state funding. GAO reviewed national and state data, federal funding guidance, and interviewed officials from SAMHSA, national associations, and five states—Florida, Massachusetts, Ohio, Texas, and Utah—selected based on rates of opioid overdose deaths, dependence on or abuse of alcohol and other drugs, and other criteria. State information is intended to be illustrative and is not generalizable to all states.

What GAO Found

Nationwide prevalence of recovery housing—peer-run or peer-managed drug- and alcohol-free supportive housing for individuals in recovery from substance use disorder (SUD)—is unknown, as complete data are not available. National organizations collect data on the prevalence and characteristics of recovery housing but only for a subset of recovery homes.
For example, the National Alliance for Recovery Residences, a national nonprofit and recovery community organization that promotes quality standards for recovery housing, collects data only on recovery homes that seek certification by one of its 15 state affiliates that actively certify homes. The number of homes that are not certified by this organization is unknown. Four of the five states that GAO reviewed—Florida, Massachusetts, Ohio, and Utah—have conducted, or are in the process of conducting, investigations of recovery housing activities in their states, and three of these four states have taken formal steps to enhance oversight. The fifth state, Texas, had not conducted any such investigations at the time of GAO's review. Fraudulent activities identified by state investigators included schemes in which recovery housing operators recruited individuals with SUD to specific recovery homes and treatment providers, who then billed patients' insurance for extensive and unnecessary drug testing for the purposes of profit. For example, officials from the Florida state attorney's office told GAO that SUD treatment providers were paying $300 to $500 or more per week to recovery housing operators for every patient they referred for treatment and were billing patients' insurance for hundreds of thousands of dollars in unnecessary drug testing over the course of several months. Some of these investigations have resulted in arrests and other actions, such as changes to insurance payment policies. Florida, Massachusetts, and Utah established state certification or licensure programs for recovery housing in 2014 and 2015 to formally increase oversight. The other two states in GAO's review—Ohio and Texas—had not passed such legislation but were providing training and technical assistance to recovery housing managers. 
The Substance Abuse and Mental Health Services Administration (SAMHSA), within the Department of Health and Human Services (HHS), administers two federal health care grants for SUD prevention and treatment that states may use to establish recovery homes and for related activities. First, under its Substance Abuse Prevention and Treatment block grant, SAMHSA makes at least $100,000 available annually to each state to provide loans to organizations seeking to establish recovery homes. Second, states have discretion to use SAMHSA funding available under a 2-year grant for 2017 and 2018 primarily for opioid use disorder treatment services, to establish recovery homes or for recovery housing-related activities. Of the five states GAO reviewed, only two, Texas and Ohio, have used any of their SAMHSA grant funds for these purposes. Four of the five states—Florida, Massachusetts, Ohio, and Texas—have also used state general revenue funds to establish additional recovery homes. HHS had no comments on this report.
Background

The Federal Acquisition Streamlining Act of 1994 established a preference within the federal government to procure commercial items rather than items developed exclusively for the government. Between fiscal years 2013 and 2018, Congress passed additional legislation to address various aspects of how DOD defines and purchases commercial items, and how DOD makes commercial item and price reasonableness determinations. For example, legislation passed in 2015 included a provision stating DOD contracting officers may presume that previously established commercial item determinations shall serve as determinations for future procurements of an item. The law further stipulated that if a prior determination is not used for an item previously determined to be commercial, the contracting officer must request a review by the head of the contracting activity to either confirm that it is still valid or issue a revised determination. In January 2018, DOD revised its regulations and corresponding procedures, guidance, and information related to the procurement of commercial items to reflect recent legislative changes. The DFARS was updated to provide guidance to contracting officers for making price reasonableness determinations, promote consistency in making commercial item determinations (including updating guidance regarding the use of prior determinations), and expand opportunities for nontraditional defense contractors to do business with DOD. The department also updated its Guidebook for Acquiring Commercial Items, which includes information on how to define, determine, and price commercial items, to reflect the regulatory changes. Also in January 2018, a DOD advisory panel established to help streamline the defense acquisition process released a report with recommendations to revise definitions related to commercial buying and minimize government-unique terms applicable to commercial buying.
DOD Process for Making Determinations When Acquiring Commercial Items

During the pre-award process for commercial procurement actions over $1 million, two distinct determinations take place:

1. a contracting officer must determine in writing whether a product or service being procured is commercial, and
2. the contracting officer must determine if the offered price is fair and reasonable.

According to the DOD Guidebook for Acquiring Commercial Items, the government's ability to acquire affordable products and services significantly improves when contracting officers have in-depth knowledge of the market. The guidebook establishes that market research should be an ongoing effort throughout the commercial item procurement process to: (1) identify the industry and market for capabilities or technologies; (2) identify prices at which the capabilities or technologies have been sold or offered for sale; and (3) continuously capture market information at different points to ensure the best acquisition. When determining a fair and reasonable price, market research should be conducted in order to compare the proposed price to market pricing. Figure 1 illustrates the process contracting officers generally follow to make commercial item and price reasonableness determinations for more complex procurements. The contracting officer is ultimately responsible for making these determinations, but, as appropriate, he or she may seek the assistance of the Defense Contract Audit Agency (DCAA), military service organizations such as the Navy Price Fighters or the Air Force Pricing Center of Excellence, or the DCMA Commercial Item Group. The DCMA Commercial Item Group, which became operational in 2016, provides recommendations on commercial item determinations within DOD. It has created six Commercial Item Centers of Excellence, each of which has its own area of market expertise, to assist contracting officers in making timely and consistent commercial item determinations.
These centers are staffed with engineers and price/cost analysts who advise and make recommendations on commerciality based on market analysis, commercial item reviews and determinations, and commercial pricing analysis. Additionally, the centers provide training and assistance to the DOD acquisition community on various techniques and tools used to evaluate commercial items and commercial item pricing. In order to make a commerciality determination, contracting officers may need information specifying whether the items have been sold or offered for sale to the general public. And, as noted above, the contracting officer must determine that the government is getting a fair and reasonable price. Some of this information may be acquired through market research; however, as appropriate, the contracting officer may require or request that the contractors submit information, such as price lists and sales invoices, with their offers or during the evaluation. For more details on the information and data required for commercial item and price reasonableness determinations at different times in the procurement process, see appendix II.

Case Studies Identified Four Interrelated Factors That Influenced Commercial Item and Price Reasonableness Determinations, but Generally Did Not Delay Awards

In the case studies we examined, we found four interrelated factors that influenced how DOD determines if an item is commercial and whether the price is fair and reasonable, and that each factor had its own set of challenges:

- Availability of marketplace information
- Ability to obtain contractor data
- Extent of modifications to an item needed to satisfy DOD
- Reliability of prior commercial item determinations

Despite these challenges, contract award was not typically delayed.
In other cases where DOD was not able to obtain the information or data it needed to make a determination, the department’s options, such as not awarding the contract or exploring other suppliers, were often not feasible because DOD was working in a sole-source environment and forgoing the procurement was not an option. Limited Market Information Can Complicate Contracting Officers’ Commercial Item and Price Reasonableness Determinations When there is a healthy marketplace of items and services that the government wants to buy, contracting officers can more readily support their commerciality and price reasonableness determinations. However, in our review, we identified cases in which limited market information made such determinations more involved. For example, the Army was working with a contractor to acquire repair and upgrade services for navigation systems. The contractor said the services were commercial, but when the contracting officer conducted market research to determine the commerciality of the services, she found no similar services available in the commercial market. According to a contracting official, the Army’s particular units had to be nuclear hardened to withstand an explosion and needed some functional interfaces added, which made finding a similar commercial service difficult. In the end, the DCMA’s Commercial Item Group officials completed an on-site review of the manufacturing process to gain an in-depth understanding of the services provided. Using this additional information, the contracting officer deemed the services commercial. In contrast, for a previous report on commercial item acquisitions, we reviewed selected Air Force contracts for information technology services and video teleconferencing design and installation.
Because these items and services are available in the commercial marketplace, the availability of information helped the contracting officers efficiently determine that the items were commercial and that the prices were fair and reasonable. Obtaining Information from Contractors and Subcontractors Can Be Challenging Contracting officials from our case studies had difficulty obtaining information from contractors when they could not find adequate information in the marketplace. This difficulty occurred for a number of reasons, including contractors’ own challenges in obtaining information from their subcontractors. While several of the contracts we reviewed showed that either the prime contractor or subcontractor eventually provided sufficient information, obtaining this information was not without difficulties. For example: In a $1.7 billion Army sole-source contract for helicopter engines, the prime contractor asserted that two small engine parts—provided by a subcontractor—were commercial, but did not provide any documents to support its assertion. After several requests for information on commercial sales data, the prime contractor provided invoices for a commercial engine that contained similar engine parts. The prime contractor representative told us the reason it took so long to provide the requested information was because the subcontractor would not provide commercial sales data. As a result, the prime contractor needed to research commercial engines that used similar parts in order to support the commerciality assertion. In an $873 million Air Force sole-source contract for aircraft engines, the contracting officer had difficulty obtaining commercial sales data through market research for engine castings. The prime contractor did not initially provide support for its assertion that the castings were commercial, stating that it had difficulty obtaining supporting information from its subcontractor.
Air Force officials visited the subcontractor’s facility to determine that the item was a modified version of a commercial item and was therefore commercial. In a $53 million Navy sole-source contract for KC-130J aircraft propeller engineering and sustainment services, the contracting officer told us she had difficulty determining if the proposed prices for these commercial services were fair and reasonable because the contractor provided invoices with the prices redacted. After several months of back and forth, the contractor provided unredacted invoices for similar services, which the contracting officer used to determine price reasonableness. A contractor representative told us that the contractor initially provided the redacted invoices in order to quickly respond to the Navy’s request for information, but that additional time was needed to evaluate if releasing the unredacted price information would violate a contractual agreement the contractor had with its suppliers. In other cases, the contractor provided information or data that the contracting officer considered insufficient to support a commercial item or price reasonableness determination. For example: In an F-15 aircraft production contract, Air Force contracting officials had difficulty determining whether the prices of oil bypass valves were fair and reasonable due to redactions in data the subcontractor provided. The subcontractor’s proposed price was four times more than it had previously charged the government for the same item, according to contracting officials. To support its prices, the subcontractor provided a commercial price list and customer invoices with redacted customer information, which the subcontractor considered to be proprietary. According to contracting officials, the redacted invoices did not provide enough detail to confirm whether non-governmental end users were paying a price similar to the proposed price in the Air Force contract. 
The subcontractor subsequently provided a customer list associated with the redacted invoices. Also, while the subcontractor showed that its proposed price was lower than its commercial price list, contracting officers did not consider subcontractor-provided support sufficient to explain why the proposed price was higher than what the government had previously paid. According to contracting officials, the prime contractor absorbed the price difference between the subcontractor’s proposed price and what the Air Force paid for the valves. On a $2 million Army task order for navigation software upgrades on Global Positioning System (GPS) units used in missiles, the DCMA Commercial Item Group obtained redacted invoices and quotes from a subcontractor to determine commerciality. But this information did not provide enough detail to substantiate the commerciality determination. A subcontractor representative told us that the company provided redacted information to the government because contractual agreements with its customers required them to not reveal the customer name. After evaluating multiple factors, the DCMA Commercial Item Group concluded that the GPS units did not match the form, fit, or function of the commercial ones, and recommended that this service and item were not commercial. Contractor representatives cited multiple reasons why they were unable to provide data (see text box). Examples of Reasons Contractors Cited for Not Providing Data: One prime contractor told us that some subcontractors are unwilling to provide information, such as unredacted invoices, to them and therefore prime contractors cannot provide this information to the government. Some subcontractors we interviewed explained that certain information, such as customer names and prices paid in invoices, is considered proprietary data. 
One subcontractor representative said that while the company cannot provide unredacted invoices to a prime contractor, it is willing to provide this information directly to the government, such as the DCMA Commercial Item Group, which can verify the content of the invoices at the contractors’ facilities. Additionally, one contractor representative told us that when a previously determined commercial item is later determined noncommercial, specific cost or pricing data can be difficult to gather for companies that operate primarily in the commercial market. This is because these companies were not previously required to collect and provide this cost or pricing data to the government. For example, the subcontractor that produces an item for the Army told us that this item had been previously purchased by the government on a commercial basis under an agreement that was later canceled in 2014. When the government later determined this item was noncommercial, the subcontractor had difficulty providing detailed cost data for the government’s units because they are procured on the same manufacturing line as their commercial units. According to contractor officials, the costs for subcomponents and labor hours for engineers that work on these units are pooled together with costs for the commercial business. A contracting officer’s ability to obtain data is further affected once an item has been deemed commercial. Several contracting officers told us that once an item is determined commercial, contractors are less willing to provide any pricing data. While certified cost and pricing data cannot be required, the government can request uncertified data if needed to make a price reasonableness determination. As previously noted, we found cases in which contractor-provided information included redacted invoices as evidence that an item was commercial.
When the government later requested uncertified cost and pricing data to determine price reasonableness—after exhausting government and public market research resources—the contractors were not willing or able to provide the data. In most cases contractors eventually provided data after multiple requests. Modified Items Require Additional Steps Our case studies showed that determining commerciality and price reasonableness for items that are modified from the commercial variant can be difficult, in part, because what can be deemed a “minor modification” is subject to interpretation. The commercial item definition includes some types of commercial items that have minor modifications not customarily available in the commercial marketplace, but that are made to meet federal government requirements. For our case studies, when prime contractors or subcontractors claimed a modified item was commercial, contracting officers had to take extra steps to determine whether the commerciality assertions appropriately met the commercial item definition, such as completing a comparative analysis of commercial items to the modified item. However, determinations in our case studies were challenging to make because the items were generally acquired through sole-source procurements and had no identified commercial market. This made it more difficult for the contracting officer to make a determination based on market research. In one of our case studies, there was a difference of opinion within DOD as to whether a modified item was commercial. The prime contractor for an Army sole-source contract procuring modified fuel systems to meet military safety, crashworthiness, and ballistic tolerance requirements for Blackhawk helicopters claimed that its modified fuel system was a commercial item. However, the contracting officer found that no commercial market existed for this item and therefore had to take additional steps.
To make a commerciality determination, the contracting officer sought assistance from the DCMA Commercial Item Group, which recommended that the fuel system was not commercial. The contracting officer submitted a request to waive the requirement for certified cost or pricing data. The DOD official reviewing the waiver request discovered that the fuel system had previously been determined commercial for another helicopter program and the Director of Defense Pricing concurred with that commerciality determination. Contracting Officers Found Prior Commercial Item Determinations Not Always Accurate Some of our case studies exhibited challenges related to prior commercial item determinations: The Navy contracted for a radio used in a variety of aircraft throughout DOD. The contracting officer stated that the radio had been considered commercial for 20 years. However, for the most recent follow-on contract, the contracting officer, who was new to the program, reviewed the prior determination and found it to be in error. In the prior determination the radio was compared to another radio considered noncommercial. As part of his review for the new determination, the contracting officer consulted with Air Force officials because they procure the same radio for some of their aircraft programs. The contracting officer ultimately determined that the radio was, in fact, commercial by comparing it—at the suggestion of the Air Force—to a different radio with similar features that is sold commercially to the public. According to the contracting officer, the Navy also benefited because the radio it had purchased for 20 years was cheaper and more capable than the commercially sold radio to which it was compared. For a $2.5 billion Air Force sole-source contract, the prime contractor asserted that a cargo part, called a winch—which had previously been sold to the government as a commercial item—was commercial.
However, the contracting officer reviewed the support for the prior commercial item determination and found it was based on sales to a holding company for a foreign government. Additional information requested and received included catalog prices and the invoice to the foreign holding company. The contracting officer determined this support was not sufficient for determining commerciality because sales to foreign governments were not considered commercial sales. Additionally, market research did not yield any commercial sales or evidence that the part was sold in the commercial marketplace. The part was determined noncommercial. The National Defense Authorization Act for Fiscal Year 2016 states that contracting officers may presume that a prior commercial item determination made by a DOD component serves as a determination for subsequent procurements of the item. In fact, if a previous determination is not used, a contracting officer must request that the head of the contracting activity review the prior determination and either confirm its validity or issue a revised determination. Most contracting officers with whom we spoke indicated that prior determinations should be reviewed to determine if they were made under similar terms and conditions and whether circumstances have changed since the determinations were made. We found diverse opinions among contracting officers on whether they would elevate concerns about a previous determination to the head of the contracting activity. Some contracting officers said they would elevate the determination if they had supporting data while others would be hesitant under most circumstances due to the extensive process involved.
Challenges in Making Commercial Item and Price Reasonableness Determinations Did Not Typically Delay Contract Award Despite the different factors involved, for most of our case studies, challenges in making the commercial item and price reasonableness determinations did not ultimately affect the government’s ability to award the contract as planned. The time it took for the contractor to provide information to the government and the government to make a determination ranged from a few days to over a year. In most of our case studies, contracting officials said that this time alone did not delay contract award because other factors, such as staff changes or awarding multiple contracts at the same time, also slowed the process. However, in two of our 15 cases, contracting officers told us they were delayed in awarding contracts when the contractor did not provide the requested information in the anticipated timeframe. In one example, an Army contracting official told us that a contract award was delayed when a subcontractor did not provide information to the contractor to support its commerciality assertion. The contracting officer noted that this delay also placed the program at risk of a funding loss because the service reallocated funding to another program that it viewed as less risky. Finally, contractors told us that they have taken steps to improve how they assert the commerciality of their items. For example, several contractors now use standardized forms to make commercial item assertions and keep prior assertions in a centralized place. Several contractor representatives we spoke with also told us that they have an internal panel of experts review commercial item and price reasonableness assertions to ensure consistency and that the assertions meet federal regulations. The contractors’ hope is that these forms and processes will help reduce the back and forth in requesting information among the government, prime contractor, and subcontractors.
In addition, some contractor representatives told us that they work with the DCMA Commercial Item Group to better understand what information contracting officers are requesting and to obtain assistance with subcontractors that are unwilling to provide information to the prime contractor. When Information Needed to Make Commercial Item and Price Reasonableness Determinations Is Not Readily Available, Contracting Officers’ Options Are Limited Some contracting officials told us that they have few options at their disposal when they have difficulty obtaining information from the contractor to make a commercial item or price reasonableness determination in a sole-source environment. For example: For a $2 million Army task order for engineering services to upgrade navigation software and several GPS units with these upgrades, the contracting officer stated that procuring from an alternative source was not an option because this GPS was unique to the program and qualifying a different GPS would cost an estimated $50 million. In a nearly $2 million sole-source delivery order for Blackhawk helicopter fuel tanks, the Army contracting official told us that the program needed this fuel tank because the tank’s configuration was specific to the helicopter. As a result, the contracting official said they could not walk away from the contractor. The contracting official further noted that certifying an item from a second source would be cost and time prohibitive for the government. Although in most of our sole-source case studies other options (e.g., contracting with a different vendor) were not viewed as feasible, we did have one case in which DOD chose not to award a contract when the government and contractor could not agree on a reasonable price. DLA wanted to negotiate a long-term contract for night vision goggles, but after the contracting officer made repeated attempts to obtain data from the contractor, they could not agree on a fair and reasonable price.
The prices were over 45 percent higher than prices that DLA had previously paid for the same item. As a result, the acquisition was canceled, and according to the contracting officer, the government plans to buy quantities as needed through an existing vehicle. Another option is to elevate issues to DOD management, which can make a determination on whether an item is commercial and is being offered at a fair and reasonable price. One example from our case studies is a $1.7 billion Army sole-source contract for helicopter engines. The contractor asserted commerciality for the engines, which had historically been procured as a noncommercial item. After extensive market research, the contracting officer asked for information from the contractor to support its commerciality assertion, but had difficulty obtaining it. According to the contracting officer, the Army discussed the possibility of not awarding this contract, but this was not considered feasible since the engine is used in multiple aircraft. After months of back and forth between the contracting officer and contractor, this commerciality issue was elevated to the Director of Defense Pricing, who agreed with the contracting officer’s assessment that the engines were not commercial and procured them on that basis. Both Formal and Informal Information Sharing Efforts Exist, but With No Comprehensive Strategy DOD has taken steps to share more information across the department to inform commercial item and price reasonableness determinations, but efforts to date are in early stages of development or happening informally across the department. Despite these efforts, contracting officers still face challenges in obtaining adequate information to make informed commercial item and price reasonableness determinations, in part because no comprehensive information sharing strategy exists to outline responsibilities and funding of these efforts.
DOD officials told us they plan to explore other options for the sharing of commercial item information, such as communities of practice, but have not made any formal plans. One information sharing effort still in its early stages is the DCMA Commercial Item Group’s publicly available database, created in 2017 to centralize commercial item information across DOD. The database, however, has not been fully established as an effective tool. In its current form it consists of a spreadsheet primarily listing items that contracting officers have determined to be commercial. According to DCMA Commercial Item Group officials, the database contains fewer items than expected because not all DOD contracting officers have submitted their commercial item determinations. The Office of Defense Procurement and Acquisition Policy updated its Guidebook for Acquiring Commercial Items in January 2018 to state that a commercial item determination is not complete until the contracting officer submits it to the DCMA Commercial Item Group along with a summary of pricing information. These submissions are meant to improve consistency and efficiency in making commercial item determinations. On February 22, 2018, the Air Force Deputy Assistant Secretary for Contracting issued a memorandum that reminded its contracting officers of this responsibility. We found that the database has limitations. For example, it includes only a list of items evaluated and not the results of recommendations made on commerciality by the DCMA Commercial Item Group. These recommendations can be obtained by contacting the office directly. DCMA officials stated results of their recommendations are specifically not included in the public database because of concerns that a prime contractor may prefer a subcontractor with a commercial item determination over another without one. 
Most commercial item determinations included in the database go back only to 2016, since this is when the DCMA Commercial Item Group began collecting them. Additionally, DCMA Commercial Item Group officials said they have no funding to support the database. Officials plan to meet with DOD’s Office of Defense Procurement and Acquisition Policy to discuss funding and other potential systems to maintain the information as well as provide DOD officials with direct access to copies of previous determinations and related information. Defense Procurement and Acquisition Policy and DCMA officials acknowledged that DOD has not yet determined who is responsible for the funding and upkeep of this information. Internal control standards promote assigning responsibility and delegating authority to key roles to achieve an organization’s objectives. Without appropriate funding and clearly defined roles and responsibilities for management and upkeep of the database, its effectiveness as a tool to provide contracting officers with information to help make commercial item determinations will continue to be limited. While the database serves as a means to formally share information to help contracting officers make commercial item and price reasonableness determinations, contracting officers in our case studies noted instances where informal sharing of information between programs and services led to improved outcomes, such as a lower price. For example: In a $257 million sole-source MQ-9 aircraft contract, the Air Force contracting team questioned whether a modified commercial engine being provided by a subcontractor was offered at a fair and reasonable price. While the Air Force contracting team relied on uncertified cost and pricing data provided by the subcontractor, a contracting official told us that the team also relied on information shared by Air Force officials in other programs that were procuring similar commercial items at the same time.
The contracting team discovered that another contracting official obtained a lower price for a similar commercial item, and as a result, used this information to negotiate a lower price. In the procurement for radios used in a variety of aircraft, as discussed earlier, Navy contracting officials used informal information sharing to make a commercial item determination. The Navy obtained information from the Air Force, which was procuring the same radio and which had performed a review in January 2017 that it shared with the Navy. The review noted that other similar commercial radios existed and that a comparison of this radio to these other commercial radios could help determine that the radio is commercial. Navy contracting officials, using the Air Force’s review as well as their own technical analysis, determined the radios were a modified commercial item. Despite the creation of the database and the informal information sharing that occurs, contracting officers still face challenges in obtaining adequate information to make informed commercial item and price reasonableness determinations. Specifically, DOD lacks a strategy for improving the sharing of commercial item and price reasonableness information across the department through efforts such as the DCMA Commercial Item Group’s database. Internal control standards promote effective sharing of information to ensure managers have the information they need to make informed decisions. In addition, internal control standards state that management should communicate information internally and assign responsibilities for key roles while also considering the cost necessary to communicate the information. In an environment where information is difficult to obtain from the contractor, as we have outlined in this report, the ability for contracting officers to have easy access to all necessary commerciality and pricing information within DOD is critical.
If DOD does not have such information easily available, contracting officers will continue to struggle with obtaining all the information they need to make informed and efficient commercial item and price reasonableness determinations. Conclusions When dealing with a limited marketplace and limited price data, determining commerciality and price reasonableness can be challenging for DOD’s contracting staff. Ultimately, the effectiveness of determining commerciality and fair and reasonable prices will depend on what meaningful information the government successfully obtains to conduct its analysis. Therefore, information sharing within the department is critical in helping DOD’s contracting officers determine commerciality and reasonable prices on DOD’s acquisitions. As our findings show, DOD has made some efforts to facilitate the sharing of information, such as establishing the DCMA Commercial Item Group. This group, in turn, set up a database to increase the accessibility and utility of commercial and pricing data. But the database is not yet robust enough to eliminate the need for more sharing of information—formal or informal—across the department. Enhancing information sharing efforts could address some of the challenges we identified. Further, clearly defining the roles and responsibilities for management of the database and identifying viable funding sources to support the upkeep of the database will help ensure it becomes a useful resource for contracting officials.
Recommendation for Executive Action We are making the following recommendation to DOD: The Director of Defense Procurement and Acquisition Policy should work with the Defense Contract Management Agency to develop a strategy for sharing information related to commerciality and price reasonableness determinations across DOD, including: (1) a plan to increase the information available in the Commercial Item Group’s database; (2) alternative mechanisms to share information, either formal or informal; and (3) assignments of roles and responsibilities with regard to sharing commercial item information, including how the database should be funded, supported, and maintained. Agency Comments We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix III, DOD concurred with our recommendation, stating that it plans to issue a policy memo requiring all commercial item determinations made after September 30, 2018 to be included in the existing commercial item database. DOD further stated that it will update its commercial item determination form to enhance informal information sharing. In addition, DOD stated that the Director of Defense Pricing within the Defense Procurement and Acquisition Policy office and the Director of DCMA will enter into a memorandum of agreement specifying roles and responsibilities in determining commercial item policy and funding the commercial item database. DOD also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committee, the Secretary of Defense, and the Director of Defense Procurement and Acquisition Policy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to: (1) identify the factors that influenced the Department of Defense’s (DOD) commercial item and price reasonableness determinations and (2) assess the extent to which DOD has taken steps to make information available to help make these determinations. To identify factors that influence the process and what DOD has done to address them, we reviewed relevant sections of the Federal Acquisition Regulation (FAR); Department of Defense Federal Acquisition Regulation Supplement; DOD memorandums; policy, guidance, and instructions related to the acquisition of commercial items, including the Guidebook for Acquiring Commercial Items Part A: Commercial Item Determination and Part B: Price Reasonableness Determination; and service-specific guidance regarding commercial items. To assess challenges in making commercial item and price reasonableness determinations, we identified, from a variety of sources, a non-generalizable sample of contracts that DOD officials and contractors reported as contracts where it was difficult to make commercial item determinations, price reasonableness determinations, or both. Due to limitations of the Federal Procurement Data System-Next Generation (FPDS-NG) we could not identify all DOD commercial item acquisitions in the data system, specifically contracts that had been coded as having used procedures other than FAR Part 12, Acquisition of Commercial Items. Additionally, contracts which had issues in making commercial item or price reasonableness determinations would not be identifiable in FPDS-NG.
Due to these limitations, we requested that three DOD services – Air Force, Army, and Navy – and the Defense Logistics Agency (DLA) each provide us with five contracts that had points of contention with the commercial item determination or the price reasonableness determination, either at the prime contract or subcontract level. We also identified contracts by asking officials at the Defense Contract Management Agency (DCMA) Commercial Item Group and the Navy Price Fighters for contracts as well as identified contracts through previous GAO work. Additionally, we asked contractors to identify contracts they believed had issues in determining commerciality and/or price reasonableness. One contractor identified two contracts, which we reviewed, but did not find to have any issues concerning commerciality or price reasonableness. From these requests we collected a non-generalizable sample of 56 contracts for commercial items. From the non-generalizable sample of 56 contracts, we selected 15 contracts awarded between 2010 and 2018 that met various criteria as case studies. We selected 4 case studies from the Air Force, 4 from the Army, 5 from the Navy, and 2 from DLA. The 15 case studies were selected to represent: (1) multiple services; (2) a variety of issues with commercial item or price reasonableness determinations; (3) recurring prime contractors or subcontractors; and (4) a mix of products and services acquired. We conducted an in-depth review of these contracts and selected related orders to assess what challenges occurred when the contracting officer was determining whether an item was commercial and whether the price was fair and reasonable, and why these challenges occurred. To assess challenges in making commercial item and price reasonableness determinations, we reviewed the contract file documentation for the 15 case studies, and interviewed contracting and pricing officials.
We reviewed documentation including commercial item determinations, price negotiation memorandums, market research, and DCMA Commercial Item Group and Defense Contract Audit Agency reports. We also interviewed contracting officials and contractors to obtain perspectives on how an item was determined to be commercial and subsequently determined to be offered at a fair and reasonable price. We interviewed contracting officers to obtain their views on the effect the new Guidebook for Acquiring Commercial Items and recently passed legislation would have on these challenges, and how they might affect contracts in the future. We interviewed officials from the DCMA Commercial Item Group to understand how they assist contracting officers in making determinations, and about the publicly available database that centralizes commercial item information. We also reviewed this database to understand what types of information it contained. Additionally, we discussed the management and funding of the database with the Office of Defense Procurement and Acquisition Policy. We interviewed contractors to discuss commercial item and price reasonableness issues on the selected contracts, discuss general areas of concern with regard to commercial item and price reasonableness determinations, and identify other contentious contracts. We conducted this performance audit from July 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: DOD General Process for Making Commercial Item and Price Reasonableness Determinations applicable to most of the contracts in our case studies because all but one of the contracts were awarded before the DFARS changes were implemented. Furthermore, where the Federal Acquisition Regulation (FAR) and DFARS differed in terminology (e.g., the FAR noted a requirement for “data” to determine price reasonableness but the DFARS noted a requirement for “information”), the table and report use the DFARS terminology. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tatiana Winger (Assistant Director), Emily Bond, Jared Dmello, Lorraine Ettaro, Julie Hadley, Julia Kennon, Timothy Moss, Miranda Riemer, Raffaele (Ralph) Roffo, and Carmen Yeung made key contributions to this report.
Why GAO Did This Study DOD buys goods and services from the commercial market to take advantage of new innovations and save on acquisition costs. However, the department's process for determining whether an item can be purchased commercially—and at a fair and reasonable price—can be long and challenging in certain situations. GAO was asked to review this process. This report identifies (1) factors that influenced DOD's commercial item and price reasonableness determinations, and (2) the extent to which DOD has taken steps to make information available to facilitate these determinations. To conduct this work, GAO examined federal regulations and guidance and selected case studies, which included a non-generalizable sample of 15 contracts awarded between fiscal years 2010 and 2018. GAO identified the case studies based on input from multiple sources that those contracts involved commercial item or price reasonableness determination challenges. GAO interviewed government and contractor officials responsible for those contracts. What GAO Found The Department of Defense (DOD) has a process to determine if an item is available for purchase in the commercial marketplace at a reasonable price. Among selected case studies, GAO found four interrelated factors, each with its own set of challenges, that influenced how and whether DOD determines if an item is commercial and if its price is reasonable. These factors are: Availability of marketplace information: Market research is a key component that informs commercial item and price reasonableness determinations. However, GAO found that obtaining market-related information can be challenging because the products DOD requires may not be widely available in the commercial marketplace. Ability to obtain contractor data: When adequate market information is not available, DOD officials turn to the contractor for information to support the commercial item determination or data to make a price reasonableness determination. 
In the case studies GAO reviewed, most contractors provided relevant information, but not without delays and challenges. For example, while pricing data is key to DOD's ability to determine price reasonableness, several contracting officers reported that contractors were less willing to provide this data once an item was determined commercial. Extent of modifications to an item: When a commercial item must be modified to meet DOD's requirements, DOD officials may have to take additional steps, such as completing a comparative analysis of commercial items to the modified item. For example, in one case, a commercial navigation system had to be modified to withstand an explosion. To make the commercial item determination, DOD officials had to make an on-site visit to the manufacturer to gain an in-depth understanding of the services provided and to ensure they met DOD requirements. Reliability of prior commercial item determinations: Contracting officers may presume that an item is commercial if a DOD component had previously made that determination. However, GAO found that, in some cases, contracting officers reviewing a prior determination discovered that it was based on inaccurate information. DOD has taken steps to share more information across the department to inform these determinations, but efforts are in early stages of development or informal. No comprehensive information sharing strategy exists. In 2016, DOD established the Commercial Item Group within the Defense Contract Management Agency to provide recommendations on commercial item determinations. This group created a publicly available database to centralize commercial item information across DOD. However, this effort is incomplete. Also, according to DOD officials, DOD has not yet established who is responsible for the funding and upkeep of the information. Additionally, GAO case studies included instances where informal information sharing resulted in better outcomes, such as a lower price. 
Creating more opportunities to share information internally is crucial for DOD to facilitate a timely and efficient process in making these determinations and ensuring the best financial outcome for the government. What GAO Recommends GAO recommends that DOD develop a strategy for how information related to commerciality and price reasonableness determinations should be shared across the department, including making improvements to the existing database and determining responsibilities for its funding and upkeep. DOD agreed with GAO's recommendation and stated that actions will be taken starting in 2018 to address it.
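As a purely illustrative aside, the fallback logic described above under "Ability to obtain contractor data"—rely on market research when adequate marketplace information exists, otherwise request data from the contractor—can be sketched in a few lines. The function name and the three-way outcome are hypothetical simplifications; the actual FAR/DFARS process involves many more considerations:

```python
def pricing_basis(market_data_available: bool, contractor_provides_data: bool) -> str:
    """Illustrative (hypothetical) choice of basis for a price
    reasonableness determination. Real determinations involve far
    more factors than this simplified fallback order."""
    if market_data_available:
        return "market research"          # e.g., prices of comparable commercial sales
    if contractor_provides_data:
        return "contractor pricing data"  # e.g., uncertified cost or sales data
    return "escalate"                     # e.g., seek DCMA Commercial Item Group support


print(pricing_basis(True, False))   # market research
print(pricing_basis(False, True))   # contractor pricing data
```

The sketch mirrors the report's observation that contractor data becomes the decisive input only when marketplace information is inadequate.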
GAO-18-151SP
Section 1: Economic Analysis For the purpose of this report, an economic analysis is defined as an analysis that is intended to inform decision-makers and stakeholders about the economic effects of an action. Economic effects (hereafter also called “effects”) commonly include costs, benefits, and/or economic transfers (for example, transfer payments). Action is defined to include a government law, rule, regulation, project, policy, or program. An action may be examined in the context of legislation, regulation, advocacy, agency operations, or in response to certain events (such as a natural disaster, for example). An economic analysis may be prospective, examining an action that could be taken, or retrospective, examining the outcome of an action that has already been taken. Examples of economic analyses include: 1. An economic analysis of the costs of a government program, project, or policy. 2. An economic analysis of the benefits and costs of a government rule. 3. An economic analysis of the impact of a proposed or existing regulation on regulated entities and consumers. 4. An economic analysis of an action in response to an event (for example, an analysis of a federal response to a natural disaster). 5. A benefit-cost analysis or cost-effectiveness analysis. Section 2: Key Elements of an Economic Analysis GAO reviewed handbooks and guidance on economic analysis that have been issued by various government agencies and institutions and consulted with experts. (Appendix I details GAO’s objective, scope, and methodology.) In this section, GAO synthesizes the economic elements and concepts embodied in this literature for use in its assessment methodology for the review of an economic analysis. GAO identifies five key methodological elements to the baseline structure of an economic analysis. These key elements are: 1. Objective and scope—the objective and scope of the analysis. 2. Methodology—the methodology used to examine the economic effects. 3. 
Analysis of effects—the analysis of economic effects. 4. Transparency—the transparency of the analysis of economic effects. 5. Documentation—the documentation included in the analysis. These key elements are standard to the structure of analyses, generally speaking. That is, an analysis is performed to address an objective; the analysis is scoped to address that objective; the analysis adopts a methodology, which is used to analyze the economic effects of interest; and the analysis is transparent and properly documented. The emphasis on transparency is consistent with the final implementation guidelines of the Office of Management and Budget (OMB). “The primary benefit of public transparency is not necessarily that errors in analytic results will be detected, although error correction is clearly valuable. The more important benefit of transparency is that the public will be able to assess how much of an agency’s analytic result hinges on the specific analytic choices made by the agency.” Having identified key elements to the structure of an economic analysis, we synthesized, for each key element, the economic concepts embodied in the literature that we reviewed. For example, what might we be assessing under the key element: Objective and Scope? We considered economic concepts commonly identified across the documents we reviewed, and incorporated feedback from the experts and agencies with whom we consulted. The documents that we reviewed included, among others, Circulars A-94 and A-4 released by OMB, handbooks for economic analysis from federal and international agencies, such as the U.S. Environmental Protection Agency, the U.S. Department of Defense, the U.S. Department of Transportation, the Organization for Economic Co-operation and Development (OECD), and the United Kingdom’s HM Treasury. These documents generally outline a methodical structure to an economic analysis. 
This methodical structure takes the form of a set of issues or sequence of steps to address while conducting the analysis. These issues or steps, in turn, embody economic concepts. For the purpose of developing our assessment methodology, we synthesized and categorized these economic concepts under the key elements that we identified—as listed below. The concepts for each listed key element are not intended to be exhaustive and do not supplant or alter existing requirements for economic analysis. Depending on the context in which an action is examined, GAO’s assessment of a key element could exclude some concepts, or extend beyond the concepts listed for that element. In such cases, GAO’s written assessment of the relevant key element will specify the concepts that were actually considered in the review process. The five key elements and economic concepts in GAO’s assessment methodology for an economic analysis are: 1. Key Element: Objective and Scope—the objective and scope of the analysis. The economic analysis explains the action examined and includes a rationale and justification for the action. The analysis states its objective. The scope of the analysis is designed to address this objective. Unless otherwise justified, the analysis focuses on economic effects that accrue to citizens and residents of the United States, and its time horizon is long enough to encompass the important economic effects of the action. 2. Key Element: Methodology—the methodology used to examine the economic effects. The economic analysis examines the effects of the action by comparing alternatives, using one of them as the baseline. Unless otherwise justified, it considers alternatives that represent all relevant alternatives, including that of no action. The analysis defines an appropriate baseline. 
The analysis justifies that the world specified under each alternative considered (including the baseline) represents the best assessment of what the world would be like under that alternative. The analysis identifies the important economic effects for each alternative considered, their timing, and whether they are direct or ancillary effects. 3. Key Element: Analysis of Effects—the analysis of economic effects. Where feasible, the economic analysis quantifies the important economic effects and monetizes them using the concept of opportunity cost. The analysis applies the criterion of net present value, or related outcome measures, to compare these effects across alternatives. It controls for inflation and uses economically justified discount rates. Where important economic effects cannot be quantified, the analysis explains how they affect the comparison of alternatives. Where the equity and distributional impacts are important, the full range of these impacts is separately detailed and quantified, where feasible. 4. Key Element: Transparency—the transparency of the analysis of economic effects. The economic analysis describes and justifies the analytical choices, assumptions, and data used. The analysis assesses how plausible adjustments to each important analytical choice and assumption affect the estimates of the economic effects and the results of the comparison of alternatives. The analysis explains the implications of the key limitations in the data used. Where feasible, the analysis adequately quantifies how the statistical variability of the key data elements underlying the estimates of the economic analysis impacts these estimates, and the results of the comparison of alternatives. 5. Key Element: Documentation—the documentation included in the analysis. The economic analysis is clearly written, with a plain language summary, clearly labeled tables that describe the data used and results, and a conclusion that is consistent with these results. 
The analysis cites all sources used and documents that it is based on the best available economic information. The analysis documents that it complies with a robust quality assurance process and, where applicable, the Information Quality Act. The analysis discloses the use and contributions of contractors and outside consultants. In summary, GAO identifies five key elements, with associated economic concepts, to the structure of an economic analysis. GAO’s assessment methodology then examines the extent to which an economic analysis properly dealt with these key elements. Section 3: Assessment Methodology GAO’s assessment methodology has two steps: (1) an assessment of each individual key element and (2) an overall assessment based on the assessment of the individual key elements. Below, these two types of assessment are discussed. While GAO’s assessment methodology typically considers all five key elements, there may be cases, for example depending on the scope of an engagement, where it may consider only certain key elements. In those cases, GAO may not be able to make an overall assessment. Step 1: Assessing Each Individual Element The first step in the review process is an assessment of the extent to which the economic analysis has considered and properly dealt with each key element. For each element, the outcome of the review is a written assessment and an assessment score. The written assessment details the extent to which the analysis considered and properly dealt with the element. To the extent that important limitations are identified in the review, the written assessment describes these limitations. This written assessment informs the assessment score, which is one of three mutually exclusive scores: 1. fully met—that is, the economic analysis has considered and properly dealt with the element; 2. partly met—that is, the economic analysis has only partly considered and properly dealt with the element; 3. 
not met—that is, the economic analysis has not considered or not properly dealt with the element. If the outcome is “partly met” or “not met,” the written assessment should describe the limitations of the analysis. Assessments are made based on expertise in economics and professional judgment. The guiding principles of a review are objectivity, integrity, and compliance with generally accepted government auditing standards (GAGAS). An assessment is contextual—that is, it is conditional on the evidence underlying the action examined, and the context in which it takes place. The assessment is also conditional on the reasonably obtainable information available at the time of the economic analysis that is being reviewed. A caveat may be added to the assessment if new information that would affect it has become available and is reasonably obtainable since the analysis was made. Should the review of a key element exclude certain concepts, or extend beyond the concepts listed for that key element, GAO’s written assessment of the key element will specify the concepts that were actually considered in the assessment process. Step 2: Overall Assessment Once each element has been individually reviewed, an overall assessment is made of the extent to which the economic analysis accordingly informs decision-makers and stakeholders about the economic effects of the action examined. Four outcomes are possible: 1. The analysis informs decision-makers and stakeholders about the economic effects of the action examined. 2. The analysis informs, with caveats, decision-makers and stakeholders about the economic effects of the action examined. (If this is the outcome, the review should describe the caveats in writing.) 3. The analysis needs additional work to inform decision-makers and stakeholders about the economic effects of the action examined. (If this is the outcome, the review should describe in writing the important limitations of the economic analysis.) 4. 
The analysis does not inform decision-makers and stakeholders about the economic effects of the action examined. (If this is the outcome, the review should describe in writing the deficiencies of the analysis.) An economic analysis that has fully met all the key elements should be identified as informing decision-makers and stakeholders about the economic effects of the action examined (this is outcome 1). This determination is neither an endorsement of the specific findings and conclusions of the analysis, nor is it a determination that these are correct. For example, a prospective analysis is predictive of a potential result, but cannot definitively determine the result. It is a determination that the analysis is adequately and properly designed, and accordingly, it can inform the public discourse about the economic effects of the action examined. The written statements added to outcomes 2–4 should refer to the review of the individual elements. The difference between outcomes 2 and 3 is a matter of degree and professional judgment. Generally speaking, the caveats under outcome 2 will be relatively minor or few, whereas the important limitations under outcome 3 are likely to be more consequential. Should the economic analysis suffer from major deficiencies in meeting the key elements, then outcome 4 may be appropriate. Appendix I: Objective, Scope, and Methodology Our objective was to identify, for the purpose of developing GAO’s assessment methodology for the review of economic analysis, key methodological elements to the structure of an economic analysis that is intended to inform decision-makers and stakeholders about the economic effects of a public action. To address this objective, we reviewed existing Circulars issued by the Office of Management and Budget (OMB), handbooks for economic analysis issued by federal agencies, international government agencies and institutions, and established textbooks on economic theory and benefit-cost analysis. 
We also solicited feedback from economics experts (in academia and public policy) and international audit agencies. We identified, in our document review, five key elements to the structure of an economic analysis. These key elements are: (1) objective and scope; (2) methodology; (3) analysis of effects; (4) transparency; (5) documentation. These key elements are standard to the structure of analyses, generally speaking. That is, an analysis is performed to address an objective; the analysis is scoped to address that objective; the analysis adopts a methodology, which is used to analyze the economic effects of interest; and the analysis is transparent and properly documented. Having identified key elements to the structure of an economic analysis, we synthesized, for each key element, the economic concepts embodied in the literature we reviewed. For example, what might we be assessing under the key element objective and scope? To do so, we looked for economic concepts commonly identified across the documents we reviewed and incorporated feedback from the experts and agencies that we consulted with. We then categorized these economic concepts across the key elements that we identified. Among the documents we reviewed in our process were the following: OMB Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, Revised (Oct. 29, 1992). Circular A-94 provides a checklist of whether an agency has considered and properly dealt with all the elements for sound benefit-cost and cost-effectiveness analyses. OMB Circular A-4, Regulatory Analysis (Sept. 17, 2003). Circular A-4, released in collaboration with the Council of Economic Advisors, identifies key elements to the structure of economic analyses in regulatory proceedings. Office of Management and Budget, Office of Information and Regulatory Affairs, Regulatory Impact Analysis: A Primer (Washington, D.C.: The White House). 
The purpose of the primer is to offer a summary of OMB Circular A-4. Agency-issued handbooks for economic analysis, such as those issued by the U.S. Environmental Protection Agency, the U.S. Department of Defense, the U.S. Department of Transportation, and the Organization for Economic Co-operation and Development (OECD). We also reviewed The Green Book: appraisal and evaluation in central government, issued by HM Treasury, Government of the United Kingdom (London: July 2011). HM Treasury describes The Green Book as a best practice guide for all central departments and executive agencies, and covers projects of all types and size. The guide applies to appraisals—defined as any analysis used to support a government decision to adopt a new policy, or to initiate, renew, expand or re-orientate programs or projects that would result in measurable benefits and/or costs to the public—and evaluations—defined as retrospective analysis of a policy, program or project at its completion, conclusion or revision. The National Academies of Sciences, Engineering, and Medicine, Guidelines for the review of Reports of the National Academies of Sciences, Engineering, and Medicine. While these guidelines are specific to the review of reports issued by the National Academies and outline review criteria that apply across a broad range of disciplines, not just economics, they provide review criteria for scientific analysis. We conducted our work from June 2017 to April 2018 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. 
Third-party Comments We provided a draft of this product for third-party outside review to experts and specialists at various U.S. and international government agencies. Reviewers provided technical comments, which we incorporated as appropriate. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Carol Bray, Timothy Carr, Tim Guinane, Kathleen Scholl, Paola Tena, and Elaine Vaurio made key contributions to this report.
Why GAO Did This Study We prepared this report to answer the question: What are key methodological elements of an economic analysis that is intended to inform decision-makers and stakeholders? What GAO Found GAO identifies five key methodological elements to the baseline structure of an economic analysis: Objective and scope, Methodology, Analysis of effects, Transparency, and Documentation. GAO's assessment methodology evaluates each key element and provides an overall assessment based on the assessment of the individual key elements.
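The element-by-element scoring and overall assessment described above can be formalized in a short sketch. The three score labels and the four overall outcomes come from the report's text; the aggregation thresholds below are an illustrative assumption, since GAO applies professional judgment rather than a fixed formula:

```python
from typing import Dict

# The five key elements named in the report.
ELEMENTS = ["objective and scope", "methodology", "analysis of effects",
            "transparency", "documentation"]

def overall_assessment(scores: Dict[str, str]) -> str:
    """Map per-element scores ('fully met' / 'partly met' / 'not met')
    to one of the four overall outcomes. Only the first rule (all
    elements fully met -> 'informs') is stated in the report; the
    remaining thresholds are hypothetical, not GAO's rule."""
    assert set(scores) == set(ELEMENTS)
    not_met = sum(s == "not met" for s in scores.values())
    partly = sum(s == "partly met" for s in scores.values())
    if not_met == 0 and partly == 0:
        return "informs"                    # outcome 1
    if not_met == 0 and partly <= 1:
        return "informs, with caveats"      # outcome 2 (judgment call)
    if not_met <= 1:
        return "needs additional work"      # outcome 3
    return "does not inform"                # outcome 4


print(overall_assessment({e: "fully met" for e in ELEMENTS}))  # informs
```

In GAO's methodology, the boundary between outcomes 2 and 3 is explicitly a matter of degree and professional judgment; the numeric cutoffs here only make that two-step structure concrete.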
GAO-18-363T
The Coast Guard Attributed IHiS Termination to Financial and Other Risks, after Spending Approximately $60 Million on the Project According to the Director of HSWL, who was appointed to the position in August 2015, financial, technical, schedule, and personnel risks led the Coast Guard’s Executive Oversight Council to decide to terminate the IHiS project in October 2015: Financial risks. Internal investigations were initiated in January 2015 and May 2015 to determine whether the HSWL Directorate had violated the Antideficiency Act by using incorrect funding sources and incorrect fiscal year funds for the IHiS project. The Coast Guard ordered project management and contractor staff to cease work on IHiS until a determination was made regarding the antideficiency violation. Technical risks. IHiS lacked an independent security assessment and full interface testing to ensure security and data integrity. In addition, key functionality for the system, such as user verification, had not been completed. Schedule risks. The HSWL Director stated that she requested that the Department of Defense’s (DOD) Defense Health Agency Solution Delivery Information Technology (IT) team independently validate the IHiS timelines and the status of the project in 2015 because of the identified technical risks and concerns as to whether the system would be ready to be piloted in the fall of 2015. According to the Director, the Defense Health Agency team projected the timeline for the first clinic implementation to be approximately 1 year later than originally estimated due, in part, to incomplete interfaces and workflows. Personnel risks. Although HSWL staff had been managing the IHiS project since it was initiated in 2010, Command, Control, Communications, Computers, and Information Technology (C4&IT) was directed to assume the oversight responsibilities for IHiS implementation in May 2015. 
This action was due to concerns, raised by the internal investigators looking into the potential Antideficiency Act violations, about the project’s adherence to established governance processes. By August 2015, the key HSWL project management personnel that had overseen the project since 2010 had been removed. As a result of the changes in staff, one vendor noted that it was unclear who the stakeholders, responsible parties, and decision makers were. According to an analysis conducted by the Coast Guard, which included obligations and expenditures from September 2010 to August 2017, the agency had obligated approximately $67 million for the IHiS project and, of that amount, had spent approximately $59.9 million at the time of its cancelation. In addition, over 2 years after the project’s cancelation, the Coast Guard continued to pay vendors. In this regard, it paid approximately $6.6 million to vendors between November 2017 and February 2018 to satisfy existing contractual obligations for services such as leased equipment that was damaged or missing; software licensing and support; a data storage center; and removal and shipment of equipment. Further, according to staff in Coast Guard’s Office of Budget and Programs, no equipment or software from the IHiS project could be reused for future efforts. The Coast Guard Could Not Demonstrate Effective Project Management, Lacked Governance Mechanisms, and Did Not Document Lessons Learned for the IHiS Project The Coast Guard could not demonstrate that it effectively managed and oversaw the IHiS project prior to its discontinuance, and did not document and share valuable lessons learned from the failed project. Specifically, although the Coast Guard was to follow its System Development Life Cycle (SDLC) Practice Manual to guide its management and oversight of the project, the agency could not provide complete evidence that it had addressed 15 of the 30 SDLC practices we selected for evaluation. 
For example, the Coast Guard could not demonstrate that it had conducted IHiS system testing, although the agency granted an authority to operate (ATO) and indicated in the ATO memorandum that the system had undergone some form of testing. The Coast Guard’s SDLC specifies that system testing is to take place prior to the issuance of an ATO. Project team members provided inconsistent explanations regarding whether or not documentation existed to demonstrate the actions taken to manage and oversee the project. The absence of the various documents and other artifacts that would support the required SDLC activities raises doubts that the Coast Guard took the necessary and appropriate steps to ensure effective management of the IHiS project. Further, although the Coast Guard developed charters for various governance boards to provide project oversight and direction, the boards were not active and the Chief Information Officer (CIO) was not included as a member of the boards. Taking steps to fully implement governance boards that include the CIO will be important to the Coast Guard’s oversight efforts in implementing a future EHR system and may decrease the risk of IT project failure. Lastly, although Coast Guard officials stated that lessons learned had been identified throughout the process of developing IHiS, as of 2 years after its cancelation, the agency had not documented and shared any lessons learned from the project and did not have established plans for doing so. Until the Coast Guard takes steps to document and share identified lessons learned with individuals charged with developing and acquiring its IT systems, opportunities to protect future systems against the recurrence of mistakes that contributed to the failure of IHiS will likely be missed. 
The Coast Guard Is Managing Health Records Using a Predominantly Paper Process, but Many Challenges Hinder Service Delivery In the absence of an EHR system, the Coast Guard is relying on a predominantly paper health record management process to document health care services for its nearly 50,000 military members. Currently, the Coast Guard’s clinical staff perform various manual steps to process each paper health record. For example, clinical staff schedule appointments for patients using Microsoft Outlook’s calendar feature and provide the patient with paper forms for completion upon his or her arrival. In addition, clinical staff must handwrite clinical notes in the paper health record during the appointment, as well as handwrite prescriptions, among other manual processes. In response to our survey, the 12 HSWL Regional Managers identified a number of challenges that clinics and sick bays in their regions had experienced in managing and maintaining paper health records. These challenges were grouped into 16 categories. Further, the 120 clinic and sick bay administrators who subsequently responded to a separate survey reported varying degrees to which they viewed each category as challenging. Figure 1 provides the clinic and sick bay respondents’ views of the top four challenges. With regard to these top four challenges to managing and maintaining paper health records, clinic and sick bay respondents offered the following examples: Incomplete records. Ninety-eight (82 percent) of the respondents reported incomplete records as challenging. In this regard, 34 of the survey respondents reported that not all records from the Coast Guard legacy EHR systems were printed out and included in patients’ paper health records as required before the systems were retired. Thus, they had no way to ensure the patients’ paper records were complete. Penmanship. 
Among the 91 (76 percent) survey respondents that reported penmanship as challenging, several respondents noted that it is difficult for staff to read illegible handwritten medical notes. This, in turn, results in difficulty determining the accurate diagnosis, the required prescription, or a referral. Tracking medications. According to 89 (76 percent) of the respondents, it is challenging to track medications without an EHR. For example, one administrator stated that staff members rely heavily on patients to remember what medications they are taking—potentially causing harm if patients cannot remember what medications they are taking and the medications have dangerous interactions. Amount of time to manage records. According to 86 (72 percent) of the respondents, managing paper health records is challenging and requires more time for staff to complete and file paperwork. Several respondents stated that the size of the paper health records has increased, resulting in additional time required to review and file records. The responding clinic and sick bay administrators described a range of alternative work-around processes that they have developed to help alleviate several of the challenges. Specifically, they reported having developed additional forms, tracking methods, and alternative processes, as well as having notified Coast Guard HSWL management of the challenges they face. However, these alternative processes may not provide sustained solutions to overcoming these challenges. Until the Coast Guard implements a new EHR solution, the challenges inherent in a predominantly paper process will likely remain. The Coast Guard Intends to Acquire a New EHR System, but Has Not Yet Chosen a Solution The Coast Guard has begun taking steps to acquire a new EHR system referred to as the Electronic Health Record Acquisition (eHRa).
The Coast Guard plans to manage and oversee the acquisition of eHRa through its non-major acquisition process (NMAP), as described in its Non-Major Acquisition Process (NMAP) Manual. NMAP requires formal approval reviews at three discrete knowledge points called acquisition decision events (ADE) and includes three phases to assess the readiness and maturity of the acquisition. The Coast Guard formally identified the need for a new EHR system on February 1, 2016, and obtained approval for the first of the three ADEs on February 13, 2016. It subsequently initiated market research activities by collecting cost, schedule, and capabilities information from commercial and government solution providers, including DOD and the Department of Veterans Affairs. The Coast Guard used the providers' responses to develop an alternatives analysis report that was completed in October 2017. The report recommended a solution based on performance, risk, cost, and schedule advantages. The report indicated that the Coast Guard plans to use the results of the alternatives analysis to refine the acquisition strategy, and to support the development of artifacts which are required to successfully achieve the ADE-2 milestone. Staff within the Acquisitions Directorate stated that they were also in the process of finalizing a life cycle cost estimate and a project plan for eHRa—documents necessary for ensuring that appropriate business decisions will be made regarding eHRa's logistics, affordability, and resources, among other things. As of December 2017, the Coast Guard had not yet made a final determination as to which option would be chosen as the solution for the eHRa acquisition. Implementation of Our Recommendations Should Better Position Coast Guard to Overcome Challenges with Paper Health Records Our report that is being released today contains four recommendations to the Coast Guard.
Specifically, we recommend that the Coast Guard (1) expeditiously and judiciously pursue the acquisition of a new EHR system; (2) ensure established processes required for the future acquisition or development of an EHR are effectively implemented and adequately documented; (3) direct the Chief Information Officer and the Chief Acquisition Officer to establish and fully implement project governance boards for the future EHR effort that include the Chief Information Officer; and (4) document any lessons learned from the discontinued IHiS project, share them with the new project management team, and ensure lessons learned are utilized for the future EHR effort. The Department of Homeland Security concurred with our four recommendations and identified actions being taken or planned to implement them. If the Coast Guard fully and effectively implements our recommendations, many of the challenges faced by its clinics and sick bays and the thousands of Coast Guard members utilizing its health services could be diminished. In summary, given the numerous challenges inherent with managing and maintaining paper health records, it will be important for the Coast Guard to prioritize obtaining an EHR for its thousands of members. Until a solution for its EHR system is chosen and successfully implemented, the agency is likely to continue to face these challenges. In addition, ensuring established project management and governance processes are effective, as well as documenting and sharing lessons learned, will be essential in avoiding past mistakes and helping to ensure a successful implementation of a future EHR solution at the Coast Guard. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. GAO Contact and Acknowledgments If you or your staff have any questions about this testimony, please contact David A.
Powner, Director, Information Technology Management Issues, at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Nicole Jarvis (Assistant Director), Ashfaq Huda (Analyst in Charge), Sharhonda Deloach, Rebecca Eyler, Monica Perez-Nelson, and Scott Pettis. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study In 2010, the Coast Guard initiated an effort—known as IHiS—to replace its aging EHR system with a new system that was to modernize various health care services for its nearly 50,000 military members. However, in October 2015, the Coast Guard announced that the modernization project would be canceled. GAO was asked to summarize its report that is being released today on the Coast Guard's actions related to its EHR modernization initiative. GAO's testimony specifically addresses Coast Guard's (1) reasons for deciding to terminate further IHiS development; (2) management and oversight actions for the discontinued project and whether lessons learned were identified; (3) current process for managing health records and the challenges it is encountering; and (4) plans for effectively implementing a new EHR system and the current status of its efforts. In preparing the report on which this testimony is based, GAO reviewed IHiS project expenditures; analyzed key project management documentation; surveyed Coast Guard's Regional Managers and clinical staff; and interviewed key staff. What GAO Found Financial, technical, schedule, and personnel risks led to the United States Coast Guard's (Coast Guard) decision to terminate the Integrated Health Information System (IHiS) project in 2015. According to the Coast Guard (a military service within the Department of Homeland Security), as of August 2017, $59.9 million was spent on the project over nearly 7 years and no equipment or software could be reused for future efforts. In addition, the Coast Guard could not fully demonstrate the project management actions taken for IHiS, lacked governance mechanisms, and did not document lessons learned for the failed project. In the absence of an electronic health record (EHR) system, the Coast Guard currently relies on a predominately paper health record management process to document health care services. 
Currently, the Coast Guard's clinical staff perform various manual steps to process each paper health record. Coast Guard Regional Managers and clinic and sick bay administrators informed GAO of the many challenges encountered in returning to a paper process. These challenges include the inability of some clinics to adequately track vital information such as medications—potentially causing harm to members if they take medications that have dangerous interactions. To help alleviate several of these challenges, the Coast Guard has developed alternative work-around processes. However, these alternative processes may not provide sustained solutions to overcoming these challenges. In February 2016, the Coast Guard initiated the process for acquiring a new EHR system. As of November 2017, agency officials had conducted research and recommended a solution based on performance, risk, cost, and schedule advantages. However, 2 years after canceling IHiS and moving toward a predominately manual process, the agency has not yet made a final determination on a solution. Successfully and quickly implementing an EHR system is vital to overcoming the challenges the Coast Guard currently faces in managing paper health records. The expeditious implementation of such a system can significantly improve the quality and efficiency of care to the thousands of Coast Guard active duty and reserve members that receive health care. What GAO Recommends In the report being released today, GAO is recommending that the Coast Guard (1) expeditiously and judiciously pursue a new EHR system, and in doing so (2) ensure key processes are implemented; (3) establish project governance boards; and (4) document lessons learned from the IHiS project. The Department of Homeland Security concurred with GAO's recommendations.
Background The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), as amended, establishes the process for states or tribal entities to request a presidential disaster declaration. The act also generally defines the federal government's role during the response and recovery after a disaster and establishes the programs and process through which the federal government provides disaster assistance to state and local governments, tribal entities, and individuals. In addition to its central role in recommending to the President whether to declare a disaster, FEMA has primary responsibility for coordinating the federal response when a disaster is declared as well as recovery, which typically consists of providing grants to assist state and tribal entities to alleviate the damage resulting from such disasters. Once a disaster is declared, FEMA provides assistance through the IA, Public Assistance, and Hazard Mitigation Assistance programs. For instance, some declarations may provide grants only for IA and others only for Public Assistance. Hazard Mitigation Assistance grants, on the other hand, are available for all declarations if the affected area has a FEMA-approved Hazard Mitigation plan. The process for requesting assistance is the same for the three types of assistance. Disaster Declaration Process Under the Stafford Act, states' governors or tribal chief executives may request federal assistance if state and tribal resources are overwhelmed after a disaster. As part of the request to the President, a governor or tribal chief executive must affirm that the state or tribe has implemented an emergency plan and that the situation is of such severity and magnitude that effective response is beyond the capabilities of the state or tribal entity, among other things.
After a state or tribe submits a request for disaster declaration through FEMA's regional office, the regional office is to evaluate the request and make a regional recommendation through the RVAR, which is submitted to FEMA headquarters for further review. The FEMA administrator then is to review the state's or tribe's request and the RVAR, and recommend to the President whether a disaster declaration is warranted. Figure 1 shows the process for a disaster declaration from the time a disaster occurs until the President approves or denies a declaration request. Five Programs Available under Individual Assistance The IA program provides financial and direct assistance to disaster victims for expenses and needs that cannot be met through other means, such as insurance. The IA comprises five different programs as shown below. When states or tribal entities request disaster declarations, they may request assistance under any or all of the five programs. Likewise, when the President makes a disaster declaration, the declaration may authorize IA which may also include any or all of the five IA programs. 1. Individuals and Households Program provides assistance to eligible disaster survivors with necessary expenses and serious needs which they are unable to meet through other means, such as insurance. According to FEMA headquarters officials, direct assistance is provided to individuals to meet housing needs. 2. Crisis Counseling Program assists individuals and communities by providing community-based outreach and psycho-educational services. 3. Disaster Legal Services provides assistance through an agreement with the Young Lawyers Division of the American Bar Association for free legal help to survivors who are unable to secure legal services adequate to meet their disaster-related needs. 4. Disaster Case Management Program involves a partnership between a FEMA disaster case manager and a survivor to develop and carry out a Disaster Recovery Plan. 5.
Disaster Unemployment Assistance provides unemployment benefits and reemployment services to individuals who have become unemployed as a result of a major disaster and who are not eligible for regular state unemployment insurance. The Six IA Regulatory Factors Used to Assess IA Declaration Requests In accordance with its responsibilities under the Stafford Act, FEMA issued a regulation in 1999 that outlines the six factors regional and headquarters officials are to consider when assessing requests for a disaster declaration and when developing a recommendation to the President for a federal disaster declaration. The regulation states that FEMA considers the six factors not only to evaluate the need for IA but also to measure the severity, magnitude, and impact of the disaster. The state or tribe provides information on these factors when submitting its disaster declaration request. The six factors for IA include the following: 1. Concentration of Damages—characterizes the density of the damage in individual communities. The regulation states that highly concentrated damages “generally indicate a greater need for federal assistance than widespread and scattered damages throughout a state.” For example, concentration of damage data includes the numbers of homes destroyed, homes with major or minor damages, and homes affected. 2. Trauma—the regulation provides conditions that might cause trauma including large numbers of injuries and deaths, large-scale disruption of normal community functions, and emergency needs such as extended loss of power or water. 3. Special Populations—FEMA considers the impact of the disaster on special populations, such as low-income populations, the elderly, or the unemployed. 4. Voluntary Agency Assistance—involves the availability and capabilities of voluntary, faith, and community-based organizations, and state and local programs to help meet both the emergency and recovery needs of individuals affected by disasters. 5. 
Insurance Coverage—addresses the level of insurance coverage among those affected by disasters. Because disaster assistance cannot duplicate insurance coverage, as recognized in the regulation, if a disaster occurred where almost all of the damaged dwellings were fully insured for the damage that was sustained, FEMA could conclude that a disaster declaration by the President was not necessary in accordance with this factor. 6. Average Amount of Individual Assistance by State—according to the regulation, there is no set threshold for recommending IA. However, it states that the averages, depicted in table 1, may prove useful to states and voluntary agencies as they develop plans and programs to meet the needs of disaster victims. The inference is that these averages generally indicate the amount of damages that could be expected for a state based on its size (small, medium, and large). The averages contained within the regulation and depicted in table 1 are based on disasters that occurred between July of 1994 and July of 1999. The President Declared 57 Percent of All IA Requests from 2008 through 2016, with Total Obligations of Approximately $8.6 Billion The Number of IA Declarations Varied by Region and Severe Storms Were the Most Frequent Disaster Type The President declared 57 percent of all IA declaration requests from calendar years 2008 through 2016, with total IA obligations of approximately $8.6 billion. FEMA received 294 IA declaration requests from calendar years 2008 through 2016. Of these, the President declared 168 requests (57 percent), and 51 percent of these declarations were from Regions IV and VI, as shown in table 2. Additionally, of the 126 IA declaration requests denied by the President, Regions X and IX had the highest percentage of denials, at 71 percent (10 out of 14) and 67 percent (12 out of 18), respectively, and Region I had the lowest percentage of denials at 13 percent (2 out of 15), as shown in table 3.
See appendix I for the number of IA declarations requested, declared, and denied by states and tribes from each FEMA region for disaster declarations requested from calendar years 2008 through 2016. According to a FEMA headquarters official, when a disaster declaration is denied, FEMA sends a denial letter to states or tribes based on the review of all the information available. The letter generally states that the damage was not of such severity and magnitude as to be beyond the capabilities of the state, affected local governments, and voluntary agencies, and accordingly the supplemental federal assistance is not necessary. Of the emergency management officials we interviewed in 11 states, officials in five states reported that FEMA provided a rationale behind the denial, while officials in three states reported that no rationale was provided. Among the various types of disasters for which IA declaration requests were received, severe storms, flooding, and tornados accounted for the highest number of IA requests, with drought, fishery closure, and contaminated water being the least common, as shown in table 4. FEMA IA Obligations Varied by Region and State FEMA obligated a total of approximately $8.6 billion in IA for disaster declarations made from calendar years 2008 through 2016. These actual obligations were provided to 46 states and they ranged from less than $1 million to more than $1 billion as shown in figure 2. See appendix II for FEMA's IA actual obligations by state and type of disasters for disaster declarations made from calendar years 2008 through 2016. Additionally, actual obligations for IA declarations made from calendar years 2008 through 2016 varied greatly by FEMA region, as also shown in figure 3. For example, FEMA Region VI had the highest obligations at around $3.3 billion. Region X had the lowest obligations at $24.8 million. As shown in table 5, the amount of obligations for disasters declarations also varied greatly by state.
For example, Louisiana had the highest obligations at approximately $2 billion, followed by New York and Texas at about $1.3 billion and $1.1 billion, respectively. The state with the lowest obligations was the U.S. Virgin Islands at about $2,100. FEMA Regions Varied in How They Considered IA Regulatory Factors and Did Not Consistently Obtain and Document Information on All Elements of These Factors FEMA Regions Varied in Their Consideration of the IA Regulatory Factors Based on Disaster Circumstances Six of FEMA’s 10 regional offices reported using all six regulatory factors when evaluating states’ or tribes’ IA declaration requests. Officials from the other 4 regions reported using five of the six factors, with the exception being the average amount of individual assistance by state factor. These officials noted that they do not use this factor because FEMA considers the factor to be outdated or they consider all of the factors holistically. Officials from FEMA’s regional offices also generally reported that the extent to which they consider the six IA regulatory factors equally in all cases varies, depending on the circumstances of the related disaster. Specifically, officials from 7 of the 10 regions stated that they use the regulatory factors on a case-by-case basis as certain factors are more relevant than others based on the disaster. For example, if a tornado hits a rural community and completely destroys all properties within the community with no death or injury, then the regulatory factor for trauma may not be as applicable, while the concentration of damages regulatory factor would have greater relevance. On the other hand, if a tornado hits the center of a town resulting in damages with death and injuries, then the trauma regulatory factor would become more important to consider. 
Additionally, officials in 3 of the 10 regions reported that in addition to the six regulatory factors, they also take into account institutional knowledge and staff experience when evaluating the regulatory factors. For example, officials in one region stated that their staff have more than 10 years of IA declaration experience, and as such, they are familiar with the extent of the information needed and collect the information accordingly. FEMA Regions Did Not Consistently Obtain and Document Information on All Elements of the IA Regulatory Factors in RVARs Based on our analysis of RVARs from July 2012 through December 2016 used to recommend approving or denying IA requests, FEMA regional offices did not consistently obtain and document information on all elements of the IA regulatory factors. As described earlier, FEMA regions are to use the RVAR to document information on the IA factors and to recommend to the FEMA administrator whether a disaster should be declared. According to FEMA headquarters officials, FEMA developed the RVAR template in June 2012 to help ensure consistency across regions when making recommendations to headquarters on IA declaration requests. Officials stated that prior to the template, information on the six factors was mainly provided in narrative format. The new template listed the various elements found within each of the six regulatory factors, guiding the regional offices to provide information based on those elements. For example, instead of providing a general narrative on the trauma factor, the new template listed the elements to be provided under trauma, such as the number of injuries and deaths, as well as information on power outages and disruption of other community functions and services. Also, instead of summarizing the concentration of damages factor, the template allowed regional offices to categorize the damage concentration as low, medium, high, or extreme. 
Furthermore, the template also provided a uniform format to present quantitative information such as the number of homes destroyed; whether home damages are major or minor; the number of homes affected; and level of home ownership. See appendix III for a sample RVAR template. We analyzed 81 RVARs developed by the 10 FEMA regions from July 2012 through December 2016 and found that regions did not consistently obtain and document information on all elements related to each of the six regulatory factors in their RVARs. As shown in table 6, all 81 RVARs had at least some elements documented but not all for each of the IA regulatory factors. For example, for the IA concentration of damages regulatory factor, the six elements to be addressed include the number of homes destroyed, damaged or affected, damage concentration, and damage to critical facilities. While 44 of the 81 RVARs documented all of the six elements, 37 documented some but not all of the elements. Similarly, for the trauma regulatory factor, the four elements to be addressed include injuries, death, power outages, and disruption of community functions. While 30 of the 81 RVARs documented all of the four elements, 51 documented some but not all of the elements. For the insurance coverage factor, while five RVARs documented all of the elements, 73 RVARs documented some but not all of the elements. Elements under this factor include home ownership, insurance, and flood insurance, when applicable. None of the six regulatory factors were fully documented across all RVARs. See appendix IV for detailed information on the extent to which all of the elements of the six regulatory factors were documented in the RVARs from July 2012 through December 2016. FEMA headquarters officials acknowledged that information related to all the elements for each of the IA regulatory factors was missing from the RVARs.
They stated that they had not collected all information on all factors because one factor may have more weight than another based on the specific incident that has occurred. However, they also indicated that they do not fully know and have not evaluated all of the reasons why a region may have omitted information on an element of a factor. FEMA headquarters officials agreed that having complete information on all elements of the regulatory factors in the RVARs would assist in their recommendation process. Standards for Internal Control in the Federal Government suggest that agencies should establish and operate monitoring activities to ensure that internal controls—such as the documentation of all of the elements of the IA regulatory factors FEMA regions considered—are effective, and to take corrective actions as appropriate. Because it is unclear why regions are not completely documenting all elements related to the current six regulatory factors, such an evaluation could help FEMA identify whether any corrective steps are needed. Doing so could help FEMA ensure it is achieving its stated goals in providing consistency in the evaluation process and in the types of factors it considers. FEMA and States Reported Challenges in the IA Declaration Process, and FEMA Is Revising the Regulatory Factors Used to Assess Declaration Requests FEMA and State Officials Reported Both Positive Relationships and Some Challenges in the IA Declaration Process Officials we interviewed in 9 of the 10 FEMA regions and state emergency management offices in all 11 states reported the positive relationship they maintain with each other as a strength in the IA declaration process. For example, both FEMA regional officials and state emergency management officials stated that they have a good working relationship and are in regular communication via telephone or in-person meetings with each other. 
Also, state emergency management officials we spoke to stated that whenever they are in need of assistance, they know they can reach out to FEMA regional officials for assistance. However, FEMA regional and state emergency management officials we spoke to also reported various challenges with the process. These include the subjective nature of the IA regulatory factors given the lack of eligibility thresholds, the lack of transparency in the decision-making process, and difficulty gathering information on IA regulatory factors. Subjective nature of the IA factors and lack of eligibility thresholds. Officials from 9 of 10 FEMA regions stated that the subjective nature of the IA program is a challenge, and officials in 6 of the 10 regions also said they found the lack of eligibility thresholds a challenge. An official in one region stated that unlike FEMA's Public Assistance program, which has minimum thresholds for eligibility, it is unclear when states should apply for IA funds. Under the Public Assistance program, for example, for states or tribes to qualify for assistance, they must demonstrate that they have sustained a minimum of $1 million in damages and the impact of damages must amount to $1.00 per capita in the state. An official in another region explained that although the subjectivity of the IA factors provides flexibility in determining the type of IA program needed, having some quantifiable criteria could help officials explain to states why their requests were denied or approved. Similarly, officials we interviewed in 7 of the 11 states said they found the subjective nature of the factors with no threshold to be a challenge. A state emergency management official in one state said this subjectivity makes it difficult to determine whether the state should make an IA request.
A state emergency management official in another state reported that the subjectivity can cause the IA declaration process to be inconsistent, and it is not always clear how or why certain declarations were approved and others were not. Further, a state emergency management official in an additional state also pointed to the subjective nature of the factors with no threshold as a reason for not being able to provide a more detailed rationale behind a declaration denial. To illustrate this, table 7 shows how four states requested IA declarations related to the same tornado in 2012 and varied in what they reported across the six IA factors, such as the levels of damages incurred, special populations among their residents, and insurance coverage. Two of these four states—Kentucky and Indiana—received IA declarations and the other two—Ohio and Illinois—were denied. Lack of transparency. Another challenge reported by FEMA regional and state emergency management officials was the lack of transparency in how FEMA evaluates and provides a recommendation to the President on whether a declaration is warranted. For example, officials we interviewed in 4 of 10 regions indicated the lack of transparency as a challenge. A FEMA official in one region stated that the region would like more transparency regarding what FEMA headquarters recommends to the President and whether the President’s decision aligns with FEMA’s recommendation. State emergency management officials we interviewed in 10 of 11 states also reported that lack of transparency with the IA process is a challenge. For example, an emergency management official in one state said it is not clear how or if FEMA considers all of the factors. Also, an emergency management official in another state reported that it was unclear to him why his state’s declaration request was denied while the requests of other states with similar incidents were declared. Difficulty gathering information on IA regulatory factors. 
Officials in 4 of 10 FEMA regions reported difficulty gathering information, such as income or insurance coverage, as a challenge. An official in one region stated that it is difficult to obtain information related to IA factors from states. For example, the official said that calculating the concentration of damages is difficult absent technical guidance from FEMA headquarters, as the current guidance accounts only for the number of damaged structures, not the impact of the damage. Further, officials in two FEMA regions stated that states lack a dedicated IA official, making it difficult for state officials, who play multiple roles, to provide the necessary information related to the IA factors in their IA declaration request. Additionally, a state emergency management official in one state also reported that lack of staff resources in her state makes it difficult to verify all the local damage assessments prior to making a declaration request. FEMA Is Taking Steps to Revise the IA Regulatory Factors Pursuant to the Sandy Recovery Improvement Act of 2013, in November 2015, FEMA issued a Notice of Proposed Rulemaking to revise the six current IA regulatory factors to the following proposed factors: state fiscal capacity and resource availability; uninsured home and personal property losses; disaster-impacted population profile; impact to community infrastructure; casualties; and disaster-related unemployment. According to FEMA headquarters officials, the revisions aim to provide more objective criteria, clarify the threshold for eligibility, and speed the declaration process. The officials said the proposed rule also seeks to provide additional clarity and guidance for all the established factors. Table 8 shows FEMA's description of current and proposed IA factors. FEMA received public comments from 14 states in the Federal Register during the comment period for the proposed rule and proposed guidance.
The 14 states expressed concern about the proposed factor for state fiscal capacity and resource availability, including the reliability and relevance of data sources such as total taxable resources. These states expressed concern that the data collection necessary to meet the new requirements would fall upon them, adding to the cost burden of completing an IA disaster declaration request. They also explained that the use of total taxable resources and other similar data is not an effective way to assess a state’s current ability to provide resources following a disaster. These states also indicated that data points such as total taxable resources and per capita personal income, which would be used to evaluate state fiscal capacity, are outdated and inaccurate and would be an inefficient way to evaluate a state’s true fiscal capacity to respond to a disaster. Regarding the other five proposed factors, several states in their comments raised questions about ambiguities in interpreting the factors or the feasibility and cost of gathering related data. For example, five states expressed concern that the data required for the disaster-impacted population factor would be a cost burden to the state or would be inappropriate for evaluation. Additionally, two states said that unemployment related to a disaster incident, as required under the disaster-related unemployment factor, would be hard to quantify in the first 30 days following a disaster. They stated that this was especially an issue given that states work to submit an IA disaster declaration request as soon as possible following a disaster. According to the Office of Management and Budget’s Office of Information and Regulatory Affairs website, the projected date for finalization of the proposed rule is September 2018; however, as of April 2018, FEMA officials stated that they were not certain whether that timeframe would be met.
Until the proposed rule is finalized, we will not know the extent to which the various challenges FEMA regions and state officials raised in our interviews and in comments on the proposed rule will be addressed. Conclusions FEMA has obligated over $8.6 billion nationwide in IA from calendar years 2008 through 2016, highlighting the importance of FEMA’s evaluation of states’ and tribes’ IA declaration requests. FEMA’s regional offices evaluate the request and make a regional recommendation through the Regional Administrator’s Validation and Recommendation, which documents information on all relevant IA regulatory factors. FEMA has developed the Regional Administrator’s Validation and Recommendation to ensure regions consistently obtain and document the information needed by FEMA to make a disaster declaration recommendation to the President based on the IA regulatory factors. However, FEMA’s regional offices do not consistently obtain and document information on all elements of the current IA regulatory factors. Because it is unclear why regions are not always documenting all of the elements related to these factors, evaluating the reasons why could help FEMA identify if any corrective steps are needed. Doing so could also help FEMA ensure it is meeting its stated goals in providing consistency in the evaluation process and in the types of factors it considers. Recommendation for Executive Action We recommend that the Administrator of FEMA evaluate why regions are not completing the Regional Administrator’s Validation and Recommendations for each element of the current IA regulatory factors and take corrective steps, if necessary. Agency Comments and Our Evaluation We provided a draft of this report to DHS for its review and comment. DHS provided written comments, which are summarized below and reproduced in full in appendix V. DHS concurred with the recommendation and described planned actions to address it. 
In addition, DHS provided written technical comments, which we incorporated into the report as appropriate. DHS concurred with our recommendation that FEMA evaluate why regions are not completing the Regional Administrator’s Validation and Recommendations for each element of the IA regulatory factors and take corrective steps, if necessary. DHS stated that a FEMA working group consisting of headquarters stakeholders will draft survey questions for FEMA region officials to identify the common reasons why an element of an IA regulatory factor may not be addressed within a RVAR. According to DHS, the working group will also analyze, assess, and present the findings of the survey responses to FEMA senior leadership, and if needed, FEMA will develop and send a memorandum to the regions with additional guidance regarding the appropriate preparation of RVARs. DHS stated that the estimated completion date is in the fall of 2018. These actions, if implemented effectively, should address the intent of our recommendation. We will send copies of this report to the Secretary of Homeland Security, the FEMA Administrator, and the appropriate congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Appendix I: Individual Assistance Declarations Requested, Declared, and Denied, Calendar Years 2008-2016 Table 9 provides the total number of Individual Assistance declaration requests made, declared, and denied, by region, state, and tribe for disaster declarations requested from calendar years 2008 through 2016. 
Appendix II: Individual Assistance Actual Obligations for Declarations Made, Calendar Years 2008-2016 Table 10 provides the Federal Emergency Management Agency’s (FEMA) Individual Assistance (IA) actual obligations for declarations made from calendar years 2008 through 2016 by state and type of disaster. Appendix III: Regional Administrator’s Validation and Recommendation Template As part of the Federal Emergency Management Agency’s (FEMA) declaration process, FEMA’s regional offices are to evaluate states’ or tribes’ declaration requests, including the IA declaration request, make a recommendation, called the Regional Administrator’s Validation and Recommendation (RVAR), and submit it to FEMA headquarters. In June 2012, FEMA headquarters issued a template for FEMA regional offices to use in developing the RVAR as identified in figure 3. Appendix IV: Information on the Elements of the Six Individual Assistance Regulatory Factors Tables 11 through 16 provide information on each element of the six Individual Assistance (IA) regulatory factors documented in the Regional Administrator’s Validation and Recommendation (RVAR) from July 2012 through December 2016 by the Federal Emergency Management Agency region. Appendix V: Comments from the Department of Homeland Security Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Aditi Archer (Assistant Director), Su Jin Yon (Analyst-In-Charge), Hiwotte Amare, Eric Hauswirth, Susan Hsu, Jun S. (Joyce) Kang, Christopher Keisling, Heidi Nielson, Hadley Nobles, Anne Rhodes-Kline, and Jerome (Jerry) Sandau made significant contributions to this report.
Why GAO Did This Study FEMA's IA program provides help to individuals to meet their immediate needs after a disaster, such as shelter and medical expenses. When a state, U.S. territory, or tribe requests IA through a federal disaster declaration, FEMA evaluates the request against regulatory factors, such as concentration of damages, and provides a recommendation to the President, who makes a final declaration decision. GAO was asked to review FEMA's IA declaration process. This report examines (1) the number of IA declaration requests received, declared, and denied, and IA actual obligations from calendar years 2008 through 2016, (2) the extent to which FEMA accounts for the regulatory factors when evaluating IA requests, and (3) any challenges FEMA regions and select states reported on the declaration process and factors and any FEMA actions to revise them. GAO reviewed FEMA's policies, IA declaration requests and obligation data, and FEMA's RVARs from July 2012 through December 2016, the most recent years for which data were available. GAO also reviewed proposed rulemaking comments and interviewed FEMA officials from all 10 regions and 11 state emergency management offices selected based on declaration requests and other factors. What GAO Found From calendar years 2008 through 2016, the Department of Homeland Security's (DHS) Federal Emergency Management Agency (FEMA) received 294 Individual Assistance (IA) declaration requests from states, U.S. territories, and tribes to help individuals meet their immediate needs after a disaster. Of these, the President declared 168 and denied 126 requests. Across the various types of IA declaration requests, severe storms (190) were the most common disaster type and drought (1) was among the least common. FEMA obligated about $8.6 billion in IA for disaster declarations during this period.
GAO found that FEMA regions did not consistently obtain and document information on all elements of established IA regulatory factors when making IA recommendations to headquarters. Following a declaration request, a FEMA region is to prepare a Regional Administrator's Validation and Recommendation (RVAR)—a document designed to include data on each of the six IA regulatory factors for each declaration request as well as the regional administrator's recommendation. GAO reviewed all 81 RVARs from July 2012—the date FEMA began using the new RVAR template—through December 2016. GAO found that regions did not consistently obtain and document information for the elements required under the six regulatory factors (see table). For example, only 44 of the 81 RVARs documented all elements under the concentration of damage factor. By evaluating why regions are not completing all elements of each current IA regulatory factor, FEMA could identify whether any corrective steps are needed. Officials from the 10 FEMA regions and 11 states GAO interviewed reported positive relationships with each other, but also cited various challenges with the IA declaration process and regulatory factors. For example, these officials told GAO that there are no established minimum thresholds for IA, making final determinations more subjective and the rationale behind denials unclear. However, as required by the Sandy Recovery Improvement Act of 2013, FEMA has taken steps to revise the IA factors by issuing a notice of proposed rulemaking. According to FEMA, the proposed rule aims to provide more objective criteria, clarify the threshold for eligibility, and speed up the IA declaration process. As of April 2018, the proposed rule was still under consideration. According to FEMA officials, they plan to finalize the rule in late 2018; therefore, it is too early to know the extent to which it will address these challenges.
What GAO Recommends GAO recommends that FEMA evaluate why regions are not completing the RVARs for each element of the current IA regulatory factors and take corrective steps, if necessary. DHS concurred with the recommendation.
Background FEMA’s Public Assistance Process The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), as amended, defines FEMA’s role during disaster response and recovery. One of the principal programs that FEMA operates to fulfill its role is the PA program. PA is a complex and multistep grant program administered through a partnership between FEMA and states, which pass these funds along to eligible local grant applicants. Thus, PA entails an extensive paperwork and review process between FEMA and the state based on specific eligibility rules that outline the types of damage that can be reimbursed by the federal government and steps that federal, state, local, territorial, and tribal governments as well as certain nonprofit organizations must take in order to document eligibility. The complexity of the process led FEMA to re-engineer the PA program, which FEMA has referred to as its “new delivery model.” FEMA began testing the new delivery model at select disaster locations in 2015, in preparation for implementing it nationwide for all new disasters. On September 12, 2017, FEMA announced that the new delivery model would be used in all future disasters unless determined infeasible in a particular instance. The process begins after FEMA determines that the applicant meets eligibility requirements. FEMA then works with the state and the applicant to develop a project worksheet describing the scope of work and estimated cost. Once FEMA and the applicant agree on the damage assessment, scope of work, and estimated costs, the PA grant obligation is determined. After FEMA approves a project, funds are obligated—that is, they are made available—to the state recipient, which, in turn, passes the funds along to applicants. Applicants may appeal project decisions if they disagree with FEMA’s decisions on project eligibility, scope of damage, or cost estimates. 
Appealable decisions can occur at various times during the PA grant process, including during project closeout as long as they meet applicable time limits. FEMA’s PA Appeals Process Figure 2 summarizes the first- and second-level appeals process under FEMA’s PA program. The first-level appeals process begins after FEMA makes its determination on a project for PA grant assistance. Within 60 days of receiving FEMA’s initial determination, the applicant must file an appeal through the state to the relevant FEMA regional office. The state must forward the appeal and a written recommendation to the relevant FEMA regional office within 60 days. In reviewing the first-level appeal before forwarding it to FEMA, the state has discretion to support or oppose all or part of the applicant’s position in the appeal. Under the Stafford Act, the FEMA regional office shall render a decision within 90 days from the date it received the first-level appeal from the state. The PA appeals process can take longer if regional officials issue a request for information (RFI) to the applicant or request technical advice from subject-matter experts. According to a senior PAAB official, a regional office may issue an RFI or seek technical advice when an applicant’s appeal is incomplete, lacks referenced documentation, or raises additional eligibility concerns. The regional office may issue multiple RFIs prior to rendering a final decision on an appeal. Within 90 days following the receipt of the requested additional information or following expiration of the period for providing the information, FEMA is to notify the state in writing of the disposition of the appeal. Regional Administrators are responsible for authorizing a final decision on a first-level appeal. A decision may result in an appeal being granted, partially granted, or denied. An appeal is considered granted when FEMA has approved the relief requested by the applicant as part of the appeal. 
An appeal is considered partially granted when FEMA has approved a portion of the relief requested by the applicant. An appeal is considered denied when FEMA has decided not to approve the relief requested by the applicant. If the regional office is considering denying or partially granting a first appeal, it must issue an RFI to provide applicants with a final opportunity to supplement the administrative record (i.e., the documents and materials considered in processing a first-level appeal), which closes upon issuing a first-level appeal decision. According to a senior PAAB official, this process adds additional time to first-level appeal processing, but ensures that FEMA has considered all relevant and applicable documentation. The applicant may file a second-level appeal through the state within 60 days of receiving a first-level appeal decision. The second-level appeal must explain why the applicant believes the original determination is inconsistent with law or policy and the monetary amount in dispute. The state then has 60 days to provide a written recommendation to FEMA. In reviewing the second-level appeal, just as with the first-level appeal, the state has discretion to support or oppose all or part of the applicant’s position in the appeal. The FEMA Assistant Administrator for Recovery or the PA Division Director through a delegation of authority shall render a decision within 90 days of receipt of the second-level appeal from the state. All second-level appeal decisions are posted to FEMA’s website, so applicants can review the previous decisions. As is the case with first-level appeals, the PA appeals process can take longer if PAAB officials request additional information or technical advice on an appeal. These requests must also include a date by which the information must be provided. 
According to a senior PAAB official, RFIs are seldom issued for second-level appeals because the administrative record is closed after a decision is rendered on a first-level appeal. Similarly, this official told us that technical advice is rarely sought for second-level appeals because such issues are typically explored during the first-level appeal process. Within 90 days following the receipt of the requested additional information or following expiration of the period for providing the information, FEMA is to notify the state in writing of the disposition of the appeal. FEMA’s response to a second-level appeal is the last and final agency decision in the appeals process. Organization of FEMA’s PA Appeals Program Located within the Recovery Directorate, PAAB maintains overall responsibility for administering and overseeing FEMA’s PA appeals program. Among other things, PAAB is responsible for ensuring that all appeal decisions are issued within regulatory timelines by developing and maintaining SOPs; arranging for supplemental staff support as needed; providing regular updates for both first- and second-level appeal decisions through a range of communications; and providing training to certify PA program staff on appeals processing. PA program appeals staff in each of FEMA’s 10 regional offices are responsible for processing first-level appeals, while PAAB staff in FEMA’s Headquarters office are responsible for processing second-level appeals. Accordingly, each regional office is required to follow FEMA’s Directive, Manual, and Regional SOP for processing first-level appeals, consistent with those established for second-level appeals. FEMA regional offices are also required to forward all incoming second-level appeals to PAAB. In addition, regional office staff must, within 3 business days of receiving a first-level appeal from a state, provide an electronic copy of the appeal to the PAAB via FEMA’s shared workspace SharePoint site.
As noted in FEMA’s Recovery Directorate Appeals Manual, this step enables PAAB staff to identify and track appeals issues and trends in development across all FEMA regions. The roles and responsibilities for both first- and second-level appeals are defined in FEMA’s SOPs. For example, certified appeals analysts are responsible for reviewing incoming appeals for completeness, researching and drafting appeal decisions, and generating RFIs. Lead appeals analysts are the first-line reviewers of appeal decisions and RFIs, and provide guidance on PA program and policy issues, coordinate appeals assignments, and review work of appeals analysts. Further, appeals coordinators are responsible for receiving incoming appeals, tracking the processing of those appeals, updating the appeal status, and processing other appeals-related correspondence and reports. Prior Reviews Examining the PA Appeals Process We have identified a number of issues related to FEMA’s management of the PA appeals program in our prior audit work, as has DHS’s OIG. In our 2008 review of FEMA’s administration of the PA program following Gulf Coast Hurricanes Katrina and Rita, we identified challenges related to applicants’ experience with appeal processing delays and that FEMA often did not make decisions on appeals within the 90-day statutory time frame. Other challenges identified were that FEMA did not inform some applicants of the status of their appeal, or, in some cases, assure them of the independence of the FEMA officials making appeal decisions. Specifically, some applicants perceived there to be a conflict of interest because the PA program staff responsible for reviewing appeals was the same staff that had made the PA project decision that was being appealed. We did not make recommendations to FEMA to address these challenges in our 2008 review, but rather described the challenges as part of the status of overall Gulf Coast hurricane recovery efforts.
In 2011, DHS’s OIG conducted a review of FEMA’s PA appeals process and made a number of recommendations aimed at improving aspects of the process, including the timeliness of appeals processing, appeals staffing, and the accuracy of appeals data. As in our 2008 review, the OIG identified appeal processing delays occurring at both FEMA regional offices and at headquarters. For example, the report found that appeals were left open for long periods of time and that some regional offices as well as FEMA headquarters took more than 90 days to issue a decision on first- and second-level appeals. Further, the OIG review found that staffing approaches employed by individual regional offices contributed to processing delays and varying processing timeframes. For example, the management and processing of first-level appeals varied by FEMA regional office in that some regional offices assigned staff specifically to review appeals, while other offices assigned staff to appeals processing as part of their other responsibilities within the PA Program, such as determining eligibility for PA assistance. Further, second-level appeals were processed by various offices within FEMA headquarters, and FEMA had not established guidelines to complete work within a specific timeframe. Moreover, the OIG review found inaccuracies with FEMA’s system for tracking appeal processing times for second-level appeals, resulting in unreliable information being reported to FEMA management regarding compliance with the 90-day statutory time frame. Lastly, the OIG reported that some applicants had been unable to obtain information on the status of their appeals and that FEMA did not provide meaningful feedback to resolve applicants’ inquiries. 
Weaknesses Exist in FEMA’s Oversight of Data Quality, but Corrected FEMA Data Showed Fluctuations in Appeal Inventory and Delays in Processing Our review of FEMA data that track first- and second-level appeals showed weaknesses in the agency’s data quality practices that affect program oversight. For example, we found that FEMA regional offices do not track first-level appeals data consistently or update this data regularly, resulting in missing data entries. Further, we found that FEMA’s appeal tracking process does not ensure data quality, limiting FEMA’s ability to use the data for making decisions on and improvements to the PA appeals process. During our review, we discussed with FEMA officials the discrepancies we found with these appeals data. FEMA officials acknowledged these data quality issues and provided us with corrected data to address these discrepancies for our analysis in this report. Our analysis of the corrected FEMA data showed that, between January 2014 and July 2017, FEMA received over 1,400 first- and second-level appeals with amounts in dispute totaling about $1.5 billion. Across all years, first-level appeals accounted for the majority of appeals, though the number of appeals fluctuated widely each year. Over the same period, only a small percentage of first- and second-level appeals were processed within the 90-day statutory time frame. Weaknesses in FEMA’s Tracking and Data Quality Practices Affect Program Oversight To administer and oversee the PA appeals program, FEMA collects and tracks information on first- and second-level appeals. Based on FEMA’s SOP, the agency uses this information to identify trends throughout the appeals process and identify areas in need of improvement. Specifically, PAAB uses two Excel spreadsheets for collecting and analyzing first- and second-level appeals data.
The spreadsheet for collecting second-level appeals data is updated and maintained by PAAB, while the spreadsheet for first-level appeals is based on input from FEMA’s 10 regional offices. Based on our detailed review of the spreadsheets, they contain numerous data fields on the status and outcomes of first-level appeals, such as the date the regional office received the appeal, the date an RFI was issued, the date the Regional Administrator signed the decision, the amounts being disputed by the applicant, and keyword information regarding the subject of the appeal. PAAB requests that regional offices update appeal information in the first-level appeal spreadsheet as changes occur on an appeal. PAAB then uses this data to assess trends in regional office appeals processing, which it includes in various performance and other internal reports that are shared with FEMA management and used to monitor the program. According to PAAB officials, such information provides valuable support to PAAB as well as the PA program by sharing information about filings, progress, and PA program decision making. However, while PAAB’s tracking efforts help maintain visibility over and provide some monitoring of the appeals processing, we found that data fields for first-level appeals were not consistently reported or updated and that PAAB has no processes to ensure the quality of these data. As a result, data on first-level appeals may not have the accuracy needed for effective reporting and oversight efforts. FEMA Regional Offices Do Not Track Appeals Information Consistently or Update First-Level Appeal Information Regularly Our review of first-level appeals data showed that, between January 2014 and July 2017, regional offices did not consistently report first-level appeal information for a number of the key data fields in the PAAB first-level appeal tracking spreadsheet. Specifically, we found missing entries for the majority of the spreadsheet’s 50 data fields.
For example, we found that about one-third of the time, regional offices had not completed the data field for amounts being disputed by the applicant for pending appeals or indicated whether or not money was in dispute in the appeal. We also found that the regional offices had generally not entered the date that the regional appeal staff had completed an initial review of the appeal—99 percent of entries were missing for this field. In another example, the data field that captures keywords was missing in over 33 percent of data entries. PAAB officials told us that keywords are an important tool for understanding the root causes of an appeal. Further, we found a number of missing data entries for key dates for one regional office in particular. Specifically, this office had not recorded entries for any of the data fields related to key dates in the appeal process, such as the date the first-level appeal was assigned to an appeals analyst, the date the appeal was reviewed by the Regional Administrator, and the date the first-level appeal decision was sent to and received by the applicant. PAAB officials told us that PAAB uses these dates to calculate appeal processing times as part of its effort to evaluate trends in appeal information and identify potential areas for improvement, including timeliness. However, officials from this regional office told us the office does not consistently update information in the PAAB first-level appeal tracking spreadsheet and does not consider it a priority. Rather, the office considers the actual processing of first-level appeals a priority. In addition, our analysis of first-level appeals data also showed that there was limited standardization of recording entries within fields. For example, officials in one of the three regional offices in our review told us that, in some instances, they combine first-level appeals that involve direct administrative costs and record them as a single appeal. 
However, the other two regional offices in our review told us they do not combine individual appeals that involve direct administrative costs. Rather, they count each as a separate appeal. The lack of standardization in the way appeals are counted could result in some types of appeals being over- or under-reported. More specifically, these inconsistencies may affect PAAB’s ability to compare appeal processing capacity between regional offices and accurately report the regions’ performance. FEMA’s Appeal Tracking Process Does Not Ensure Data Quality PAAB officials acknowledged inconsistencies in first-level appeals reporting, but noted that under FEMA’s SOP, the regional offices are responsible for entering first-level appeal information. According to PAAB officials, this responsibility is emphasized during training sessions with appeal staff. However, we found that FEMA has no automated data entry checks for information the regions enter into PAAB’s first-level appeal tracking spreadsheet and does not monitor data fields for missing or conflicting data. Regional offices do not have a means for electronically uploading first-level appeal information to PAAB and must manually input data into the spreadsheet. PAAB’s process then simply confirms receipt of the information through an email exchange with the regional office staff who manually input the information. PAAB officials told us that they rely on regional office appeal staff to confirm and validate the first-level appeals data that are provided to PAAB for internal reporting. However, PAAB has no independent and consistent method of verifying the accuracy of the appeals data reported to it by the regional offices. PAAB officials also noted that there is no systematic process or method to identify these errors and generate an error report. 
Moreover, another limitation that we identified in the spreadsheet used by the regional offices is that it is not clear what blank data fields represent—that is, whether data do not exist or whether existing data were not recorded. PAAB officials acknowledged that blank data fields in the first-level appeal tracking spreadsheet created reporting challenges, such as whether the data field was not applicable to a particular appeal, the appeal staff for a particular region did not collect this information, or existing information was not recorded. We also identified a number of other data entries that were erroneously recorded as first-level appeals. Specifically, the information entered related to requests for adjustments to PA project funding and should not have been entered into the tracking spreadsheets as appeals. Standards for Internal Control in the Federal Government advises management to process data into quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. Additionally, management should evaluate processed information, make revisions when necessary so that the information is quality information, and use the information to make informed decisions. By developing and implementing processes and procedures to ensure a uniform and consistent approach for tracking first-level appeals data and better integrating regional trackers with PAAB’s own first-level appeals tracker, PAAB will have greater assurance that it is collecting the comprehensive and complete appeals processing performance information it needs from the regional offices. Further, by identifying data discrepancies and other anomalies in its data queries and the resulting datasets, PAAB may be able to identify overall weaknesses in its data recording process, thereby allowing it to more accurately report on first-level appeals information.
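One way to implement the kind of systematic error check that PAAB currently lacks would be a script that scans each tracked appeal record for missing required fields and conflicting dates and produces an error report. The sketch below is illustrative only; the field names (`date_received`, `date_decided`, `amount_in_dispute`, `keywords`) are hypothetical stand-ins, not FEMA's actual spreadsheet schema:

```python
# Illustrative data-quality check for an appeal tracking spreadsheet.
# Field names here are hypothetical examples, not FEMA's actual schema.
from datetime import date

REQUIRED_FIELDS = ["date_received", "amount_in_dispute", "keywords"]

def check_appeal_record(record):
    """Return a list of data-quality errors found in one appeal record."""
    errors = []
    # Flag missing entries so a blank cell is surfaced explicitly
    # rather than left ambiguous (missing vs. not applicable).
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            errors.append(f"missing required field: {field}")
    # Flag conflicting dates (a decision recorded before receipt).
    received = record.get("date_received")
    decided = record.get("date_decided")
    if received and decided and decided < received:
        errors.append("date_decided precedes date_received")
    return errors

def error_report(records):
    """Return {row_number: [errors]} for every record that fails a check."""
    return {i: errs for i, rec in enumerate(records, start=1)
            if (errs := check_appeal_record(rec))}
```

Run against an exported tracker, a report like this would force each blank field to be either filled or explicitly marked not applicable, resolving the ambiguity the regional spreadsheets currently leave open.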
Without obtaining quality appeals data, FEMA will not be able to identify existing gaps in its appeals information and address areas in need of improvement, such as meeting statutory time frames. Corrected FEMA Data Showed Fluctuations in Appeal Inventory After we shared our concerns about the appeals data with FEMA officials, they corrected the errors in their data and provided us a corrected data set to use for our analysis in this report. Based on our analysis of this corrected data, we determined that, from January 2014 to July 2017, FEMA received 1,445 first- and second-level appeals with amounts in dispute totaling about $1.5 billion. Across all years, first-level appeals accounted for the majority of appeals, though the number of appeals fluctuated widely between years. (See figure 3.) FEMA officials told us that the number of appeals they received has varied year to year and that increases or decreases in appeals are largely a function of the number and severity of disaster events. That is, the greater the number of disasters declared and the more extensive the damage, the greater the number of PA program grants FEMA may issue to applicants, which, in turn, may affect the likelihood that an applicant will appeal a FEMA decision regarding a grant. FEMA issued a decision on 953 of the appeals it received between January 2014 and July 2017. As shown in table 1, another 349 appeals were pending and awaiting a decision as of July 2017. The remaining 143 appeals were withdrawn by the applicant during the appeals process. Our analysis of the corrected FEMA data also found that, for appeals received between January 2014 and July 2017, total first- and second-level pending and decided appeals involved amounts in dispute totaling over $1.3 billion (excluding the 143 appeals that were withdrawn by the applicant during the appeals process).
As shown in figure 4, at least a third of both first- and second-level pending and decided appeals (35 percent and 44 percent, respectively) involved amounts in dispute that ranged from $1 to $99,999. Less than 10 percent of both first- and second-level pending and decided appeals (9 percent and 8 percent, respectively) did not involve monetary amounts in dispute. In rendering a final decision on an appeal, FEMA can grant, partially grant, or deny the appeal. Our analysis showed that FEMA granted nearly a third of the 779 first-level appeals filed, awarding applicants over $85 million. As shown in figure 5, FEMA also partially granted about 19 percent of first-level appeals filed, which involved amounts in dispute totaling over $63 million. Further, figure 5 shows that over one-third of the 174 second-level appeals were either granted or partially granted. Specifically, FEMA granted about 26 percent of second-level appeals filed, awarding over $43 million, while the agency partially granted about 7 percent of second-level appeals filed, involving amounts in dispute totaling almost $19 million. FEMA Exceeded Statutory Processing Times Our analysis of the corrected FEMA appeal data showed that, on average, FEMA took more than three times the 90-day statutory time frame to process an appeal, which includes rendering a decision. Specifically, for first- and second-level appeals that FEMA received between January 2014 and July 2017 and that FEMA decided during the same period, FEMA’s average processing time was 297 days. The processing time for decided first-level appeals averaged 293 days, while the processing time for decided second-level appeals averaged 313 days. Further, as shown in figure 6, only a small percentage of decided first- and second-level appeals (9 and 11 percent, respectively) were processed within the 90-day statutory time frame.
For pending appeals, we found that, at the time of our analysis in July 2017, FEMA had taken, on average, more than three times the 90-day statutory time frame for rendering decisions. Specifically, as of July 2017, FEMA had not rendered a decision on 349 appeals, which had an average processing time of 299 days. As of July 2017, the processing time for pending first-level appeals averaged 306 days, while the processing time for pending second-level appeals averaged 267 days. Figure 7 shows the ranges of processing times as of July 2017 for both first- and second-level pending appeals. Officials from PAAB and the three regional offices in our review acknowledged that they experienced challenges processing appeals within the 90-day statutory time frame. They told us that issuing RFIs to the applicant can contribute to lengthy processing delays. According to PAAB officials, issuing an RFI may contribute to long processing periods if the information relates to a complex appeal—for example, an appeal involving multiple engineering issues. An appeal decision can also be delayed if FEMA issues an RFI because an applicant submitted incomplete documentation to support an appeal. Under FEMA regulation, these requests do not count against processing times or the 90-day time frame within which FEMA is to render a decision on an appeal. However, our analysis of the corrected FEMA data showed that FEMA exceeded its statutory time frames even when it did not issue an RFI. Specifically, between January 2014 and July 2017, FEMA issued an RFI in about 59 percent—or 560—of the 953 first- and second-level appeals for which it rendered a decision. In 48 percent (267) of those decided appeals, FEMA had issued the RFI after the 90-day time frame had elapsed. FEMA did not issue RFIs for about 41 percent (393) of decided first- and second-level appeals. In 78 percent (305) of those appeals, FEMA’s processing time still exceeded the 90-day statutory time frame.
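The processing-time statistics discussed above — average days from receipt to decision, and the share of decided appeals meeting the 90-day statutory time frame — can be illustrated with a short sketch. The date pairs below are invented for the example; this is not FEMA's actual data or method.

```python
# Illustrative sketch (hypothetical figures) of computing average appeal
# processing time and the share decided within the 90-day statutory window.
from datetime import date

STATUTORY_DAYS = 90

# Hypothetical decided appeals as (received_date, decided_date) pairs.
decided_appeals = [
    (date(2015, 1, 10), date(2015, 3, 1)),   # 50 days
    (date(2015, 2, 1),  date(2015, 12, 1)),  # 303 days
    (date(2016, 5, 5),  date(2017, 4, 5)),   # 335 days
    (date(2016, 7, 1),  date(2016, 9, 9)),   # 70 days
]

# Elapsed days per appeal, via date subtraction.
durations = [(decided - received).days for received, decided in decided_appeals]
average = sum(durations) / len(durations)
within = sum(1 for days in durations if days <= STATUTORY_DAYS)

print(round(average))                                  # → 190
print(f"{within}/{len(durations)} within {STATUTORY_DAYS} days")  # → 2/4
```

Note that this simple count treats the full calendar interval as processing time; under FEMA regulation, time awaiting an applicant's RFI response does not count against the 90-day clock, so an actual calculation would need to subtract any RFI-pending intervals.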
State emergency management officials from five of our six selected states told us that they experienced long wait times for first- and second-level appeal decisions and that FEMA rarely processed appeals within the 90-day time frame required by statute. State emergency management officials further told us that such delays adversely affect applicants, such as municipalities and localities, which may wait prolonged periods to resolve project eligibility and costs related to rebuilding efforts. Delays in FEMA’s decision making may also result in additional costs to both the state and the applicant, according to these officials. For example, the state may pursue funding from an applicant if FEMA decides to deobligate funds from the applicant for PA projects that have already been completed. As discussed earlier in this report with respect to the PA process, FEMA may do this if it finds that the applicant did not meet certain PA project requirements. In these instances, the applicant may appeal FEMA’s decision, but the state may need to begin administrative proceedings against the applicant to recover or offset the deobligated funds. One state emergency manager told us that some applicants withdrew their appeals because of the prolonged delays in receiving a final decision. According to state emergency management officials, delays in FEMA’s appeal decisions can create significant challenges for local government entities, such as counties and school districts. Officials from one state provided an example of a rural school district that sought PA funding to bus displaced children who had been left homeless from damage caused by Hurricane Irene. According to relevant federal and state documents these officials provided us, these children had been moved to shelters outside of their school district and needed transportation to be able to attend school.
The school district applied to FEMA for transportation costs associated with hiring an additional bus driver to bus the children to the schools in the district. FEMA denied the school district’s request, based on its interpretation of the Stafford Act and the eligibility of costs related to emergency public transportation. The district subsequently filed a first-level appeal in November 2015. FEMA took over a year to issue a decision and, in December 2016, denied the district’s first-level appeal. State management officials told us that incurring these unanticipated transportation costs while waiting for FEMA to decide the appeal has a major effect on the school district and the community as a whole, and can lead to the elimination of school programs or staff. The school district subsequently filed a second-level appeal in February 2017. FEMA denied the appeal in August 2017. State emergency management officials we interviewed provided an additional example wherein a small town had applied for PA grant funding to rebuild a retaining wall and roadway following damage caused by Hurricane Irene. According to relevant federal and state documents officials provided us, the overflowing banks of a tributary caused a retaining wall, which protected a nearby roadway, to wash away. The roadway, which provided access to residential properties near the tributary, was significantly damaged, due to the overflow. The town requested funding to repair the roadway and to replace and extend the retaining wall another 250 feet beyond the original wall in order to protect the roadway from future flood events. FEMA approved the PA funding to repair the roadway. However, FEMA denied the town’s application for PA assistance to extend the wall beyond its original length. 
In doing so, FEMA concluded that the proposed work was ineligible for assistance because it significantly changed the retaining wall’s predisaster configuration and that such a change constituted an improved project, making it ineligible under FEMA regulations and policy. The town then filed a first-level appeal in April 2014. More than 2 years later—in June 2016—FEMA denied the town’s first-level appeal, upholding FEMA’s original determination. The town subsequently filed a second-level appeal in September 2016. Over a year later, PAAB was still reviewing the appeal. FEMA Has Taken Steps to Improve Appeals Processing, but Faces Challenges with Its Appeals Workforce FEMA has taken a number of steps to improve its management of the appeals process and respond to issues raised by us and the DHS OIG related to processing delays. As we presented earlier in this report, our 2008 review and DHS’s subsequent 2011 OIG review identified a number of organizational and procedural issues related to processing delays, staff independence, and communications with applicants. Responding to these issues, FEMA created the PAAB within the Recovery Directorate at FEMA Headquarters in late 2013, adding an auditing component to the Branch in 2014. PAAB then established a core of full-time staff at FEMA headquarters that were specifically assigned to process second-level appeals. At the same time, through the Recovery Directorate, each of FEMA’s 10 regional offices was assigned full-time staff for processing first-level appeals. Prior to PAAB, second-level appeals were processed by various offices within FEMA headquarters, while the management and processing of first-level appeals varied by FEMA regional office. Some regional offices assigned staff specifically to review appeals, while other offices assigned staff to appeal processing as part of their other responsibilities within the PA Program, such as determining eligibility for PA assistance.
In standing up PAAB, FEMA also established an SOP that describes the organizational structure of PAAB, as well as its responsibilities and the roles of its staff. The SOP also addresses procedures related to PAAB’s responsibility for managing the entire PA appeals program. These responsibilities include reporting on appeal processing performance, providing training to appeals staff, and identifying PA appeal process and policy improvements. FEMA later issued a regional SOP that included procedures to help regional offices reduce the number of appeals that exceeded statutory time frames. These procedures reflected an ongoing effort to leverage internal resources when regional offices exceed processing capacity. Specifically, a regional office can submit a request to PAAB for assistance from analyst staff from other regions or from PAAB to assist with processing first-level appeals. PAAB may then temporarily assign an appeals analyst from PAAB or from another regional office to assist the regional office making the request. For example, one regional office official told us his office had requested assistance with 10 first-level appeals and PAAB was able to accommodate the request by assigning 8 of the 10 appeals to another region for processing. According to a senior PAAB official, this procedure allows FEMA to maximize use of its national appeal processing capacity. As of October 2017, PAAB had transferred 77 appeals from overwhelmed regional offices to those with capacity to process additional appeals. Further, FEMA procedures now require that a conflict check be performed to determine whether the analyst was involved with a PA project determination that is substantively related to the appeal. If a conflict is identified, options include disqualifying the appeals analyst from working on the appeal, or requesting the appeal be transferred to another regional office or PAAB for processing. 
State emergency management officials from five of the six states in our review told us that they believed that issues related to the independence of appeals staff had been addressed and were no longer an issue. PAAB also took steps to improve communication with applicants by creating an online second-level appeal tracking spreadsheet—accessible through the Internet—intended to provide applicants with information on the status of second-level appeals. The spreadsheet includes, among other things, the date the appeal was received by FEMA headquarters, the date that an RFI was sent to the applicant, whether the appeal was “under review,” whether a final decision had been granted, and the date any final decision was signed. FEMA also took steps to increase its staffing levels. In January 2015, FEMA’s Recovery Directorate completed a workforce analysis and determined that additional appeals analysts were needed to address capacity issues that were resulting in growing inventories of first-level appeals. At the time, FEMA concluded that, in addition to its 23 on-board appeals analysts, an additional 29 appeals analysts were needed to support the existing, as well as anticipated, appeal inventory increases across FEMA’s 10 regional offices. The Recovery Directorate requested and was subsequently authorized the additional appeals analyst positions, which, when filled, would provide the PA appeal program with a total of 52 first-level appeals analysts. With the exception of Region I, FEMA planned to provide each of the remaining 9 regional offices with at least 1 additional appeals analyst. Regional offices with the heaviest workloads, such as Region II and Region IV, would be allocated more appeals analysts. FEMA took steps to fill these positions over the next 2 years, and by June 2017, FEMA had filled 47 of the 52 positions. 
Despite efforts to improve its management of the appeals process, FEMA faces a backlog of both first- and second-level appeals among the three selected FEMA regional offices as well as PAAB. According to officials in PAAB and the three regional offices in our review, workforce challenges contribute to delays in processing PA appeals, even with the improvements described above. PAAB and the three regional offices in our review identified the following workforce challenges that contributed to PA appeal processing delays. Staff vacancies, inexperience, and turnover: Despite FEMA’s efforts to increase its appeals analyst staffing level—an effort that began in 2015—two of the three regional offices in our review had a number of vacancies for these positions through June 2017. PAAB and regional officials told us that such vacancies, which occurred over a prolonged period, contributed to appeal processing delays. FEMA data on appeals analyst staffing show that FEMA took nearly 2 years to fill the additional appeals analyst positions across its 10 regional offices. For example, in 1 of the regional offices in our review, 3 of the 8 appeals analyst positions were vacant through 2016 and were not filled until July 2017. Further, officials in this regional office told us that the current staffing level of 8 appeals analysts was inadequate to keep pace with the region’s increasing appeal inventory. Similarly, 6 of PAAB’s 11 appeals analyst positions were vacant from August 2015 to October 2016. By July 2017, PAAB had filled all but 2 appeals analyst positions. PAAB officials told us the appeals analyst staffing level consisting of 52 positions was a preliminary estimate and that this staffing level has not been adequate in regions with heavy workloads and appeal inventories. 
PAAB officials also acknowledged the potential benefits of having an appeals analyst staffing plan, but stated that they are not yet prepared to update the workforce assessment for PAAB and the regional offices, nor do they have plans to do so until full staffing is achieved. These officials also told us that they are still working to achieve the staffing levels developed in 2015 and are taking steps to address staffing challenges through more targeted hiring and use of career ladder positions. Further, PAAB staffing data showed that almost half of PAAB’s staff had less than 1 year of experience. PAAB officials told us that prior vacancies and a large number of inexperienced staff have contributed to processing delays and second-level appeal backlogs. PAAB officials also told us that retaining trained appeals analysts has been challenging due to limited career advancement opportunities within the appeals analyst position. These officials told us that although not required, individuals who typically apply for an appeals analyst position possess a law degree, and that once hired, some of them apply for attorney positions within PAAB or in various offices within FEMA or DHS. For example, PAAB staffing data showed that within 18 months of being hired by PAAB, four PAAB appeals analysts applied for and were subsequently hired as attorney-advisors within PAAB or other FEMA departments. Then those appeals analyst positions were vacant until the next round of hiring. Regional officials told us it has been challenging to find qualified applicants with the specialized skillset of an analyst position. They told us that, ideally, an appeals analyst should be an expert in the PA program and possess a nuanced understanding of the legal issues associated with the program’s requirements. 
Regional officials told us that, because of this specialized skillset, they look to recruit PA appeals analysts from other FEMA regional offices who may have an interest in relocating or are seeking a promotion. However, while recruiting appeals analysts from other regions may assist individual offices, it does not address FEMA’s goal of achieving its staffing levels. Delays in training appeals staff: FEMA requires that PA appeals analysts undergo a certification course that includes 3 days of training on processing appeals. The appeals analyst certification course, delivered through PAAB, covers both procedural steps of processing appeals as well as the policy and legal issues raised by the PA program, and ensures that trainees can prepare a well-written appeal response. After completing the course, an analyst in training must pass a test to demonstrate proficiency in reviewing and analyzing appeals and preparing appeal decisions. To this end, the analyst must analyze a mock appeal—based on facts similar to those presented in a previously decided appeal—and draft an appeal decision. FEMA policy states that only certified staff can serve as appeals analysts and that analysts must be recertified every 2 years. However, some appeals analysts in the regional offices in our review had not yet undergone the certification process, but were nonetheless working in an appeals analyst capacity under the supervision of certified analysts. PAAB procedures also state that a trainee analyst cannot assume work on an appeal without being supervised by a certified analyst. For example, in one regional office, four of the office’s nine appeals analysts had been working in their positions for between 6 months and a year before they received appeals analyst certification training.
According to regional officials, this increased the supervisory workload on the remaining five appeals analysts within the region. More broadly, the lack of timely training and certification of appeals analysts affects the efficient processing of appeals and can lead to delays in FEMA issuing appeal decisions. Deployment of appeals staff to disaster response: According to PAAB officials, while PA appeals analysts are considered “dedicated” positions, these analysts can be deployed at any time to provide assistance on a disaster, such as working with grant applicants to document damages or assisting applicants in developing project proposals to request PA grants. Officials from two of the three FEMA regional offices in our review told us that these deployments contributed to processing delays because, given limited resources, assigning staff to continue work on the appeal is not always possible. In one regional office, five of the nine PA appeals analysts were deployed in late 2016 to do recovery work related to damage from Hurricane Matthew. These deployments lasted approximately 30 to 90 days and left the regional office understaffed. Further, one regional office official told us that maintaining continuity in processing an appeal can be difficult for those analysts who are deployed because they must pick up where they left off on their assigned appeals upon their return. A senior PAAB official told us that regional appeals analyst staff have been deployed to assist with response and recovery efforts as a result of the catastrophic damage from Hurricanes Harvey, Irma, and Maria. As a result, these analysts have not been available to process first-level appeals. This official further told us that PAAB staff, including analyst staff—while not deployed—have been assigned to support disaster operations. For example, one staff member was assigned to support site inspector training, while two others were assigned to stand National Response Coordination Center watch.
Further, one staff member was assigned to support training and contract review functions and the remaining staff members were assigned as call takers for the PA Grants Manager and Grants Portal hotline. To help overcome staffing shortages, according to FEMA documents, all three regional offices in our review requested assistance from PAAB at various times during the past 2 years. However, officials from two of the three regional offices in our review told us that, based on their experiences, requesting staff from PAAB or other offices had a number of limitations. Specifically, because the originating regional office is ultimately responsible for the appeal, its staff must continue to oversee the appeal, including such responsibilities as tracking the appeal, corresponding with the applicant and the state as needed, and reviewing and approving the appeal decision. One regional office official told us that this arrangement was not helpful and only added an additional layer of complexity that delayed processing. Another regional official told us that the quality of the borrowed staff’s work was not consistent. This official further stated that, because offices are not able to select the analysts that would be assigned to work on their appeals, he was reluctant to use staff from other regional offices. According to leading human capital practices, the key to an agency’s success in managing its programs is sustaining a workforce with the necessary knowledge, skills, and abilities to execute a range of management functions that support the agency’s mission and goals. Achieving such a workforce depends on having effective human capital management through developing human capital strategies. Such strategic workforce planning includes the agency assessing current and future critical skill needs by, for example, analyzing the gaps between current skills and future needs, and developing strategies for filling the gaps identified in workforce skills or competencies.
Standards for Internal Control in the Federal Government also states that agencies should continually assess their needs so that they are able to obtain a workforce that has the required knowledge, skills, and abilities to achieve their organization’s goals. Further, as we have previously reported in our work on strategic workforce planning, such staffing assessments should be based on valid and reliable data. However, FEMA has not developed a workforce staffing plan to identify hiring, training, and retention needs of appeals staff across PAAB and the regional offices. PAAB officials told us that they are still working to achieve the staffing levels developed in 2015 and are taking steps to address staffing challenges related to retention through more targeted hiring and use of career ladder positions. In the absence of a workforce plan for the PA appeals staff, FEMA will likely continue to experience workforce challenges, including vacancies in key appeals analyst positions, appeals staff turnover, training delays, and understaffing due to disaster deployment. These challenges will likely continue to contribute to delays in FEMA’s processing of first- and second-level PA appeals and issuance of appeal decisions. FEMA Established Goals and Measures to Assess Second-Level Appeal Processing, but Did Not Do So for First-Level Appeals FEMA officials have acknowledged the importance of establishing goals and measures to assess the performance of the PA appeals program. In particular, for fiscal year 2016, FEMA’s Recovery Directorate established two performance goals for PAAB’s processing of second-level appeals. The first goal was aimed at reducing the inventory of second-level appeals by 20 percent. The second goal was aimed at processing at least 30 percent of second-level appeals received in 2016 within 90 days of receiving the appeal, in order to comply with the statutory time frames.
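The two fiscal year 2016 goals are both directly measurable, which is what makes them useful as performance measures. A minimal sketch of how progress against them could be checked is below; the figures and function names are invented for illustration and do not reflect FEMA's actual reporting.

```python
# Hypothetical checks against the two fiscal year 2016 second-level appeal
# goals described above: a 20 percent inventory reduction, and processing at
# least 30 percent of newly received appeals within the 90-day time frame.
def inventory_goal_met(start_inventory, end_inventory, target_reduction=0.20):
    """True if inventory fell by at least the target fraction over the period."""
    return (start_inventory - end_inventory) / start_inventory >= target_reduction

def timeliness_goal_met(processing_days, target_share=0.30, statutory_days=90):
    """True if at least target_share of appeals were decided within the window."""
    on_time = sum(1 for days in processing_days if days <= statutory_days)
    return on_time / len(processing_days) >= target_share

print(inventory_goal_met(100, 85))                   # 15% reduction → False
print(timeliness_goal_met([60, 120, 300, 45, 200]))  # 2 of 5 on time → True
```

Defining first-level goals in the same measurable terms across all 10 regional offices would allow the kind of consistent performance reporting the report finds lacking.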
FEMA internal documents showed that these two performance goals were intended to reduce the second-level appeal inventory and, at the same time, promote a standard of timely second-level appeal processing for PAAB. According to PAAB officials, various factors beyond PAAB’s control prevented PAAB from meeting these performance goals. These factors included an unanticipated surge in the number of second-level appeals in 2016, as well as increased vacancies due to staff turnover in PAAB analyst positions in 2016. Recognizing these factors, PAAB developed a revised goal that focused on the number of appeals an analyst could process per month. According to PAAB officials, focusing the revised goal on analyst production controlled for external factors that tended to affect overall processing, such as surges in appeal submissions and staff turnover. PAAB officials told us that their proposed production goal was not accepted by the Recovery Directorate for 2016, but that PAAB adopted the revised goal for individual performance plans for PAAB appeals analyst staff. In contrast, although first-level appeals represent the majority of FEMA’s appeal inventory, FEMA has not developed goals and measures to assess the performance of first-level appeals processing across regional offices. PAAB collects various data from all 10 regional offices on first-level appeals, such as the number of first-level appeals being processed, as well as processing timeliness (i.e., appeals that exceeded time limits) and key words that can help identify various appeal subject-matter categories. PAAB then aggregates these data, which it publishes on a quarterly and weekly basis in internal reports that it shares with FEMA management. However, FEMA has not established goals to assess performance against the information that PAAB collects.
According to FEMA officials, while the Recovery Directorate established goals and measures for second-level appeals, it is not responsible for developing goals and measures to assess performance within the regional offices. These officials told us further that some Regional Administrators have established goals and measures for first-level appeals within their regional offices, while others have not. Standards for Internal Control in the Federal Government states that, to effectively monitor a program, management should establish goals and measures to determine whether the program is being implemented as intended. In addition, the quality of the program’s performance should be assessed over time, and monitoring efforts should be evaluated to ensure they help meet goals. Further, Congress enacted the GPRA Modernization Act of 2010 (GPRAMA) to focus and sustain attention on agency performance and improvement by requiring that federal agencies establish outcome-oriented goals and measures to assess progress toward those goals. Specifically, agencies, like DHS, are required to monitor progress toward the achievement of goals, report on that progress, and address issues identified. Without consistent performance measures across FEMA regional offices to help assess progress and identify deficiencies in appeals processing, DHS and its subcomponent agencies like FEMA may have difficulty providing accurate reporting on the effectiveness of current efforts to process first-level appeals and on the factors that contribute to ongoing appeal processing delays. Conclusions Although FEMA has made efforts to improve its management of the PA appeals process, these efforts have been hampered by a number of issues, including weaknesses in FEMA’s appeals tracking data and its ability to ensure the quality of this data.
FEMA corrected its appeals data for purposes of this report once we pointed out data discrepancies, but FEMA does not have a process to ensure data quality issues are permanently addressed. As a result, these weaknesses will persist. By implementing procedures to consistently track appeals data and ensure the quality of these data, FEMA will be in a better position to accurately report on appeal processing performance and make informed decisions about the appeals process. FEMA also faces a variety of workforce challenges that have contributed to appeals processing delays. These challenges include staffing vacancies, lack of experienced staff, high rates of staff turnover, delays in training appeals staff, and the deployment of appeals analysts for disaster response, all of which have contributed to processing delays. Addressing these challenges by identifying the hiring, training, and retention needs of its appeals offices through strategic workforce planning could help FEMA better position itself to reduce its appeals backlog and better respond to PA appeals. Further, although FEMA has established goals and measures for its second-level appeals processing, it has not done so for first-level appeals. By establishing goals and measures to assess the performance of its first-level appeals process, DHS and FEMA will be able to better evaluate the efficiency and effectiveness of its efforts to reduce the PA appeal backlog and improve appeal processing times. Recommendations for Executive Action We are making the following four recommendations to FEMA: The Assistant Administrator for Recovery should design and implement the necessary processes and procedures to ensure a uniform and consistent approach for tracking first-level appeals data to better integrate regional trackers with PAAB’s own first-level appeals tracker. 
(Recommendation 1) The Assistant Administrator for Recovery should design and implement the necessary controls to ensure the quality of the first-level appeals data collected at and reported from the regional offices to PAAB. (Recommendation 2) The Assistant Administrator for Recovery should develop a detailed workforce plan that documents steps for hiring, training, and retaining key appeals staff. The plan should also address staff transitions resulting from deployments to disasters. (Recommendation 3) The Assistant Administrator for Recovery should work with Regional Administrators in all 10 regional offices to establish and use goals and measures for processing first-level PA appeals to monitor performance and report on progress. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of the Department of Homeland Security and the Administrator of the Federal Emergency Management Agency for review and comment. DHS provided written comments, which are reproduced in appendix II. In its comments, DHS concurred with our recommendations and described actions planned to address them. FEMA also provided technical comments, which we incorporated as appropriate. Additionally, we provided excerpts of the draft report to state emergency management officials in the six states we selected for our review. We incorporated their technical comments as appropriate. Regarding our first recommendation, that FEMA design and implement the necessary processes and procedures to ensure a uniform and consistent approach for tracking first-level appeal data, DHS stated that FEMA’s PAAB will develop guides and checklists for the regions to ensure data uniformity and consistency and that PAAB will update its data review process and develop additional content highlighting the importance of data integrity and accuracy. DHS estimated that this effort would be completed by July 31, 2018.
Regarding our second recommendation, that FEMA design and implement the necessary controls to ensure first-level appeal data quality, DHS stated that PAAB will include content within the certified appeal analyst training highlighting the importance of data integrity and that first-level appeal data will be reviewed by PAAB on a quarterly basis. DHS estimated that this effort would be completed by February 28, 2019.

Regarding our third recommendation, that FEMA develop a detailed workforce plan for hiring, training, and retaining key appeals staff, DHS stated that by December 31, 2018, PAAB will produce a workload flow assessment on second-level appeals staffing and determine whether appeal timeliness issues still exist. If PAAB determines that significant response timeliness issues on second-level appeals still exist after most PAAB appeal analyst staff have at least one year of experience, a detailed PAAB workforce plan will be completed and finalized by December 31, 2019. PAAB will also complete an assessment of first-level appeal inventory and timeliness issues. If PAAB determines that significant regional response inventory and timeliness issues on first-level appeals still exist, FEMA will create a working group to prepare a detailed regional workforce plan. DHS estimated that this effort would be completed by December 31, 2019.

Regarding our fourth recommendation, that FEMA work with Regional Administrators to establish and use performance goals and measures for processing first-level appeals, DHS stated that PAAB has begun developing a methodology for establishing, measuring, and reporting on first-level appeals processing goals and performance progress, and that PAAB would work with the regions to complete and finalize this methodology. DHS estimated that this effort would be completed by August 31, 2018.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and interested congressional committees. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report reviews aspects of the Federal Emergency Management Agency's (FEMA) management of the Public Assistance (PA) appeals process. The objectives of this review were to determine: (1) the extent to which FEMA ensures quality in its data on appeals and what FEMA data show about its appeals inventory and timeliness for appeals decisions; (2) what steps FEMA has taken to improve its management of the appeals process and what challenges, if any, remain; and (3) the extent to which FEMA has developed goals and measures to assess the appeal program's performance.

To address the first objective, we obtained and analyzed data from FEMA on all first- and second-level appeals that the agency received between January 2014 and July 2017. For first-level appeals, FEMA provided us data on appeals received between January 1, 2014, and July 12, 2017, while for second-level appeals, FEMA provided us data on appeals received between January 1, 2014, and July 6, 2017. We focused on this time frame because it contained the most complete and available data on each type of appeal at the time of our review. We identified various discrepancies in the first-level appeals data, which we discussed with knowledgeable FEMA staff.
Examples of these discrepancies, which we present in this report, included missing data, erroneous data entries, and inconsistent recording of data. In response to our discussions, FEMA provided us with corrected data to address the identified discrepancies. After obtaining the corrected data, we concluded the appeals data from FEMA were sufficiently reliable to provide the information on PA appeals that we present in this report. We also obtained and analyzed FEMA policies and procedures related to tracking appeals data, such as FEMA's policies and procedures related to regional offices, and evaluated them using Standards for Internal Control in the Federal Government.

We analyzed the corrected data to determine FEMA's appeal inventory—that is, the number of first- and second-level appeals that were pending and decided, including any amounts in dispute or amounts awarded, and appeal outcomes for appeals that FEMA decided. From the total number of appeals received, we excluded four second-level appeals that had been remanded or rescinded. We determined the processing times for first- and second-level decided appeals by calculating, for each appeal, the number of calendar days between the date that FEMA received the appeal and the date that FEMA rendered a decision on the appeal. We then calculated the average number of calendar days to determine average processing times for first- and second-level decided appeals. We determined the processing time for pending first-level appeals by calculating, for each appeal, the number of calendar days between the date FEMA received the appeal and July 12, 2017. Similarly, we determined the processing time for pending second-level appeals by calculating, for each appeal, the number of calendar days between the date FEMA received the appeal and July 6, 2017. We then calculated the average number of calendar days to determine average processing times for pending first- and second-level appeals.
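The processing-time arithmetic described above can be sketched in a few lines of code. This is an illustrative sketch only: the records, dates, and function names below are hypothetical and are not drawn from FEMA's actual data; the cutoff date mirrors the July 12, 2017, end of the first-level data period described above.

```python
from datetime import date

# Hypothetical appeal records: (date FEMA received the appeal,
# date FEMA decided the appeal, or None if the appeal is still pending)
appeals = [
    (date(2015, 3, 1), date(2015, 9, 15)),   # decided
    (date(2016, 6, 10), None),               # pending
    (date(2017, 1, 5), date(2017, 3, 20)),   # decided
]

CUTOFF = date(2017, 7, 12)  # end of the data period for pending first-level appeals

def processing_days(received, decided):
    """Calendar days from receipt to decision, or to the cutoff if pending."""
    return ((decided or CUTOFF) - received).days

decided_times = [processing_days(r, d) for r, d in appeals if d is not None]
pending_times = [processing_days(r, d) for r, d in appeals if d is None]

avg_decided = sum(decided_times) / len(decided_times)
avg_pending = sum(pending_times) / len(pending_times)
print(avg_decided, avg_pending)
```

For the hypothetical records shown, the first decided appeal took 198 calendar days and the second took 74, for an average of 136 days; the pending appeal had been open 397 days as of the cutoff.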
We compared processing times for first- and second-level appeals against FEMA's 90-day statutory time frame to determine the number of calendar days by which FEMA exceeded the time frame. We also determined the number of first- and second-level appeals in which FEMA issued an RFI and those in which FEMA did not issue an RFI. For the first- and second-level appeals in which FEMA issued an RFI, we compared the date the appeal was received to the date that FEMA issued the RFI, using the first RFI in cases where FEMA issued multiple RFIs. We then determined whether FEMA had issued the RFI within 90 calendar days. For the first- and second-level appeals in which FEMA did not issue an RFI, we compared the date the appeal was received to the date that FEMA issued a decision. We then determined whether FEMA had issued a decision after 90 calendar days. We also obtained and analyzed FEMA policies and procedures and program directives governing appeal data collection and evaluated them against Standards for Internal Control in the Federal Government.

To address the first and second objectives, we also conducted semistructured interviews with officials from the 3 of FEMA's 10 regional offices (Regions II, IV, and VI) with the highest numbers of first- and second-level pending appeals. We asked these officials about their efforts to process and track appeals, what improvements had been made in how PA appeals are processed, and what challenges they believed remained in processing PA appeals since 2013. To select these offices, we obtained data from FEMA on first- and second-level appeals that were pending a decision as of October 31, 2016. Collectively, these appeals represented 69 percent of all pending first- and second-level appeals FEMA had received as of October 31, 2016. We focused on this time frame because it contained the most recent data for selecting FEMA regional offices at the time of our review.
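The 90-day timeliness comparison described earlier in this appendix (comparing the receipt date to the first RFI date or, if no RFI was issued, to the decision date) can be sketched as follows. The function name and the example dates are hypothetical, introduced here only to illustrate the comparison logic.

```python
from datetime import date, timedelta

STATUTORY_LIMIT = timedelta(days=90)  # 90-calendar-day statutory time frame

def met_statutory_timeframe(received, first_rfi=None, decided=None):
    """If an RFI was issued, compare the first RFI date to the receipt date;
    otherwise compare the decision date. Returns True when the relevant
    action occurred within 90 calendar days of receipt."""
    action = first_rfi if first_rfi is not None else decided
    if action is None:
        raise ValueError("need either an RFI date or a decision date")
    return (action - received) <= STATUTORY_LIMIT

# Hypothetical examples:
print(met_statutory_timeframe(date(2016, 1, 4), first_rfi=date(2016, 3, 1)))  # RFI within 90 days
print(met_statutory_timeframe(date(2016, 1, 4), decided=date(2016, 8, 1)))    # decision after 90 days
```

In the first example the RFI came 57 days after receipt, within the limit; in the second, the decision came about seven months after receipt, exceeding it.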
To obtain additional perspective on what, if any, challenges remain in FEMA's management of the appeals process, we also interviewed state emergency management officials in six states (two states in each of the three corresponding FEMA regional offices). (See table 2.) The information obtained from the FEMA regional offices and the state emergency management offices cannot be generalized nationwide. However, the information obtained from these officials provides insight into the issues FEMA encountered during the appeal process.

To further address the second objective, we reviewed our past report and Department of Homeland Security Office of Inspector General reports on the PA appeals program. We also reviewed FEMA documentation, such as policy directives, internal staffing requests, appeals analyst position descriptions, and other internal memoranda. We used these sources to identify what steps FEMA had taken to improve its management of the appeals process since 2013. We also used this information to supplement our understanding of the challenges the Public Assistance Appeals Board (PAAB) and regional officials raised during the interviews discussed above.

To address the third objective, we analyzed a series of FEMA internal performance reports issued between November 29, 2013, and February 15, 2017. Developed by PAAB and provided to FEMA management on a quarterly basis, these reports included aggregate information on the PA appeals inventory, such as the number of first- and second-level pending appeals, the number of appeals processed within statutory time frames, the number of pending appeals that are beyond the statutory time frame, and common appeal issues based on keywords entered by analysts responsible for processing appeals. We also analyzed internal documents, such as briefs and newsletters, which provided detail on specific appeal decisions as well as the status of the appeals inventory.
Further, we analyzed FEMA's Strategic Plans for fiscal years 2008 to 2013 and fiscal years 2014 to 2018 to identify objectives, measures, and overall agency-wide goals. We assessed the information in these documents against leading practices in measuring agency performance and against federal standards for internal control. For all three objectives, we reviewed relevant legislation and FEMA standard operating procedures that govern both FEMA headquarters and regional offices. We also interviewed officials in PAAB and FEMA's Recovery Directorate.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Brenda Rabinowitz (Assistant Director), Anthony Bova (Analyst-in-Charge), Joseph Fread, and Sherrice Kerns made key contributions to this report. Jehan Chase, Chris Currie, Robert Gebhart, Chris Keisling, Donna Miller, Kathleen Padulchick, Amanda Parker, Erik Shive, and Walter Vance also provided assistance.
Why GAO Did This Study

In both 2016 and 2017, 15 separate U.S. disasters resulted in losses exceeding $1 billion each. FEMA provides PA grants to state and local governments to help communities recover from such disasters. If applicants disagree with FEMA's decision on their PA grant application, they have two chances to appeal: a first-level appeal to be decided by the relevant FEMA regional office and, if denied, a second-level appeal to be decided within FEMA's Recovery Directorate. Each is subject to a 90-day statutory processing time frame. GAO was asked to review FEMA's appeals process. This report examines: (1) the extent to which FEMA ensures the quality of its appeals data and what these data show about PA appeals inventory and timeliness; (2) what steps FEMA has taken to improve its management of the appeals process and what challenges, if any, remain; and (3) the extent to which FEMA developed goals and measures to assess program performance. GAO analyzed FEMA policies, procedures, and data on appeals and interviewed officials from headquarters and from regional offices with the highest number of pending appeals. GAO also spoke to state officials from the two states within each of the three regions with the highest number of pending appeals.

What GAO Found

Weaknesses in the quality of the Federal Emergency Management Agency's (FEMA) Public Assistance (PA) appeals data limit its ability to oversee the appeals process. For example, FEMA's data are inaccurate and incomplete because regional offices do not consistently track first-level appeals and FEMA does not have processes to ensure data quality. When GAO discussed these weaknesses with FEMA officials, they acknowledged them and provided GAO with corrected data for January 2014 through July 2017. GAO's analyses of the corrected data show fluctuations in the appeal inventory from year to year depending on the number of disasters declared and delays in processing.
For example, as shown in the figure, only 9 percent of first-level and 11 percent of second-level appeals were processed within the 90-day statutory time frame.

FEMA has taken steps to improve its management of the appeals process, including addressing issues that GAO and the Department of Homeland Security's Office of Inspector General identified in 2008 and 2011. For example, FEMA increased its appeal staffing levels and developed standard operating procedures. Despite these efforts, FEMA continued to face a number of workforce challenges that contributed to processing delays, such as staff vacancies, staff turnover, and delays in training. FEMA has not developed a workforce staffing plan to identify hiring, training, and retention needs across its headquarters and regional offices, though FEMA officials acknowledge the potential benefits of having such a plan and stated that they are focused on filling vacancies. In the absence of a workforce plan, FEMA will continue to experience workforce challenges that could further contribute to delays in processing appeals.

FEMA has not established goals and measures for assessing first-level appeals processing performance, but has done so for second-level appeals. FEMA views establishing these first-level goals and measures as the responsibility of its regional offices. Without goals and measures, FEMA is limited in its ability to assess the efficiency and effectiveness of its overall appeals process and to identify and address weaknesses that may lead to delays in making appeal decisions.

What GAO Recommends

GAO is making four recommendations, including that FEMA implement a consistent approach for tracking appeals and ensuring data quality, develop a workforce plan, and develop measurable goals for processing first-level appeals. FEMA concurred with all four recommendations.
Background

Cybersecurity incidents continue to impact federal entities and the information they maintain. According to OMB's 2018 annual FISMA report to Congress, agencies reported 35,277 information security incidents to DHS's U.S. Computer Emergency Readiness Team (US-CERT) in fiscal year 2017. As shown in figure 1, these incidents involved threat vectors, such as web-based attacks, phishing attacks, and the loss or theft of computer equipment, among others. These incidents and others like them can pose a serious challenge to economic, national, and personal privacy and security. The following examples highlight the impact of such incidents:

- In March 2018, the Department of Justice reported that it had indicted nine Iranians for conducting a massive cybersecurity theft campaign on behalf of the Islamic Revolutionary Guard Corps. According to the department, the Iranians allegedly stole more than 31 terabytes of documents and data from more than 140 American universities, 30 U.S. companies, and 5 federal government agencies, among other entities.
- In March 2018, a joint alert from DHS and the Federal Bureau of Investigation stated that, since at least March 2016, Russian government actors had targeted U.S. government entities and critical infrastructure sectors, including the energy, nuclear, water, aviation, and critical manufacturing sectors.
- In June 2015, the Office of Personnel Management reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate but related incident had compromised its systems and the files related to background investigations for at least 21.5 million individuals.

Federal Law and Policy Prescribe the Federal Approach and Strategy for Securing Information Systems

The federal approach and strategy for securing information systems is prescribed by federal law and policy.
FISMA sets requirements for effectively securing federal systems and information. In addition, the Federal Cybersecurity Enhancement Act of 2015 requires protecting federal networks through the use of federal intrusion prevention and detection capabilities. Further, Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, directs agencies to manage cybersecurity risks to the federal enterprise by, among other things, using the NIST Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity framework).

The Federal Information Security Modernization Act of 2014 Sets Requirements for Securing Federal Systems and Information

FISMA was enacted to improve federal cybersecurity and clarify government-wide responsibilities. The law is intended to provide for improved oversight of federal agencies' information security programs. Specifically, the law provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The law is also intended to ensure the effective oversight of information security risks, including those throughout civilian, national security, and law enforcement agencies. FISMA assigns OMB and DHS oversight roles in ensuring federal agencies' compliance with the law. Among other things, FISMA requires OMB to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. The law also assigns OMB the responsibility of requiring agencies to identify and provide information security protections commensurate with assessments of risk to their information and information systems.
The law further requires DHS to administer the implementation of agency information security policies and practices for non-national security information systems, in consultation with OMB, by developing, issuing, and overseeing implementation of binding operational directives; monitoring agency implementation of information security policies and practices; and convening meetings with senior agency officials to help ensure their effective implementation of information security policies and practices, among other things.

FISMA assigned to NIST the responsibility for developing standards and guidelines that include minimum information security requirements. To this end, NIST has issued several publications to provide guidance for agencies in implementing an information security program. For example, NIST Special Publication (SP) 800-53 provides guidance to agencies on the selection and implementation of information security and privacy controls for systems.

FISMA also assigns to the head of each executive branch agency responsibility for providing information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency. The law also delegates to the agency chief information officer (CIO), or comparable official, the authority to ensure compliance with FISMA requirements. The CIO is responsible for designating a senior agency information security officer whose primary duty is information security. In addition, the law requires agencies to develop, document, and implement an agency-wide information security program to secure federal information systems. Specifically, these information security programs are to provide risk-based protections for the information and information systems that support the operations and assets of the agency.
Further, FISMA requires agencies to comply with DHS binding operational directives, OMB policies and procedures, and NIST federal information processing standards. FISMA also has reporting requirements for OMB and federal agencies. Specifically, OMB is to report annually, in consultation with DHS, on the effectiveness of agency information security policies and practices, including a summary of major agency information security incidents and an assessment of agency compliance with NIST standards. Further, the law requires agencies to report annually to OMB, DHS, certain congressional committees, and the Comptroller General of the United States on the adequacy and effectiveness of their information security policies, procedures, and practices, as well as their compliance with FISMA.

The Federal Cybersecurity Enhancement Act of 2015 Articulates Requirements for Protecting Federal Networks through the Use of Federal Intrusion Prevention and Detection Capabilities

The Federal Cybersecurity Enhancement Act of 2015, among other things, sets forth authority for enhancing federal intrusion prevention and detection capabilities among federal entities. The act contains several provisions for DHS and OMB. Specifically, the act requires that DHS deploy, operate, and maintain capabilities to prevent and detect cybersecurity risks in network traffic traveling to or from an agency's information system. DHS is to make these capabilities available for use by any agency. In addition, the act requires DHS to improve intrusion detection and prevention capabilities, as appropriate, by regularly deploying new technologies and modifying existing technologies. The act also requires OMB and DHS, in consultation with appropriate agencies, to review and update government-wide policies and programs to ensure appropriate prioritization and use of network security monitoring tools within agency networks, and to brief appropriate congressional committees.
The Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure Directs Agencies to Use the Cybersecurity Framework for Managing Risks

In May 2017, the President signed Executive Order 13800, which sets policy for managing cybersecurity risk as an executive branch enterprise. Specifically, it outlines actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation's cyber posture and capabilities against cybersecurity threats. To this end, the order states that the President will hold executive agency heads accountable for managing agency-wide cybersecurity risk and directs each executive agency to use the NIST cybersecurity framework to manage those risks. The cybersecurity framework, which provides guidance for cybersecurity activities, is based on five core security functions:

- Identify: Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities.
- Protect: Develop and implement appropriate safeguards to ensure delivery of critical services.
- Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event.
- Respond: Develop and implement appropriate activities to take action regarding a detected cybersecurity incident.
- Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore capabilities or services that were impaired due to a cybersecurity incident.

According to NIST, these five functions should be performed concurrently and continuously to address cybersecurity risk. In addition, when considered together, they provide a high-level, strategic view of the life cycle of an organization's management of cybersecurity risk. Within the five functions are 23 categories and 108 subcategories that include controls for achieving the intent of each function.
Appendix II provides a description of the cybersecurity framework categories and subcategories of controls.

GAO Has Reported on Challenges Related to Establishing a Comprehensive Cybersecurity Strategy

In February 2013, we reported that the government had issued a variety of strategy-related documents that addressed priorities for enhancing cybersecurity within the federal government, as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector. However, we noted that no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set time frames for their completion. Accordingly, we recommended that the White House Cybersecurity Coordinator in the Executive Office of the President develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy. These characteristics would include, among other things, milestones and performance measures for major activities to address stated priorities; cost and resources needed to accomplish stated priorities; and specific roles and responsibilities of federal organizations related to the strategy's stated priorities.

Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. For example, in September 2018, we reported that recent executive branch initiatives that identify cybersecurity priorities for the federal government provide a good foundation toward establishing a more comprehensive strategy. Nevertheless, we pointed out that additional efforts were needed to address all of the desirable characteristics of a national strategy that we recommended.
Specifically, recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. For example, these strategy documents did not generally include: milestones and performance measures to gauge results; resources needed to carry out the goals and objectives; and clearly defined roles and responsibilities for key agencies, such as DHS, the Department of Defense, and OMB. Ultimately, we determined that a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses. Subsequent to our September 2018 report, the President issued the National Cyber Strategy on September 20, 2018. The strategy builds upon Executive Order 13800 and describes actions that federal agencies and the administration are to take to, among other things, secure federal information systems. For example, the strategy states that the administration is expected to further enable DHS to secure federal department and agency networks, to include ensuring that DHS has appropriate access to agency information systems for cybersecurity purposes and can take and direct action to safeguard systems. In addition, the strategy states that the administration plans to continue with its existing efforts underway to transition agencies to shared services and infrastructure and that DHS is to have appropriate visibility into those services and infrastructure to improve cybersecurity posture. 
DHS Offers Federal Agencies Capabilities Intended to Detect and Prevent Intrusions to Federal Information Systems

DHS's Network Security Deployment (NSD) division manages cybersecurity programs that are intended to improve the cybersecurity posture of the federal government. Among these programs, NCPS provides a capability to detect and prevent potentially malicious network traffic from entering agencies' networks. In addition, the Continuous Diagnostics and Mitigation (CDM) program provides tools to agencies intended to identify and resolve cyber vulnerabilities on an ongoing basis.

DHS's National Cybersecurity Protection System Is Intended to Detect and Prevent Cyber Intrusions

Operated by DHS's US-CERT, NCPS is intended to detect and prevent cyber intrusions into agency networks, analyze network data for trends and anomalous data, and share information with agencies on cyber threats and incidents. Deployed in stages, this system, operationally known as EINSTEIN, has provided increasing capabilities to detect and prevent potential cyberattacks involving the network traffic entering or exiting the networks of participating federal agencies. Table 1 provides an overview of the EINSTEIN deployment stages to date.

In January 2016, we reported that the projected total life-cycle cost of the program was approximately $5.7 billion through fiscal year 2018. In addition, according to the Federal CIO, Congress appropriated $468 million in fiscal year 2017 and $402 million in fiscal year 2018 for NCPS. In that report, we also noted that NCPS was partially, but not fully, meeting most of its stated system objectives. Although the system's intrusion detection capabilities provided the ability to detect known patterns of malicious activity on agency networks, it was limited in its capabilities to identify potential threats using anomaly-based detection.
We also reported that although DHS had developed metrics for measuring the performance of NCPS, the metrics did not gauge the quality, accuracy, or effectiveness of the system's intrusion detection and prevention capabilities. The department had also identified needs for future capabilities, but had not defined requirements for the capability to detect threats entering and exiting cloud service providers. Further, DHS had not considered specific vulnerability information for agency information systems in making risk-based decisions about future intrusion prevention capabilities. Accordingly, we made nine recommendations to DHS to, among other things, enhance the NCPS capabilities for meeting its objectives and better define requirements for future capabilities. DHS agreed with each of our nine recommendations and indicated that it would take steps to address them.

DHS's Continuous Diagnostics and Mitigation Program Provides Agencies with Tools and Services Intended to Secure Agency Systems

DHS's CDM program provides federal agencies with tools and services that have the intended capability to automate network monitoring, correlate and analyze security-related information, and enhance risk-based decision making at agency and government-wide levels. These tools include sensors that perform automated scans or searches for known cyber vulnerabilities, the results of which can feed into a dashboard that, at an agency level, is intended to alert network managers and enable the agency to allocate resources based on the risk. Summary data from each participating agency's dashboard is expected to be transmitted to the Federal Dashboard, where the data can be used to inform decisions about cybersecurity risks across the federal government. There are four phases of CDM implementation:

- Phase 1 involves deploying products to automate hardware and software asset management, configuration settings, and common vulnerability management capabilities. According to the Cybersecurity Strategy and Implementation Plan, DHS purchased phase 1 tools and integration services for all participating agencies in fiscal year 2015. DHS plans to have all phase 1 tools deployed at participating agencies by the end of the second quarter of fiscal year 2019.
- Phase 2 is intended to address privilege management and infrastructure integrity by allowing agencies to monitor users on their networks and to detect whether users are engaging in unauthorized activity. According to the Cybersecurity Strategy and Implementation Plan, DHS was to provide agencies with additional phase 2 capabilities throughout fiscal year 2016, with the full suite of CDM phase 2 capabilities delivered by the end of that fiscal year. However, according to the OMB FISMA Annual Report to Congress for Fiscal Year 2017, the CDM program began deploying phase 2 tools and sensors during fiscal year 2017. DHS plans to have all phase 2 tools deployed at participating agencies by the end of fiscal year 2019.
- Phase 3 includes detection capabilities that are intended to assess agency network activity and identify any anomalies that may indicate a cybersecurity compromise. Full operating capability for phases 1, 2, and 3 is planned to be achieved by the end of fiscal year 2022.
- Phase 4 is intended to provide tools to (1) protect data at rest, in transit, and in use; (2) prevent loss of data; and (3) manage and mitigate data breaches. According to CDM program officials, phase 4 has not been approved and no tools have been selected.

NIST Recommends That Federal Agencies Deploy Intrusion Detection and Prevention Capabilities

An approach for protecting systems against cybersecurity compromise is for federal agencies to build successive layers of defense mechanisms at strategic points in their information technology infrastructures.
This approach, commonly referred to as defense in depth, entails implementing a series of protective mechanisms so that if one mechanism fails to detect and prevent an attack, another will provide a backup defense. By implementing intrusion detection and prevention capabilities as part of a defense-in-depth approach, federal agencies can reduce the risk of a successful cyberattack. NIST has developed guidelines for protecting agency information systems using intrusion detection and prevention capabilities. For example, NIST SP 800-53 recommends that agencies strategically deploy capabilities and perform monitoring of their systems to include observation of events occurring on their network and at the external boundary of their network. In addition, NIST SP 800-94 provides agencies with guidance in designing, implementing, configuring, securing, monitoring, and maintaining such capabilities. As part of their defense-in-depth approach, and as recommended by the NIST guidelines, agencies can deploy the following capabilities, among others, on their networks to detect and prevent an attack: Protecting email from intrusions: According to OMB, email, by way of phishing attacks, remains one of the most common threat vectors across the government. Methods for protecting email include encryption, false email alerts, and anti-spear-phishing training. Monitoring cloud services: Cloud vendors provide services to agencies, including Software as a Service, Platform as a Service, and Infrastructure as a Service. As agencies increasingly rely on cloud services, monitoring traffic to and from their cloud service providers helps to ensure that agencies detect malicious traffic. Using host-based intrusion prevention: Host-based intrusion prevention systems provide defense at an individual system or device level by protecting against malicious activities. Host-based capabilities include memory-based protection and application whitelisting.
Monitoring external and internal traffic: Agencies can monitor external and internal traffic, including encrypted traffic, traffic between workstations and servers on the network, and direct connections to outside entities such as universities. Monitoring traffic helps to ensure that agencies detect malicious activity. Using security information and event management: A security information and event management capability produces real-time alerts and notifications of significant security events. Security alerts and notifications can provide the agency with better situational awareness regarding possible intrusion activity. Selected Agencies Were Not Effectively Implementing the Federal Government's Approach and Strategy to Securing Information Systems According to inspectors general, agency CIOs, and OMB reports on federal information security practices, many agencies were not effectively implementing the federal government's approach and strategy to securing information systems as of fiscal year 2017. Agencies' inspectors general determined that most of the 23 civilian CFO Act agencies did not have effective agency-wide information security programs. They also reported that agencies did not have effective information security controls in place, leading to deficiencies in internal control over financial reporting. In addition, the CIOs reported that, during fiscal years 2016 and 2017, most agencies had not met all targets for the cybersecurity CAP goal for improving cybersecurity performance. Further, based on FISMA metrics reported for fiscal year 2017, OMB determined that 13 of the 23 agencies were managing risks to their enterprise, while the other 10 agencies were at risk of not effectively identifying, protecting, detecting, responding to, and if necessary, recovering from cyber intrusions. Figure 2 summarizes agencies' efforts to implement the government's approach and strategy for securing information systems as of fiscal year 2017.
Appendix III includes a table that provides an additional overview of the effectiveness of each agency’s implementation of the government’s approach and strategy to securing information systems. Inspectors General Determined That Most Selected Agencies Did Not Have Effective Information Security Programs or Controls in Place as of Fiscal Year 2017 Inspectors general determined that more than half of the 23 civilian CFO Act agencies did not have effective agency-wide information security programs as of fiscal year 2017. Further, in agency financial statement audit reports for fiscal year 2017, inspectors general reported that, despite improvements being made in information security practices, most of the civilian CFO Act agencies continued to exhibit deficiencies in information security controls. As a result of these deficiencies, inspectors general reported material weaknesses or significant deficiencies in internal control over financial reporting. Inspectors General Indicate That Few Agencies Had Effective Information Security Programs FISMA requires inspectors general to determine the effectiveness of their respective agencies’ information security programs. To do so, FISMA reporting instructions direct inspectors general to provide a maturity rating for agency information security policies, procedures, and practices related to the five core security functions established in the NIST cybersecurity framework, as well as for the agency-wide information security program. The ratings used to evaluate the effectiveness of agencies’ information security programs are based on a five-level maturity model, as described in table 2. According to this maturity model, Level 4 (managed and measurable) represents an effective level of security. Therefore, if an inspector general rates the agency’s information security program at Level 4 or Level 5, then that agency is considered to have an effective information security program. 
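The effectiveness determination turns on a single threshold: a rating of Level 4 (managed and measurable) or Level 5 counts as effective, and anything lower does not. The following is a minimal sketch of that decision rule, not an official implementation; the function name is ours, and the labels for Levels 1 and 5 are assumed from the standard five-level inspector general maturity model rather than stated in this section.

```python
# Illustrative sketch of the effectiveness rule applied by inspectors
# general: an information security program rated at maturity Level 4
# (managed and measurable) or higher is considered effective.
# Labels for Levels 1 and 5 are assumed; this section names Levels 2-4.
MATURITY_LEVELS = {
    1: "ad hoc",                    # assumed label
    2: "defined",
    3: "consistently implemented",
    4: "managed and measurable",
    5: "optimized",                 # assumed label
}

def is_effective(level: int) -> bool:
    """Return True if a program at this maturity level counts as effective."""
    if level not in MATURITY_LEVELS:
        raise ValueError(f"unknown maturity level: {level}")
    return level >= 4
```

Under this rule, a program rated Level 3 (consistently implemented) for every security function is still not considered effective.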
For fiscal year 2017, the inspectors general for 6 of the 23 civilian CFO Act agencies reported that their agencies had an effective agency-wide information security program. More specifically, for the five core security functions, most inspectors general reported that their agency was at Level 3 (consistently implemented) for the identify, protect, and recover functions, and at Level 2 (defined) for the detect and respond functions, as shown in figure 3. Inspectors general report on the effectiveness of agencies' information security controls as part of the annual audits of the agencies' financial statements. The reports resulting from these audits include a description of information security control deficiencies related to the five major control categories defined by the Federal Information System Controls Audit Manual (FISCAM)—access controls, configuration management, segregation of duties, contingency planning, and security management. The reports also identify the inspectors general's designation of information security as a significant deficiency or material weakness in internal control over financial reporting systems. For fiscal year 2017, inspectors general continued to identify information security control deficiencies in each of the five major control categories across the 23 civilian CFO Act agencies. The number of agencies with deficiencies in the access control and contingency planning information security control categories decreased between fiscal years 2016 and 2017, according to the inspectors general. Nevertheless, the inspectors general reported that agencies continued to exhibit deficiencies in these two control categories. In addition, the number of agencies with deficiencies in the security management and segregation of duties control categories increased from the prior year. The number of agencies reported as having deficiencies in the configuration management control category remained the same.
Figure 4 shows the number of agencies that reported deficiencies in each of the information security control categories for fiscal years 2016 and 2017. Overall, inspectors general for the 23 civilian CFO Act agencies reported progress in agencies' information security practices for fiscal year 2017. Specifically, during that time, 17 inspectors general designated information security as either a significant deficiency (11) or material weakness (6) in internal control over financial reporting systems for their agencies. This is a decrease from the previous fiscal year when 19 inspectors general designated information security as a significant deficiency (12) or material weakness (7). Most Agencies Reported Not Meeting All Targets for the Cybersecurity Cross- Agency Priority Goal in Fiscal Years 2016 and 2017 Reporting instructions contained in the fiscal year 2017 FISMA metrics directed CIOs to assess their agencies' progress toward achieving outcomes that strengthen federal cybersecurity. To do this, CIOs evaluated their agencies' performance in reaching targets for specific FISMA reporting metrics. According to the reporting instructions, certain metrics were selected to represent the administration's cybersecurity CAP goal. These selected metrics allowed CIOs to evaluate their agencies' progress in meeting targets for that goal. The cybersecurity CAP goal for fiscal years 2015 through 2017 was to improve cybersecurity performance by having an ongoing awareness of information security, vulnerabilities, and threats impacting the operating information environment; ensuring that only authorized users have access to resources and information; and implementing technologies and processes that reduce the risk of malware. The cybersecurity CAP goal consisted of three priority areas with a total of nine performance indicators. Each of the nine performance indicators had an expected level of performance, or target, for implementation.
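These targets are all-or-nothing: an agency meets a priority area only if it meets every target in that area, and meets the overall CAP goal only if it meets all nine targets. The following is an illustrative tally only; the area names and per-area indicator counts of four, two, and three reflect the CAP goal discussed in this report, while the function names are our own.

```python
# Illustrative sketch of the fiscal year 2015-2017 cybersecurity CAP goal
# tally: three priority areas containing nine performance indicators in
# total. An agency meets a priority area only if it meets every target in
# that area, and meets the overall goal only if it meets all nine targets.
TARGETS_PER_AREA = {
    "information security continuous monitoring": 4,
    "identity, credential, and access management": 2,
    "anti-phishing and malware defense": 3,
}

def met_priority_area(area, targets_met):
    """True only if the agency met every target in the priority area."""
    return targets_met == TARGETS_PER_AREA[area]

def met_cap_goal(targets_met_by_area):
    """True only if the agency met every target in every priority area."""
    return all(
        met_priority_area(area, targets_met_by_area.get(area, 0))
        for area in TARGETS_PER_AREA
    )
```

For example, an agency that met only two of the three anti-phishing and malware defense targets would not meet that priority area, and therefore would not meet the overall goal.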
Table 3 shows the three priority areas and related performance indicators and targets of the cybersecurity CAP goal for fiscal years 2015 through 2017. According to agency CIO assessments for fiscal year 2017, 6 of the 23 agencies met all 9 targets for the cybersecurity CAP goal. More specifically, 8 agencies met all four targets for the information security continuous monitoring priority area; 16 agencies met the two targets for the identity, credential, and access management priority area; and 17 agencies met all three targets for the anti-phishing and malware defense priority area. In addition, CIOs reported that agencies were making progress in meeting the targets for the nine performance indicators for fiscal years 2016 and 2017, with increases in the number of agencies meeting the targets within each of the three priority areas. However, although the number of agencies that met the targets in individual priority areas showed a net increase, not all agencies maintained their status. For example, the CIO for one agency reported meeting all three targets for the anti-phishing and malware defense priority area in fiscal year 2016, but reported that the agency only met two of the three targets in fiscal year 2017. Figure 5 shows the number of agencies that reported meeting each of the targets within the individual cybersecurity CAP goal priority areas for fiscal years 2016 and 2017. Although the CIOs for only six agencies reported meeting each of the targets associated with all nine performance indicators for the three cybersecurity CAP goal priority areas, the CIOs at an additional eight agencies reported meeting each target for two of the three priority areas.
Specifically, one CIO reported that its agency met each of the targets for the (1) information security continuous monitoring and (2) identity, credential, and access management priority areas; another CIO reported that its agency met each of the targets for the (1) information security continuous monitoring and (2) anti-phishing and malware defense priority areas; and the CIOs at six other agencies met each of the targets for the (1) identity, credential, and access management and (2) anti-phishing and malware defense priority areas. In fiscal year 2018, the President’s Management Agenda replaced the three cybersecurity-focused CAP goal priority areas with updated performance indicators, most of which are to be met by 2020: 1. the manage asset security priority area is similar to the information security continuous monitoring priority area from the previous CAP goal and has a focus on understanding the assets and users on agency networks. In addition to hardware asset and software asset management, this priority area includes performance indicators for authorization and mobile device management. 2. the limit personnel access priority area focuses on issues of access management. This area includes performance indicators for using automated access management and managing access for privileged network and high-impact system users. The privileged network access management performance indicator is a continuation of the identity, credential, and access management priority area of the previous cybersecurity CAP goal. Therefore, agencies are expected to complete this metric by the end of the fiscal year 2018 FISMA reporting year. 3. the protect networks and data priority area, which is similar to the anti-phishing and malware defense priority area from the previous cybersecurity CAP goal, has three new performance indicators: intrusion detection and prevention, exfiltration and enhanced defenses, and data protection. 
Appendix IV describes the updated cybersecurity-focused CAP priority areas and performance indicators in more detail. OMB Determined That 13 of the 23 Civilian CFO Act Agencies Were Managing Cybersecurity Risk In Executive Order 13800, the President directed OMB, in coordination with DHS, to assess and report to the executive branch on the sufficiency and appropriateness of federal agencies’ processes for managing cybersecurity risks. For these risk management assessments, OMB leveraged the FISMA metrics reported by agency CIOs and inspectors general for fiscal year 2017. The metrics addressed domains that correspond with the five core security functions identified in the cybersecurity framework. Table 4 lists these domains and their relationship to the core functions. Based on OMB’s evaluation of these domains, agency risk management processes related to the five core security functions and overall agency enterprise fell into one of the following three rating categories: managing risk: required cybersecurity policies, procedures, and tools are in use and the agency actively manages cybersecurity risks; at risk: some essential policies, processes, and tools are in place to mitigate overall cybersecurity risk, but significant gaps remain; and high risk: key fundamental cybersecurity policies, processes, and tools are either not in place or not deployed sufficiently. For fiscal year 2017, OMB reported that not all agencies were managing risk. When considering each of the five core security functions, OMB reported that most of the 23 agencies were at risk or at high risk with regard to the identify and protect core security functions. Less than half of the 23 agencies were at risk with regard to the detect, respond, and recover core security functions. 
Overall, OMB determined that 13 agencies were managing risk and that the remaining 10 agencies were at risk of not effectively identifying, protecting, detecting, responding to, and if necessary, recovering from cyber intrusions. Figure 6 shows OMB's risk management assessment ratings by core security function across the 23 agencies for fiscal year 2017. DHS and OMB Facilitated the Use of Intrusion Detection and Prevention Capabilities to Secure Federal Agency Systems, but Further Efforts Remain DHS and OMB, as required by law and policy, have taken various actions to facilitate the agencies' use of intrusion detection and prevention capabilities to secure federal systems. For example, DHS has developed an intrusion assessment plan, deployed NCPS to offer intrusion detection and prevention capabilities to agencies, and is providing tools and services to agencies to monitor their networks through its CDM program. However, NCPS still had limitations in detecting certain types of traffic, and agencies were not sending all appropriate traffic through the system. Further, CDM was behind schedule in meeting planned implementation dates, and agencies have requested additional training and guidance for these services. OMB has taken steps to improve upon agencies' capabilities, but has not completed a policy and strategy to do so, or fully reported on its assessment of agencies' capabilities. DHS Has Taken Actions to Facilitate the Use of Intrusion Detection and Prevention Capabilities and to Make Improvements to Those Capabilities The Federal Cybersecurity Enhancement Act of 2015 requires DHS, in coordination with OMB, to develop and implement an intrusion assessment plan to proactively detect, identify, and remove intruders in agency information systems on a routine basis. The act also requires that the plan be updated, as necessary. In December 2016, DHS documented its Intrusion Assessment Plan.
In the plan, DHS outlined tools, platforms, resources, and ongoing work that the department provides, and that are intended to help agencies detect, identify, and remove intruders on their networks and systems. The intrusion assessment plan also outlines a defense-in-depth strategy, which utilizes multiple layers of cybersecurity and deploys multiple capabilities in combination, to secure agencies’ networks and information systems. For example, the plan calls for DHS to implement NCPS to provide a perimeter defense for the networks of federal civilian executive branch agencies, while the agencies are to deploy their own intrusion detection and prevention capabilities inside their networks. DHS submitted its intrusion assessment plan to OMB in January 2017. DHS Has Worked to Improve NCPS, but Agencies Did Not Route All Traffic through Intrusion Detection and Prevention Capabilities Offered by this System The Federal Cybersecurity Enhancement Act of 2015 also requires DHS to deploy, operate, and maintain a capability to detect cybersecurity risks and prevent network traffic associated with such risks from transiting to or from an agency information system. In addition, the act requires that DHS make regular improvements to intrusion detection and prevention capabilities by deploying new technologies and modifying existing technologies. Further, the act requires agencies to use this capability on all information traveling between their information systems and any information system other than an agency information system. DHS developed NCPS, operationally known as EINSTEIN, to provide the capabilities to detect and prevent potentially malicious network traffic from entering agency networks. Consistent with recommendations we made to DHS in January 2016, DHS has taken actions to improve these capabilities and has other actions underway. 
For example, the department determined that enhancing NCPS's current intrusion detection approach to include functionality that would detect deviations from normal network behavior baselines would be feasible. In addition, according to DHS officials, the department was operationalizing functionality intended to identify malicious activity in network traffic otherwise missed by signature-based methods. The department also determined that developing enhancements to current intrusion detection capabilities to facilitate the scanning of Internet Protocol Version 6 (IPv6) traffic would be feasible. According to DHS officials, the department has developed plans to fully support IPv6 for several of its NCPS intrusion detection capabilities. Further, the department has developed implementation schedules and begun roll-out of the enhancements. In addition, the department updated the tool it uses to manage and deploy intrusion detection signatures to include a mechanism to clearly link signatures to publicly available, open-source information. It also developed clearly defined requirements for detecting threats on agency internal networks and at cloud service providers to help better ensure effective support of information security activities. According to DHS officials, the department was also continuing pilot activities with cloud service providers to enhance protections of agency assets. Finally, the department developed processes and procedures for using vulnerability information, such as data from the CDM program as it becomes available, to help ensure that it is using a risk-based approach for the selection and development of future NCPS intrusion prevention capabilities. Nevertheless, NCPS continues to have known limitations in its ability to identify potential threats. For example, NCPS does not have the ability to effectively detect intrusions across multiple types of traffic.
Specifically, DHS determined that developing enhancements to current intrusion detection capabilities to facilitate the scanning of traffic related to supervisory control and data acquisition (SCADA) control systems would not be feasible. However, according to DHS officials, the department is exploring capabilities that are intended to provide critical, cross-sector, real-time visibility into critical infrastructure companies that utilize SCADA systems. In addition, DHS determined that the scanning of encrypted traffic would not be feasible. Nevertheless, according to its officials, the department performed research on potential architectural, technical, and policy mitigation strategies that could provide both protection and situational awareness for encrypted traffic. The department has actions under way to continue its research in this area. In addition, DHS does not always explicitly ask agencies for feedback or confirmation of receipt of NCPS-related notifications. While the department had drafted a standard operating procedure related to its incident notification process, the policy did not instruct DHS analysts specifically to include a solicitation of feedback from agencies within the notification. Further, US-CERT could not provide any information regarding the timetable for when these procedures would take effect. Moreover, the metrics for NCPS, as provided by DHS, do not provide information about how well the system is enhancing government information security or the quality, efficiency, and accuracy of supporting actions. Without the deployment of comprehensive measures, DHS cannot appropriately articulate the value provided by NCPS. While the department had taken actions to develop new measures, these measures did not provide a qualitative or quantitative assessment of the system's ability to fulfill its objectives. Finally, NSD did not provide guidance to agencies on how to securely route their information to their Internet service providers.
Without providing network routing guidance, NSD has no assurance whether the traffic it sees constitutes all of the traffic that customer agencies intend to send or only a subset of it. As shown in table 5, as of October 2018, the department had implemented five of the nine recommendations and was in the process of implementing the remainder. However, until DHS completes implementation of the remaining recommendations, the effectiveness of NCPS's intrusion detection and prevention capabilities may be hindered. In addition, the 23 civilian CFO Act agencies had implemented NCPS capabilities to varying degrees. In a March 2018 report, OMB reported that 21 (about 91 percent) of the 23 agencies had implemented the first two iterations of the NCPS capabilities. In addition, 15 (about 65 percent) of the 23 agencies had implemented all three NCPS capabilities, as shown in table 6 below. However, agencies did not route through NCPS sensors all traffic traveling between their information systems and any information system other than an agency information system. For example, officials at 13 of 23 agencies stated that not all of their agency external network traffic flowed through NCPS. To illustrate, officials at one agency estimated that 20 percent of their external network traffic did not flow through the system. In addition, 4 of the agencies in our review previously cited several challenges in routing all of their traffic through NCPS intrusion detection sensors, including capacity limitations of the sensors, agreements with external business partners that use direct network connections, interagency network connections that do not route through Internet gateways, use of encrypted communications mechanisms, and backup network circuits that are not used regularly. NSD officials stated that agencies are responsible for routing their traffic to the intrusion detection sensors, and DHS does not have a role in that aspect of NCPS implementation.
As a result, potential cyberattacks may not be detected or prevented for a portion of the external traffic at federal agencies. As noted above, we previously recommended that DHS work with agencies to better ensure the complete routing of information to NCPS sensors. DHS Has Taken Steps to Provide Advanced Network Security Tools, but Has Not Met Planned Implementation Dates The Federal Cybersecurity Enhancement Act of 2015 requires DHS to include, in the efforts of the department to continuously diagnose and mitigate cybersecurity risks, advanced network security tools to improve the visibility of network activity and to detect and mitigate intrusions and anomalous activity. According to DHS officials, the department is addressing the requirement to improve the visibility of network activity by including advanced network security tools as a part of CDM phase 3. In April 2018, we testified that DHS had previously planned to provide 97 percent of federal agencies with the services they needed for CDM phase 3 in fiscal year 2017. In addition, according to OMB’s annual FISMA report for fiscal year 2017, the CDM program was to continue to incorporate additional capabilities, including phase 3, in fiscal year 2018. However, DHS now expects initial operational capabilities to be in place for phase 3 in fiscal year 2019. The department has awarded contracts of approximately $3.26 billion to support its Dynamic and Evolving Federal Enterprise Network Defense (also known as DEFEND) aspect of the CDM program, which is to include phase 3. DEFEND also is to provide coverage for existing agency deployments. According to DHS documentation, the task orders associated with DEFEND are to be issued between the second quarter of fiscal year 2018 and the second quarter of fiscal year 2024. 
Agencies Indicated the Need for Additional Training and Guidance Related to NCPS and CDM FISMA requires that DHS provide operational and technical assistance to agencies in implementing policies, principles, standards, and guidelines on information security. Toward this end, DHS has available training and guidance related to the implementation of the capabilities of NCPS (i.e., EINSTEIN) and CDM. Specifically: According to DHS officials, the department offers training and guidance to agencies on EINSTEIN 1 implementation. For example, DHS established a program in which the Software Engineering Institute will provide training and mentoring to agencies looking to enhance their understanding of, and proficiency with, the EINSTEIN 1 capability (e.g., network traffic information). NCPS program officials stated that agencies can use this service, which is available at no charge to them, on an unlimited basis as long as the requests relate to EINSTEIN 1. According to the officials, training and guidance related to EINSTEIN 2 and EINSTEIN 3 Accelerated is limited because DHS intentionally restricts the amount of data provided to agencies. According to DHS officials, the department also offers training and guidance to assist agencies with the implementation and use of resources associated with the CDM program, including webinars, guides, and computer-based training. The DEFEND contracts that the department awarded also include a mechanism for agencies to procure specialized tailored training, such as on the use of CDM tools. The department also offers customer advisory forums every other month that agencies are invited to attend. According to CDM program officials, the program's governance, among other topics, is commonly discussed during these forums. Further, the department provides agencies with guidance, such as various governance documents, best practices, and frequently asked questions, through a web portal that is made available by OMB.
In addition, US-CERT offers the CDM training program, which is to provide CDM implementation resources. Nevertheless, most agencies told us that they wanted DHS to provide more training and guidance as it relates to their implementation of the capabilities made available by NCPS and CDM. Specifically:

Officials from 16 of 23 agencies reported that they wanted to receive additional training on NCPS capabilities. For example, officials at 5 agencies stated that they would like to receive training related to using network traffic information, understanding alerts, or implementing capabilities for cloud services. The officials also stated that they wanted training specific to agency security personnel.

Officials from 19 of 23 agencies stated that they wanted to receive additional guidance related to NCPS's capabilities, but not all of the 19 provided specific details. For example, officials from at least 3 agencies stated that they wanted additional guidance such as "how to" documents, descriptions of architecture details, or guidance documents that explain NCPS's capabilities so that agencies can gauge the gap between the security that the system provides and the security being provided by their own agency's capabilities.

Officials from 21 of 23 agencies reported that they wanted to receive additional training on implementing CDM at their agencies. For example, officials from 7 agencies suggested that additional training on the use of the tools would be beneficial.

Officials from 22 of 23 agencies stated that they wanted additional guidance as it relates to CDM implementation. For example, officials from one agency stated that they would like examples of best practices and successful deployments.

These requests for additional training and guidance demonstrate that agencies are either unaware of the available training and guidance, or that the training may not meet their needs.
Until DHS coordinates with agencies to determine if additional training and guidance are needed, agencies may not be able to fully realize the benefits of the capabilities provided by the NCPS and CDM programs. OMB Took Actions to Oversee Agency Implementation of Intrusion Detection and Prevention Capabilities and Report to Congress, but Did Not Fully Complete Required Actions OMB Did Not Submit the Intrusion Assessment Plan to Congress or Fully Describe the Plan's Implementation in Other Reports Although OMB took steps to report on agencies' implementation of intrusion detection and prevention capabilities, it did not report on all required actions. For example, the office did not submit DHS's intrusion assessment plan to Congress as required by the Federal Cybersecurity Enhancement Act of 2015. In addition, OMB provided various reports to Congress that described agencies' intrusion detection and prevention capabilities, but the reports did not always include all information required by the act. Further, OMB developed a draft policy and strategy that were intended to improve agency capabilities, but it had not finalized these documents. The Federal Cybersecurity Enhancement Act of 2015 requires OMB to submit the intrusion assessment plan developed by DHS to the appropriate congressional committees no later than 6 months after the date of enactment of the act. The act also required OMB to submit to Congress a description of the implementation of the intrusion assessment plan and the findings of the intrusion assessments conducted pursuant to the intrusion assessment plan no later than 1 year after the date of enactment of the act, and annually thereafter. Although DHS developed and documented an intrusion assessment plan, which described a defense-in-depth approach to security, OMB did not submit the plan to Congress, as called for in the act.
Even though DHS submitted the plan to OMB in January 2017, OMB had not submitted it to Congress as of October 2018 (21 months after DHS submitted the plan and 28 months past the due date). On the other hand, OMB did submit its own reports to Congress that generally described elements of the implementation of DHS's intrusion assessment plan and intrusion assessment findings. In September 2017, OMB issued its analysis of agencies' implementation of intrusion detection and prevention capabilities, or more specifically, agencies' implementation of the various versions of NCPS. In addition, the office's annual FISMA report, issued most recently in March 2018, generally covered elements of the intrusion assessment plan. OMB personnel within the Office of the Federal CIO believed that these two reports, along with a process the office had initiated to validate incidents across the government, addressed the requirement for OMB to submit to Congress a description of the implementation of the intrusion assessment plan and the findings of the intrusion assessments conducted pursuant to the plan. However, the September 2017 and March 2018 reports did not address other elements described in DHS's intrusion assessment plan. For example, OMB did not describe agency roles associated with segmenting their networks, identifying key servers based on threat and impact, ensuring all applications are appropriately tracked and configured, and categorizing and tagging data based on threat and impact. While OMB has provided important information to congressional stakeholders through its own reports, until it submits the plan and addresses all elements described in DHS's intrusion assessment plan, it will continue to be remiss in providing timely and sufficiently detailed information regarding the intrusion assessment plan to congressional stakeholders to support their oversight responsibilities.
OMB Submitted Its Analysis of Agencies' Application of Intrusion Detection and Prevention Capabilities, but Did Not Include the Degree to Which the Capabilities Had Been Applied The Federal Cybersecurity Enhancement Act of 2015 also required that OMB submit an analysis of agencies' application of the intrusion detection and prevention capabilities to Congress no later than 18 months after the date of enactment of the act, and annually thereafter. OMB was to include in this analysis a list of federal agencies and the degree to which each agency had applied the intrusion detection and prevention capabilities. As discussed previously in this report, OMB issued its analysis of agencies' implementation of intrusion detection and prevention capabilities in September 2017. However, the analysis did not include the degree to which agencies had applied the intrusion detection and prevention capabilities. For example, the analysis did not reflect that not all agencies were using these capabilities on all information traveling between their systems and any system other than an agency system, as required by the act. Until OMB includes the degree to which agencies have applied intrusion detection and prevention capabilities in its analysis, it cannot provide congressional stakeholders with an accurate portrayal of the extent to which the capabilities are detecting and preventing potential intrusions. The Federal Chief Information Officer Reported on Intrusion Detection and Prevention Capabilities, but Did Not Address All Elements Required by the Federal Cybersecurity Enhancement Act of 2015 The Federal Cybersecurity Enhancement Act of 2015 further required that the Federal Chief Information Officer, within OMB, submit a report to Congress no earlier than 18 months after the date of enactment, but no later than 2 years after that date, assessing the intrusion detection and intrusion prevention capabilities that DHS made available to agencies.
The act required that the report address (1) the effectiveness of DHS’s system used for detecting, disrupting, and preventing cyber-threat actors, including advanced persistent threats, from accessing agency information and agency information systems; (2) whether the intrusion detection and prevention capabilities, continuous diagnostics and mitigation, and other systems deployed are effective in securing federal information systems; (3) the costs and benefits of the intrusion detection and prevention capabilities, including as compared to commercial technologies and tools, and including the value of classified cyber threat indicators; and (4) the capability of agencies to protect sensitive cyber threat indicators and defensive measures if they were shared through unclassified mechanisms for use in commercial technologies and tools. In a report issued in September 2018 (about 8 months past the required due date), the Federal Chief Information Officer provided Congress an assessment of intrusion detection and intrusion prevention capabilities across the federal enterprise. The report pointed out, among other things, that agencies did not possess or properly deploy capabilities to detect or prevent intrusions or minimize the impact of intrusions when they occur. In addition, the report acknowledged the need to improve the effectiveness of intrusion detection and intrusion prevention capabilities and stated that OMB would track performance through the CAP goal and annual FISMA reports. However, the report did not address all of the requirements specified in the act. For example, the report did not address whether DHS’s system (i.e., NCPS) was effective in detecting advanced persistent threats. In addition, the report did not include a comparison of the costs and benefits of the intrusion detection and prevention capabilities versus commercial technologies and tools, or the value of classified cyber threat indicators. 
Further, the report did not address the capability of agencies to protect sensitive cyber threat indicators and defensive measures. Until OMB updates the Federal CIO report to address all of the requirements specified in the act, it will continue to be remiss in providing timely and sufficiently detailed information, such as that related to costs and benefits, among other elements in the act, to congressional stakeholders to support their oversight responsibilities. OMB Initiated Plans for Improving Agencies’ Implementation of Intrusion Detection and Prevention Capabilities, but Has Not Completed a Policy and Strategy In addition to OMB’s responsibilities in the Federal Cybersecurity Enhancement Act of 2015, OMB has initiated plans for further improving agencies’ intrusion detection and prevention capabilities. In response to a tasking in Executive Order 13800, the Director of the American Technology Council coordinated the development of a report to the President from the Secretary of DHS, the Director of OMB, and the Administrator of the General Services Administration, regarding the modernization of federal information technology (IT). The report, Report to the President on Federal IT Modernization, identified actions that OMB should take for (1) prioritizing the modernization of high-risk, high-value assets and (2) modernizing the Trusted Internet Connection (TIC) program and NCPS to improve protections, remove barriers, and enable commercial cloud migration. For example, OMB was to take the following actions subsequent to the December 13, 2017 report issuance date: Within 60 days: Update a TIC policy to address challenges with agencies’ perimeter-based architectures, such as the modernization of NCPS. In addition, introduce a “90 day sprint” during which approved projects would pilot proposed changes in TIC requirements. 
Within 90 days: Update the annual FISMA and CAP goal metrics to focus on those critical capabilities that were most commonly lacking among agencies and focus oversight assessments on high-value assets. Within 120 days: In conjunction with DHS, develop a strategy for optimally realigning resources across agencies to reduce the risk to high-value assets and respond to cybersecurity incidents for those assets. OMB has taken steps toward implementing several, but not all, of these actions. For example, it introduced a “90 day sprint” and, according to knowledgeable OMB staff, the outcomes of this action are directly informing changes in TIC requirements. In addition, OMB updated the annual FISMA and CAP goal metrics by including several metrics that focus on high-value assets. The updated FISMA and CAP goal metrics went into effect in April 2018. However, while OMB had taken steps toward updating the TIC policy and developing a strategy for optimally realigning resources, the policy and strategy were in draft and had not yet been finalized as of October 2018. The agency did not specify a time frame for finalizing the policy and strategy. Until OMB finalizes the TIC policy and the strategy for optimally realigning resources, the enhancements offered through the policy and strategy are unlikely to be realized. Selected Agencies Had Not Consistently Implemented Capabilities to Detect and Prevent Intrusions FISMA requires agencies to provide information security protections to prevent unauthorized access to their systems and information. Officials from the 23 selected agencies reported to us that they generally took steps to meet this requirement by augmenting the tools and services provided by DHS with their own intrusion detection and prevention capabilities. However, agencies did not consistently implement five key capabilities specified by DHS and NIST guidance. 
In addition, most of the agencies did not fully implement any of the phases of DHS's CDM program, which is intended to improve their capabilities to detect and prevent intrusions. Few Agencies Had Fully Implemented Required Email Protections Binding Operational Directive (BOD) 18-01 instructs agencies to enhance email security. These enhancements include enabling encrypted email transmission, ensuring that receiving mail servers know what actions the agency would like taken when an email falsely claims to have originated from the agency, and removing certain insecure protocols, among others. The final deadline for implementing all BOD 18-01 requirements was October 16, 2018. Additionally, NIST SP 800-53 Revision 4 recommends that security awareness training include training on how to recognize and prevent spear-phishing attempts. As of September 2018, only 2 of the 23 agencies reported implementing all of the email requirements. For the remaining 21 agencies: 9 stated that they had plans to implement all enhancements by the October 2018 deadline, 1 was uncertain whether it would meet the deadline, and 11 stated they would not be able to meet the deadline. By contrast, the majority of agencies (22 of 23) reported that they had trained staff through spear-phishing exercises, as recommended by NIST SP 800-53 Revision 4. Officials at the remaining agency told us that the agency planned to have spear-phishing exercises in fiscal year 2019. Such training should help ensure that phishing will be a less effective attack vector against the majority of agencies. While agencies benefit from secure protocols and spear-phishing training, implementing the remaining BOD 18-01 email requirements would provide additional protection to agency information systems. Agencies Informed GAO That They Often Had Not Implemented Four Key Capabilities NIST recommends that federal agencies deploy intrusion detection and prevention capabilities.
These capabilities include monitoring cloud services, using host-based intrusion prevention systems, monitoring external and internal network traffic, and using a security information and event management (SIEM) system. However, in our semi-structured interviews with the 23 agencies, officials told us that they often had not implemented many of these capabilities. Such inconsistent implementation exposes federal systems and the information they contain to additional risk. As part of their continuing oversight efforts, OMB and DHS can use the information below to work with agencies to identify obstacles and impediments affecting the agencies' abilities to implement these capabilities. Less Than Half of the Selected Agencies That Used Cloud Services Monitored Their Cloud-Related Traffic NIST SP 800-53 Revision 4 states that agencies should monitor and control communications at the external boundary of the network. However, as of June 2018, fewer than half of the agencies that used cloud computing services were monitoring cloud traffic. Specifically: 10 of 22 agencies that used Infrastructure as a Service were monitoring inbound and outbound Infrastructure as a Service traffic, 7 of 21 agencies that used Platform as a Service were monitoring inbound and outbound Platform as a Service traffic, and 10 of 23 agencies that used Software as a Service were monitoring inbound and outbound Software as a Service traffic. Without monitoring traffic to and from cloud service providers, agencies face a greater risk that malicious cloud activity will detrimentally affect agency information security. Several Selected Agencies Had Not Fully Deployed Host-Based Capabilities NIST SP 800-53 Revision 4 states that agency internal monitoring may be achieved by utilizing intrusion prevention capabilities. These capabilities include using host-based intrusion prevention systems to provide defense at an individual system or device level by protecting against malicious activities.
Host-based capabilities include memory-based protection and application whitelisting. As of June 2018, officials at the 23 agencies reported the following to us: 16 agencies used host-based intrusion prevention capabilities, 15 agencies used memory-based protection, and 8 agencies used host-based application whitelisting. Until host-based intrusion protections are fully deployed, agencies will be at greater risk of malicious activity adversely affecting agency operations. Not All Selected Agencies Monitored External and Internal Traffic NIST SP 800-53 Revision 4 also states that agencies should monitor and control communications at the external boundary of the network and at key internal boundaries (e.g., network traffic). NIST guidance also states that an agency should deploy monitoring devices strategically within the network to collect essential information. However, the agencies in our review did not always monitor external and internal traffic. For example, of the 23 agencies: 5 reported that they were not monitoring inbound or outbound direct connections to outside entities. 11 reported that they were not persistently monitoring inbound encrypted traffic. 8 reported that they were not persistently monitoring outbound encrypted traffic. In addition, 13 agencies reported they were not using a network-based session capture solution. Of the 10 agencies that reported using this solution, officials from 2 agencies stated that they were not capturing workstation-to-workstation connections. Without thorough monitoring of external and internal traffic, agencies will have less assurance that they are aware of compromised or potentially compromised traffic within their network. Most Agencies Reported Using a Security Information and Event Management Capability, but Did Not Always Use this Capability to Analyze Potential Threats NIST SP 800-53 Revision 4 states that agencies should establish enhanced monitoring capabilities.
Such capabilities should include automated mechanisms that collect and analyze incident data for increased threat and situational awareness. According to NIST, a security information and event management (SIEM) system analyzes data from different sources and identifies and prioritizes significant events. Sources of data used by SIEM systems include logs from database systems, network devices, security systems, web applications, and workstation operating systems. Of the 23 agencies that we reviewed, 21 reported using a SIEM capability. Over half of the agencies employing a SIEM used one or more of their logs to match against known vulnerabilities and advanced persistent threats, as well as to create real-time alerts. For example, of the 21 agencies: 14 agencies reported collecting database logs, but only 7 agencies reported using the logs to match against known vulnerabilities and persistent threats and to create real-time alerts; 20 agencies reported collecting network logs, but only 13 agencies reported using them to match against known vulnerabilities and persistent threats and to create real-time alerts; 21 agencies reported collecting security logs, but only 13 reported using them to match against known vulnerabilities and persistent threats and to create real-time alerts; 15 agencies reported collecting web application logs, but only 9 agencies reported using them to match against known vulnerabilities and persistent threats and to create real-time alerts; and 13 agencies reported collecting workstation logs, but only 8 agencies reported using them to match against known vulnerabilities and persistent threats and to create real-time alerts. Only 5 agencies collected all 5 types of logs and used them to match against known vulnerabilities and persistent threats and to create real-time alerts. By not fully using SIEM capabilities, agencies will have less assurance that relevant personnel will be aware of possible weaknesses or intrusions.
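The log-correlation activity described above, in which collected logs are matched against known threat indicators to create real-time alerts, can be sketched in a few lines of Python. The indicator values, log records, and field names below are hypothetical and for illustration only; the SIEM products that agencies reported using implement far richer correlation and prioritization than this minimal example.

```python
# Minimal sketch of SIEM-style correlation: match collected log entries
# against sets of known threat indicators and raise alerts on matches.
# All indicator values and log records here are hypothetical examples
# (the IP address is from the 203.0.113.0/24 documentation range).

KNOWN_BAD_IPS = {"203.0.113.50"}
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

def correlate(logs):
    """Return an alert for each log entry that matches a known indicator."""
    alerts = []
    for entry in logs:
        if entry.get("src_ip") in KNOWN_BAD_IPS:
            alerts.append({"severity": "high",
                           "reason": "known bad IP",
                           "entry": entry})
        if entry.get("file_hash") in KNOWN_BAD_HASHES:
            alerts.append({"severity": "high",
                           "reason": "known bad file hash",
                           "entry": entry})
    return alerts

# Hypothetical log entries drawn from two of the five log types
# discussed above (network and workstation logs).
logs = [
    {"source": "network", "src_ip": "198.51.100.7"},       # benign
    {"source": "network", "src_ip": "203.0.113.50"},       # matches an indicator
    {"source": "workstation",
     "file_hash": "44d88612fea8a8f36de82e1278abb02f"},     # matches an indicator
]

for alert in correlate(logs):
    print(alert["severity"], alert["reason"])
```

In a real deployment, the indicator sets would be populated continuously from threat intelligence feeds, and the matching would run over all five log types discussed above (database, network, security, web application, and workstation logs), which is precisely the full use of SIEM capabilities that most agencies had not achieved.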
Agencies Are in the Process of Implementing DHS's CDM Program, but Most Agencies Have Not Fully Implemented Any of the Program Phases To further enhance their intrusion detection and prevention capabilities, the 23 civilian CFO Act agencies were in the process of implementing DHS's CDM program. As previously discussed, Phase 1 of the program involves deploying products to automate hardware and software asset management, configuration settings, and common vulnerability management capabilities. Phase 2 intends to address privilege management and infrastructure integrity by allowing agencies to monitor users on their networks and to detect whether users are engaging in unauthorized activity. Phase 3 is intended to assess agency network activity and identify any anomalies that may indicate a cybersecurity compromise. As of June 2018, most agencies had not fully implemented any of the three phases. As shown in Figure 7, 15 agencies had partially implemented phase 1, 21 had either partially implemented or not yet begun to implement phase 2, and none of the agencies had fully implemented phase 3. Agencies' implementation status has been affected, at least in part, by delays in DHS's deployment of the program phases. As a result, federal systems will remain at risk until the program is fully deployed. Conclusions Many agencies have not effectively implemented the federal approach and strategy for securing information systems. For example, the inspectors general for 17 of the 23 selected agencies reported that their agencies had not effectively implemented their information security programs and had significant information security deficiencies associated with internal control over financial reporting. In addition, CIOs for 17 agencies reported not meeting all nine targets for the cybersecurity cross-agency priority goal. Further, OMB determined that only 13 of the 23 agencies were managing risks to their overall enterprise, while the other 10 agencies were at risk.
Until agencies more effectively implement the government's approach and strategy, federal systems will remain at risk. DHS and OMB have initiatives underway that are intended to further improve agencies' security posture. However, although DHS had provided training and guidance for NCPS and CDM, agencies expressed the need for more. In addition, OMB had not finalized its policy and strategy aimed at addressing challenges with perimeter security and protecting high-value assets, respectively. OMB also had not provided useful information to Congress, such as a complete description of agencies' implementation of DHS's intrusion assessment plan, the degree to which agencies are using NCPS, or the costs and benefits of the intrusion detection and prevention capabilities compared to commercial tools. Although agencies' officials reported various efforts underway to enhance their agency's intrusion detection and prevention capabilities, implementation efforts across the federal government were not consistent. OMB and DHS can use the information provided in this report to work with agencies to identify obstacles and impediments affecting the agencies' abilities to implement these capabilities. Recommendations for Executive Action We are making a total of nine recommendations: two to DHS and seven to OMB. Specifically: The Secretary of DHS should direct the Network Security Deployment division to coordinate further with federal agencies to identify training and guidance needs for implementing NCPS and CDM. (Recommendation 1) The Secretary of DHS should direct the appropriate staff to work with OMB to follow up with agencies to identify obstacles and impediments affecting their abilities to implement intrusion detection and prevention capabilities. (Recommendation 2) The Director of OMB should submit the intrusion assessment plan to the appropriate congressional committees.
(Recommendation 3) The Director of OMB should report on implementation of the defense-in-depth strategy described in the intrusion assessment plan, including all elements described in the plan. (Recommendation 4) The Director of OMB should update the analysis of agencies' intrusion detection and prevention capabilities to include the degree to which agencies are using NCPS. (Recommendation 5) The Director of OMB should direct the Federal CIO to update her report to Congress to include required information, such as detecting advanced persistent threats, a comparison of the costs and benefits of the capabilities versus commercial technologies and tools, and the capability of agencies to protect sensitive cyber threat indicators and defensive measures. (Recommendation 6) The Director of OMB should establish a time frame for finalizing the Trusted Internet Connections policy intended to address challenges with agencies' perimeter-based architectures and issue it when finalized. (Recommendation 7) The Director of OMB should establish a time frame for finalizing the strategy for realigning resources across agencies to protect high-value assets and issue it when finalized. (Recommendation 8) The Director of OMB should direct the Federal CIO to work with DHS to follow up with agencies to identify obstacles and impediments affecting their abilities to implement intrusion detection and prevention capabilities. (Recommendation 9) Agency Comments and Our Evaluation We provided a draft of this report to OMB and the 23 civilian CFO Act agencies, including DHS, covered by our review. In response, OMB provided comments via email, and DHS and three other agencies (the Department of Commerce, Social Security Administration, and U.S. Agency for International Development) provided written comments, which are reprinted in appendices V through VIII, respectively.
The 19 remaining agencies (the Departments of Agriculture, Education, Energy, Health and Human Services, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs; as well as the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Office of Personnel Management, and Small Business Administration) stated via email that they had no comments. In its comments, which the OMB liaison provided to GAO via email on December 7, 2018, OMB did not state whether it agreed or disagreed with the seven recommendations that we made to it. Rather, according to the liaison, OMB agreed with the facts in our draft report, but found that the report did not reflect the agency’s rationale for not submitting the DHS intrusion assessment plan to Congress and a report on the implementation of the plan, as required by the Federal Cybersecurity Enhancement Act of 2015. The liaison stated that OMB is working closely with DHS to provide strategic direction in assessing gaps in, and modernizing, the manner in which intrusion detection and prevention capabilities are delivered to the federal government. Further, in a subsequent email on December 10, 2018, OMB said it believes the Federal CIO’s September 2018 report to Congress, along with data provided in OMB’s fiscal year 2017 FISMA report to Congress, achieves the outcomes sought by the Federal Cybersecurity Enhancement Act of 2015 and demonstrates OMB's continuous engagement with DHS across the evolution of the intrusion detection and prevention program. As stated in our report, we acknowledge that OMB has provided important information to congressional stakeholders through its reports. However, OMB’s reports did not cover all outcomes described in the act. 
For example, as we pointed out, these reports did not fully address implementation of the defense-in-depth strategy described in DHS’s intrusion assessment plan. In addition, although OMB reported on several elements required by the Federal Cybersecurity Enhancement Act of 2015, it did not report on all of the required elements. For example, the reports did not address whether DHS’s NCPS was effective in detecting advanced persistent threats. The reports also did not include a comparison of the costs and benefits of the intrusion detection and prevention capabilities versus commercial technologies and tools, or the value of classified cyber threat indicators. Further, the reports did not address the capability of agencies to protect sensitive cyber threat indicators and defensive measures. Accordingly, we maintain that our recommendations for OMB to report on required elements in the Federal Cybersecurity Enhancement Act of 2015 are warranted. In addition, OMB suggested that we revise our recommendations to the agency to include a shared responsibility with DHS to help drive desired outcomes. However, six of the seven recommendations we are making to OMB are related to specific OMB responsibilities cited in either the Federal Cybersecurity Enhancement Act of 2015 or the Report to the President on Federal IT Modernization. As such, we believe the recommendations are appropriately addressed to OMB. Furthermore, our recommendations do not prevent OMB from working with DHS to implement them. Our seventh recommendation to OMB—to work with DHS to follow up with agencies to identify obstacles and impediments affecting their abilities to implement intrusion detection and prevention capabilities—includes a shared responsibility with DHS. OMB also provided technical comments, which we incorporated into the report, as appropriate. 
Subsequent to providing initial comments on our draft report, OMB issued a memorandum intended to provide a strategy for realigning resources across agencies to protect high-value assets. This action addresses our recommendation 8, which called for the Director of OMB to establish a time frame for finalizing the strategy for realigning resources across agencies to protect high-value assets, and to issue the strategy when finalized. In its comments, DHS stated that it concurred with the two recommendations we made to the department. DHS stated that it expects to implement the recommendations in 2019. The Department of Commerce commented that the report was reasonable and that the department agreed with the findings and recommendations. In its comments, the Social Security Administration stated that protecting its networks and information is a critical priority. According to the agency, it continued to make improvements in fiscal year 2018, such as securing applications, leveraging the cloud, managing its assets and vulnerabilities, strengthening its network and incident response capabilities, improving its security training, and enhancing the overall effectiveness of its cybersecurity program. Finally, the U.S. Agency for International Development commented that its inspector general had improved the agency's capability maturity ratings for core security functions in fiscal year 2018. The agency also pointed out that it was the only selected agency for which all fiscal year 2017 indicators of effectiveness in implementing the federal approach and strategy for securing information systems were positive (as noted in Appendix III). We are sending copies of this report to appropriate congressional committees, the Director of OMB, the heads of the 23 civilian CFO Act agencies and their inspectors general, and other interested congressional parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology The Federal Cybersecurity Enhancement Act of 2015, which was enacted December 18, 2015, included a provision for GAO to report on the effectiveness of the federal government’s approach and strategy for securing agency information systems, including intrusion detection and prevention capabilities. The objectives of our review were to assess: (1) the reported effectiveness of selected agencies’ implementation of the federal government’s approach and strategy to securing agency information systems; (2) the extent to which the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) have facilitated the use of intrusion detection and prevention capabilities to secure federal agency information systems; and (3) the extent to which selected agencies reported implementing intrusion detection and prevention capabilities. Selected agencies for our review were the 23 civilian agencies covered by the Chief Financial Officers Act of 1990 (CFO Act). We did not include the Department of Defense because the Federal Cybersecurity Enhancement Act of 2015 only pertains to civilian agencies. Because we focused our work on the 23 civilian agencies, results from these reviews are not generalizable to the entire federal government. 
To assess the reported effectiveness of agencies' implementation of the federal government's approach and strategy to securing agency information systems, we described the federal government's approach and strategy by summarizing the Federal Information Security Modernization Act of 2014 (FISMA); Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure; and the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity framework). We assessed the reported effectiveness of agencies' implementation of the approach and strategy by reviewing annual reports from OMB and the inspectors general (IG) of the 23 civilian CFO Act agencies regarding the reported implementation of FISMA for fiscal year 2017. We described the IG-reported maturity levels, including the Office of Inspectors General FISMA Reporting Metrics definition of "effectiveness." These maturity levels are based on security domains aligned with the five core functions in NIST's cybersecurity framework. We also summarized IG-reported conclusions on the effectiveness of agencies' information security programs for fiscal year 2017. We reviewed the fiscal year 2016 and 2017 financial statement audit reports for each of the 23 civilian CFO Act agencies to identify the extent to which any significant deficiencies or material weaknesses related to information security over financial systems had been reported and to identify information security control weaknesses reported by the IGs. We identified whether agencies had met the targets for the cybersecurity-focused cross-agency priority goal for fiscal years 2016 and 2017 by examining agency-reported performance metrics for those years. Finally, we evaluated OMB's agency risk management assessment ratings to determine how agencies were managing risk to their enterprise.
These conclusions were based on FISMA metrics and are aligned with the five core security functions defined in the cybersecurity framework. We also interviewed knowledgeable OMB officials and staff to obtain their views on the reported effectiveness of the federal government's approach and strategy to securing agency information systems. To assess the extent to which OMB and DHS have facilitated the use of intrusion detection and prevention capabilities to secure federal agency information systems, we determined the extent to which OMB and DHS fulfilled their requirements described in the Federal Cybersecurity Enhancement Act of 2015 by collecting and reviewing artifacts from OMB and DHS and comparing them to the provisions outlined in the act. We also interviewed knowledgeable officials from OMB and DHS regarding their efforts to fulfill those requirements. We determined the effectiveness of corrective actions taken by DHS to address nine recommendations we made in a prior report related to NCPS. Specifically, we collected appropriate artifacts, assessed them against the criteria used in that report, and determined the extent to which the actions taken by DHS met the intent of the recommendations; we also met with DHS staff responsible for the remediation activities and obtained their views on the status of actions taken to address the recommendations. We held semi-structured interviews with knowledgeable officials from the 23 civilian CFO Act agencies. During these interviews, we obtained the agencies' views on whether they need more training and guidance from DHS for NCPS and CDM. We also interviewed knowledgeable officials and staff at DHS to obtain their views on how DHS had improved the intrusion detection and prevention capabilities it provides to federal agencies and on the training and guidance that the department makes available to agencies.
To assess the extent to which selected agencies reported implementing intrusion detection and prevention capabilities, we described the reported intrusion detection and prevention capabilities implemented by the 23 civilian CFO Act agencies by summarizing capability information obtained from the semi-structured interviews at those agencies described above. We also identified the extent to which the 23 agencies were in compliance with DHS's binding operational directive (BOD) pertaining to enhanced email and web security (BOD 18-01) by collecting and summarizing Cyber Hygiene Trustworthy Email reports from the 23 agencies and determining the extent to which the agencies had taken the required actions to implement the BOD. During the semi-structured interviews, we also obtained the agencies' views on and experiences with other programs and services provided by DHS, including the extent to which agencies had implemented the tools offered by the department's Continuous Diagnostics and Mitigation (CDM) program. To determine the reliability of submitted data and obtain clarification about agencies' processes for ensuring the accuracy and completeness of data used in their respective FISMA reports, we analyzed documents and conducted interviews with officials from 6 of the 23 civilian CFO Act agencies. To select these six agencies, we sorted agency fiscal year 2017 information technology budget data from highest to lowest amount and then divided the data into three tiers: high spending, medium spending, and low spending. We then randomly selected two agencies from each of the three tiers. The selected agencies were the Departments of Agriculture, Commerce, Housing and Urban Development, Transportation, and Veterans Affairs, and the U.S. Agency for International Development.
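The tiered selection described above amounts to a simple stratified random draw. A minimal sketch follows; the agency names and budget figures are placeholders, not the actual fiscal year 2017 data.

```python
import random

# Hypothetical (agency, FY2017 IT budget in $ millions) pairs -- placeholders only.
budgets = [("A", 9000), ("B", 6500), ("C", 4200), ("D", 2100),
           ("E", 1500), ("F", 900), ("G", 400), ("H", 150), ("I", 80)]

def select_agencies(budgets, tiers=3, per_tier=2, seed=0):
    """Sort by budget (high to low), split into equal tiers, draw per_tier from each."""
    ranked = sorted(budgets, key=lambda x: x[1], reverse=True)
    size = len(ranked) // tiers
    rng = random.Random(seed)
    picks = []
    for t in range(tiers):
        # The last tier absorbs any remainder when the list does not divide evenly.
        tier = ranked[t * size:(t + 1) * size] if t < tiers - 1 else ranked[t * size:]
        picks.extend(rng.sample(tier, per_tier))
    return [name for name, _ in picks]

selected = select_agencies(budgets)
print(selected)  # two agencies drawn from each of the high, medium, and low tiers
```

The seed is fixed only to make the sketch reproducible; an actual selection would use a documented random procedure.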
While not generalizable to all agencies, the information we collected and analyzed about the six selected agencies provided insights into various processes in place to produce FISMA reports. Based on this assessment, we determined that the data were sufficiently reliable for the purposes of our reporting objectives. We conducted this performance audit from December 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Cybersecurity Framework The National Institute of Standards and Technology established the cybersecurity framework to provide guidance for cybersecurity activities within the private sector and government agencies at all levels. The cybersecurity framework consists of five core functions: identify, protect, detect, respond, and recover. Within the five functions are 23 categories and 108 subcategories that define discrete outcomes for each function, as described in table 7. Appendix III: Reported Effectiveness of Agencies’ Implementation of the Federal Approach for Securing Information Systems The federal approach and strategy for securing information systems is prescribed by federal law and policy, including the Federal Information Security Modernization Act of 2014 and the presidential executive order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. Accordingly, federal reports describing agency implementation of this law and policy, and reports of related agency information security activities, indicated the effectiveness of agencies’ efforts to implement the federal approach and strategy. 
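The cybersecurity framework's structure described in appendix II can be sketched as a nested mapping from core functions to categories. The sample below is partial and illustrative (category names abbreviated), not the full 23-category, 108-subcategory taxonomy of table 7.

```python
# Partial, illustrative sketch: the five core functions mapped to a few of
# their categories. The full framework defines 23 categories and 108
# subcategories; only a sample is shown here.
CSF = {
    "identify": ["Asset Management", "Risk Assessment"],
    "protect":  ["Access Control", "Data Security"],
    "detect":   ["Anomalies and Events", "Security Continuous Monitoring"],
    "respond":  ["Response Planning", "Mitigation"],
    "recover":  ["Recovery Planning", "Improvements"],
}

def categories_for(function):
    """Return the sampled categories for one of the five core functions."""
    return CSF[function.lower()]

print(list(CSF))  # the five core functions, in the framework's order
```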
Table 8 summarizes the reported effectiveness of the 23 civilian Chief Financial Officers Act of 1990 agencies in implementing the government's approach and strategy to securing information systems.

Appendix IV: Updated Cybersecurity-Focused Cross-Agency Priority Goal

The President's Management Agenda identifies cross-agency priority (CAP) goals to target areas where multiple agencies must collaborate to effect change. The agenda issued in fiscal year 2018 established an information technology modernization goal that includes a cybersecurity objective with specific priority areas and performance indicators. This cybersecurity-focused goal is intended to drive progress in the government's efforts to modernize information technology to increase productivity and security. Figure 8 describes the 3 updated cybersecurity-focused cross-agency priority areas and 10 performance indicators. Each federal agency is expected to meet one of the 10 new performance indicators by the end of fiscal year 2018 and the remainder by 2020.

Appendix V: Comments from the Department of Homeland Security

Appendix VI: Comments from the Department of Commerce

Appendix VII: Comments from the Social Security Administration

Appendix VIII: Comments from the U.S. Agency for International Development

Appendix IX: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the individual named above, Jeffrey Knott (assistant director), Daniel Swartz (analyst-in-charge), David Blanding, Chris Businsky, Kristi Dorsey, Di'Mond Spencer, Priscilla Smith, and Edward Varty made key contributions to this report. West Coile, Franklin Jackson, and Chris Warweg also provided assistance.
Why GAO Did This Study Federal agencies are dependent on information systems to carry out operations. The risks to these systems are increasing as security threats evolve and become more sophisticated. To reduce the risk of a successful cyberattack, agencies can deploy intrusion detection and prevention capabilities on their networks and systems. GAO first designated federal information security as a government-wide high-risk area in 1997. In 2015, GAO expanded this area to include protecting the privacy of personally identifiable information. Most recently, in September 2018, GAO updated the area to identify 10 critical actions that the federal government and other entities need to take to address major cybersecurity challenges. The federal approach and strategy for securing information systems is grounded in the provisions of the Federal Information Security Modernization Act of 2014 and Executive Order 13800. The act requires agencies to develop, document, and implement an agency-wide program to secure their information systems. The Executive Order, issued in May 2017, directs agencies to use the National Institute of Standards and Technology's cybersecurity framework to manage cybersecurity risks. The Federal Cybersecurity Enhancement Act of 2015 contained a provision for GAO to report on the effectiveness of the government's approach and strategy for securing its systems. GAO determined (1) the reported effectiveness of agencies' implementation of the government's approach and strategy; (2) the extent to which DHS and OMB have taken steps to facilitate the use of intrusion detection and prevention capabilities to secure federal systems; and (3) the extent to which agencies reported implementing capabilities to detect and prevent intrusions. To address these objectives, GAO analyzed OMB reports related to agencies' information security practices including OMB's annual report to Congress for fiscal year 2017. 
GAO also analyzed and summarized agency-reported security performance metrics and IG-reported information for the 23 civilian CFO Act agencies. In addition, GAO evaluated plans, reports, and other documents related to DHS intrusion detection and prevention programs, and interviewed OMB, DHS, and agency officials. What GAO Found The 23 civilian agencies covered by the Chief Financial Officers Act of 1990 (CFO Act) have often not effectively implemented the federal government's approach and strategy for securing information systems (see figure below). Until agencies more effectively implement the government's approach and strategy, federal systems will remain at risk. To illustrate: As required by Office of Management and Budget (OMB), inspectors general (IGs) evaluated the maturity of their agencies' information security programs using performance measures associated with the five core security functions—identify, protect, detect, respond, and recover. The IGs at 17 of the 23 agencies reported that their agencies' programs were not effectively implemented. IGs also evaluated information security controls as part of the annual audit of their agencies' financial statements, identifying material weaknesses or significant deficiencies in internal controls for financial reporting at 17 of the 23 civilian CFO Act agencies. Chief information officers (CIOs) for 17 of the 23 agencies reported not meeting all elements of the government's cybersecurity cross-agency priority goal. The goal was intended to improve cybersecurity performance through, among other things, maintaining ongoing awareness of information security, vulnerabilities, and threats; and implementing technologies and processes that reduce malware risk. Executive Order 13800 directed OMB, in coordination with the Department of Homeland Security (DHS), to assess and report on the sufficiency and appropriateness of federal agencies' processes for managing cybersecurity risks. 
Using performance measures for each of the five core security functions, OMB determined that 13 of the 23 agencies were managing overall enterprise risks, while the other 10 agencies were at risk. In assessing agency risk by core security function, OMB identified a few agencies to be at high risk (see figure at the top of next page). DHS and OMB facilitated the use of intrusion detection and prevention capabilities to secure federal agency systems, but further work remains. For example, in response to prior GAO recommendations, DHS had improved the capabilities of the National Cybersecurity Protection System (NCPS), which is intended to detect and prevent malicious traffic from entering agencies' computer networks. However, the system still had limitations, such as not having the capability to scan encrypted traffic. The department was also in the process of enhancing the capabilities of federal agencies to automate network monitoring for malicious activity through its Continuous Diagnostics and Mitigation (CDM) program. However, the program was running behind schedule, and officials at most agencies indicated the need for additional training and guidance. Further, the Federal CIO issued a mandated report assessing agencies' intrusion detection and prevention capabilities, but the report did not address required information, such as the capability of NCPS to detect advanced persistent threats and a cost-benefit comparison of its capabilities with commercial technologies and tools. Selected agencies had not consistently implemented capabilities to detect and prevent intrusions into their computer networks. Specifically, the agencies told GAO that they had not fully implemented required actions for protecting email, cloud services, host-based systems, and network traffic from malicious activity. For example, 21 of 23 agencies had not, as of September 2018, sufficiently enhanced email protection through implementation of DHS's directive on enhanced email security.
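Much of BOD 18-01's email guidance is expressed through published DNS records, such as a domain's DMARC policy, which the directive ultimately required to reach "reject." The following is a minimal sketch of the kind of record check involved, not DHS's actual Cyber Hygiene tooling; the record strings and helper names are ours.

```python
def dmarc_policy(txt_record):
    """Return the value of the p= tag from a DMARC TXT record, or None."""
    if not txt_record.strip().lower().startswith("v=dmarc1"):
        return None  # not a DMARC record at all
    for tag in txt_record.split(";"):
        name, _, value = tag.strip().partition("=")
        if name.lower() == "p":
            return value.strip().lower()
    return None  # DMARC record with no policy tag

def meets_bod_18_01_dmarc(txt_record):
    """BOD 18-01 ultimately required agency domains to publish p=reject."""
    return dmarc_policy(txt_record) == "reject"

print(meets_bod_18_01_dmarc("v=DMARC1; p=reject; rua=mailto:reports@example.gov"))  # True
print(meets_bod_18_01_dmarc("v=DMARC1; p=none"))                                    # False
```

A real compliance scan would also retrieve the record over DNS and evaluate the directive's other requirements (SPF, STARTTLS, HTTPS, and HSTS), which are omitted here.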
In addition, less than half of the agencies that use cloud services reported monitoring those services. Further, most of the 23 selected agencies had not fully implemented the tools and services available through the first two phases of DHS's CDM program. Until agencies more thoroughly implement capabilities to detect and prevent intrusions, federal systems and the information they process will remain vulnerable to malicious threats. What GAO Recommends GAO is making two recommendations to DHS to, among other things, coordinate with agencies to identify additional needs for training and guidance. GAO is also making seven recommendations to OMB to, among other things, direct the Federal CIO to update the mandated report with required information, such as the capability to detect advanced persistent threats. DHS concurred with GAO's recommendations. OMB did not indicate whether it concurred with the recommendations.
Background Unmanned Systems Unmanned systems provide DOD with capabilities for conducting a range of military operations, including environmental sensing and battlespace awareness; chemical, biological, radiological, and nuclear detection; counter-improvised explosive device capabilities; port security; precision targeting; and precision strike. DOD's unmanned systems operate in different warfighting "domains" spanning air, land, and maritime environments. As shown in figure 1, DOD categorizes its unmanned systems into five groups by domain (i.e., aerial and maritime, including surface and underwater) and other attributes of size and capability. Group 1 UASs weigh less than 20 pounds and operate below 1,200 feet in altitude, whereas group 5 UASs weigh more than 1,320 pounds and operate above 18,000 feet. Similarly, USVs are categorized in five groups, increasing in size and capability from very small to extra-large, and UUVs are categorized in four groups—small, medium, large, and extra-large. Organizational Roles and Responsibilities for Evaluating Workforces Various offices within the Office of the Secretary of Defense and the Department of the Navy have roles and responsibilities for evaluating the appropriate mix of personnel for the Navy's and the Marine Corps' total workforces. According to Section 129a of Title 10 of the U.S. Code, which governs DOD's general policy for total force management, the Secretary of Defense is required to establish policies and procedures for determining the most appropriate and cost-efficient mix of military, federal civilian, and contractor personnel to perform the missions of the department. Section 2463 of Title 10 requires the Under Secretary of Defense for Personnel and Readiness (USD(P&R)) to devise and implement guidelines and procedures to ensure consideration is given to using DOD civilian employees to perform new functions and functions that are performed by contractors and could be performed by civilian employees.
DOD policies also establish roles and responsibilities for the USD(P&R): DOD Directive 1100.4 establishes departmental policy concerning workforce management, including multiple responsibilities for the USD(P&R) (e.g., reviewing the workforce management guidelines and practices of DOD components for compliance with established policies and guidance). DOD Instruction 1100.22 implements policy set forth under DOD Directive 1100.4; assigns responsibilities; and prescribes procedures for determining the appropriate mix of military, federal civilian, and contractor personnel. The instruction assigns to the USD(P&R) the responsibility for overseeing the instruction’s implementation and working with component heads to ensure that they establish policies and procedures consistent with this instruction. DOD Instruction 7041.04 states that DOD’s USD(P&R), the Comptroller, and the Director of Cost Assessment and Program Evaluation are responsible for developing a DOD-wide cost model for estimating and comparing the full costs of DOD workforce and contract support. Section 129a of title 10 of the U.S. Code directs the Secretary of Defense to delegate responsibility for the implementation of policies and procedures established by the Secretary to the Secretaries of the military departments. In accordance with this delegation, the Secretary of the Navy has overall responsibility for requirements determination, planning, programming, and budgeting for policies and procedures for determining the appropriate and cost-effective mix of personnel. 
DOD policies establish the following roles and responsibilities for the military department Secretaries, including the Secretary of the Navy and heads of other DOD components: DOD Directive 1100.4 requires the component heads to designate an individual with full authority for workforce management, to include responsibility for, among other things, developing annual personnel requests to Congress considering the advantages of converting from one form of support (active or reserve military servicemembers, federal civilians, or private sector contractors) to another for the performance of a specified function, consistent with section 129a of the U.S. Code. DOD Instruction 1100.22 establishes that the component heads should require that their designated workforce authority issue implementing guidance requiring the use of the instruction when determining workforce mix for current, new, or expanded missions. Secretary of the Navy Instruction 5430.7R assigns authority for workforce management in the Department of the Navy, including workforce mix issues, to the Assistant Secretary of the Navy for Manpower and Reserve Affairs. Navy and Marine Corps Processes for Determining and Staffing Personnel Requirements Concurrently with a weapon system’s development through DOD’s acquisition process, the Navy and the Marine Corps determine the numbers and types of personnel and skills required for their unmanned systems. The personnel requirements development process generally begins with the program manager from a Navy systems command (e.g., Naval Air Systems Command for Navy and Marine Corps aircraft and Naval Sea Systems Command for ships and submarines) that is responsible for supervising the management of assigned acquisition programs. The program manager and systems command utilize Navy policies and other inputs to formulate initial requirements. 
In doing so, the program manager coordinates any Navy personnel requirements with the Office of the Chief of Naval Operations and other entities such as the Navy Personnel Command and commands that will operate and maintain the systems, such as the U.S. Fleet Forces Command and the Commander, Naval Air Forces. For Marine Corps aircraft systems, the program manager from the Naval Air Systems Command coordinates with Marine Corps headquarters entities, such as the Deputy Commandant for Aviation and the Deputy Commandant for Combat Development and Integration. The program manager and systems command calculate the cost of personnel as part of a system's total life cycle cost. The program manager validates personnel requirements as program changes dictate and, at a minimum, annually over a system's life cycle. The Navy and the Marine Corps staff the units that will operate and maintain their unmanned systems by filling the required positions to the extent possible, based on the number of positions funded and the number of trained and qualified personnel available to fill them. This staffing process is managed in the Navy by the Navy Personnel Command and in the Marine Corps by the Deputy Commandant for Manpower and Reserve Affairs. The Navy and the Marine Corps Have Not Evaluated Using Federal Civilian Employees and Private Sector Contractors as Workforce Alternatives for Unmanned System Operators The Navy and the Marine Corps are in the process of rapidly growing their portfolios of unmanned systems but have not evaluated the use of alternative workforces—specifically, the use of federal civilian employees and private sector contractors as unmanned system operators. DOD Directive 1100.4 states that authorities should consider all available sources when determining workforce mix, including federal civilians and contractors, and that personnel shall be designated as federal civilians except in enumerated circumstances.
According to DOD Instruction 1100.22, the initial steps in planning for personnel requirements include determining categories of eligible personnel (e.g., military servicemembers, federal civilian employees, or private sector contractors). These determinations are based on whether activities to be performed are "military essential" (the activity must be performed by a military servicemember), "inherently governmental" (the activity could be performed by a military servicemember or a federal civilian employee), or "commercial" (the activity could be performed by military servicemembers, federal civilians, or private sector contractors). Military servicemembers and federal civilians must be considered before the services may consider using contractors to perform a function. In the absence of workforce alternative analyses, the services have decided to rely solely on military servicemembers as operator workforces for all of their unmanned systems, including the eight systems we reviewed in detail. For all eight case studies, Navy and Marine Corps officials told us that their decisions to rely on servicemembers as operators were based on the pre-existing force structure made up of personnel who were already trained in related mission areas. For seven of the eight selected systems, the officials stated that they did not evaluate the use of federal civilians or contractors in their determinations to use military personnel for their operator workforces. In the case of the eighth system, the MQ-4 Triton UAS, the Navy evaluated using contractor personnel, but did so without first considering the use of federal civilian employees as DOD policy requires. In a 2009 analysis for the Triton, the Navy concluded that comparing the cost-effectiveness of military personnel and federal civilian employees was beyond the expertise of the working group that performed the analysis. Ultimately, the Navy decided to use military personnel as Triton operators.
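The designation-to-eligibility rules of DOD Instruction 1100.22 summarized above amount to a small mapping from activity type to allowable personnel categories. A sketch follows; the designation strings are shorthand, not the instruction's exact taxonomy.

```python
# Shorthand mapping of activity designations to eligible personnel categories,
# per the summary of DOD Instruction 1100.22 above.
ELIGIBLE = {
    "military essential":      ["military"],
    "inherently governmental": ["military", "federal civilian"],
    "commercial":              ["military", "federal civilian", "contractor"],
}

def eligible_categories(designation):
    """Personnel categories that may perform an activity with this designation.

    Contractors appear last: military servicemembers and federal civilians
    must be considered before the services may consider using contractors.
    """
    return ELIGIBLE[designation]

print(eligible_categories("inherently governmental"))  # ['military', 'federal civilian']
```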
According to senior-level officials from OUSD(P&R), there are concerns within the department about the level of consideration the military services have applied to workforce mix alternatives for unmanned system operators. As a result, OUSD(P&R) and other entities from the Office of the Secretary of Defense commissioned the Institute for Defense Analyses to conduct a study, which was published in June 2016, on alternative staffing strategies to enable DOD to accomplish UAS-related missions more cost-effectively. The study found that staffing alternatives exist for each service and could produce cost savings. According to the Institute for Defense Analyses' report, the use of enlisted personnel for a portion of the Navy's and the Air Force's UAS operator workforces offers the potential for savings, as could the use of limited duty officers or warrant officers. The Institute for Defense Analyses also reported that federal civilian employees of DOD could generate the most substantial savings of the options studied if they were used in combination with military servicemembers as UAS operators responsible for the launch and recovery of air vehicles. OUSD(P&R) officials stated that this latter approach would free up military servicemembers to fill key positions supporting military readiness in other areas of operations deemed military essential and would better leverage the services' limited military personnel end strengths. In September 2016, OUSD(P&R) issued a proposal for an additional study of UAS staffing options that stated that the Department of the Navy's workforce mix determination (i.e., relying on military servicemembers as operators) is "immature and infeasible" and that any recommended approaches should also be applied to unmanned maritime systems.
OUSD(P&R) has also commissioned a study to clarify circumstances in which military servicemembers should be considered essential for certain positions, which is expected to be complete by the end of fiscal year 2018. OUSD(P&R) officials stated that they plan to continue their efforts to expand awareness of these studies and of the available workforce mix alternatives for UAS operators with military service officials. On the basis of our discussions with Navy and Marine Corps workforce planners, key reasons for not evaluating workforce alternatives for unmanned system operators were that planners did not believe it was necessary, and they did not believe that federal civilian employees or private sector contractors were viable workforce alternatives to military servicemembers for such roles and functions. For example, officials cited concerns that federal civilians cannot serve aboard Navy ships or provide rapid deployment capability. However, officials from OUSD(P&R) told us that these concerns are inaccurate, noting that federal civilian employees have deployed on Navy ships. Further, we note that DOD’s Expeditionary Civilian Workforce comprises federal civilian employees across DOD components who are available to deploy within 120 days of notice to meet urgent requirements. DOD officials responsible for the Expeditionary Civilian Workforce program stated that such personnel are intended to be predictable, reliable, and effective so that the military services will source them and the combatant commands can depend upon them. Further, service workforce planners stated that relevant service-level guidance is unclear on when and how such personnel can and should be considered for performing in operational roles and in deployable positions. 
The Navy’s and the Marine Corps’ policies do not provide details about the types of operational roles specific to a service, including those related to unmanned system operators, that could be filled with federal civilians or private sector contractors, nor do the policies provide guidance on the limitations and benefits of using these personnel sources, such as those identified in DOD-commissioned reports and our prior work. For example, military personnel can be the most costly of the three personnel categories and shortages exist in certain functions that have been deemed military essential and are in high demand, such as fighter pilots. On the other hand, federal civilians and private sector contractors can be cost-effective and may augment military servicemembers on a short-term basis if needed (see table 1). Federal internal controls standards emphasize the importance of having clear, updated policies that align with an organization’s mission and goals. Officials from the Office of the Secretary of the Navy for Manpower and Reserve Affairs agreed that the cited service policies do not provide the sort of detail and clarity that could aid planners and decision makers with determining eligible personnel categories for their workforces and weighing the benefits and limitations thereof. Clarifying their respective workforce planning policies could help workforce planners better understand when, where, and how federal civilians or contractors may serve in operational roles (e.g., from shore or from underway naval vessels) and what the benefits and limitations are. The use of military servicemembers, and not federal civilians or private sector contractors, as unmanned system operators may indeed be the most appropriate and cost-effective workforce option for the Navy and the Marine Corps. 
However, the services will not have certainty about the basis for such decisions without first clarifying workforce planning policies and then applying the revised policies to evaluate the use of all personnel resources available to them for future unmanned systems. The Navy and the Marine Corps Have Not Fully Developed Personnel Requirements for One of Eight Selected Unmanned Systems or Updated Related Policies and Goals The Navy and the Marine Corps have efforts underway to develop requirements for operators, maintainers, and other support personnel needed for selected unmanned systems. According to Navy information, personnel requirements for three systems are sufficient, and the sufficiency of requirements for four other systems has yet to be determined. However, the Navy and the Marine Corps have not updated personnel requirements and the related cost estimate for the RQ-21 Blackjack UAS based on deployment data. Furthermore, the Department of the Navy has not fully evaluated and updated policies or clarified goals that may inform future personnel requirements development and updates to requirements. The Navy and the Marine Corps Developed Personnel Requirements for Selected Unmanned Systems but Have Not Updated the RQ-21 Blackjack UAS Requirements and Cost Estimate The Navy and the Marine Corps have efforts underway to develop requirements for operators, maintainers, and other support personnel needed for selected unmanned systems, commensurate with each system's maturity in DOD's acquisition process. The USVs associated with the littoral combat ships, the Snakehead Large Displacement UUV, and the MQ-25 Stingray UAS are in earlier phases of both acquisition and personnel requirements development and, according to Navy information, the precise number of required personnel will be determined and updated as the systems progress through acquisition.
On the other hand, the MK 18 UUVs, MQ-8 Fire Scout UAS, MQ-4 Triton UAS, and RQ-21 Blackjack UAS have matured the furthest through DOD’s acquisition process. The Navy and the Marine Corps have identified personnel requirements, and service officials told us they have reviewed their sufficiency as units have trained and deployed with the systems. Although future modifications to personnel requirements for the MK 18 UUVs, the MQ-8 Fire Scout, and the MQ-4 Triton may be needed as their inventories and the pace of deployments increase, Navy officials told us the numbers of operators are appropriate at this time to meet mission objectives based on available deployment data and feedback from operators. For the RQ-21 Blackjack UAS, however, Navy and Marine Corps headquarters and command entities disagree with unit-level officials about the sufficiency of the personnel requirements. Marine Corps UAS squadrons have identified a requirements shortfall of 13 to 21 personnel per detachment to support each RQ-21 Blackjack UAS. The UAS squadrons have established that a total of 22 personnel are necessary to form a detachment sufficiently sized to support operations with the UAS. Marine Corps unit-level officials told us that this personnel requirement is based on the numbers needed to conduct training and deployments since the first Blackjack system was delivered in 2015, for which 22 to 30 personnel have been needed per detachment to meet mission requirements. In contrast, higher level command and service headquarters entities in the Navy and the Marine Corps have established a requirement of nine Marine Corps personnel per detachment, including three enlisted UAS operators and one UAS officer along with maintenance and support personnel. 
In their written rebuttal of the 9-person requirement, squadron officials stated to the Navy and the Marine Corps that 13 more personnel are needed to support operations for 10 to 12 hours per day, or up to 24 hours a day for 10-day surges in operations, and to comply with naval aviation maintenance procedures. Marine Corps officials also told us that the squadrons believe these additional personnel are essential for supporting the workload and the levels of supervision necessary to operate and maintain an RQ-21 Blackjack UAS and to avoid mishaps and damage to the aircraft during recovery. DOD policy directs that personnel requirements should be driven by workload and established at the minimum levels necessary to accomplish mission and performance objectives. In addition, according to a Navy instruction, personnel requirements must be validated as program changes dictate, and at a minimum annually, over a system's life cycle to determine if a personnel update is required. The Navy instruction also identifies guidelines for average weekly working hours and personnel availability for different tasks, which are key elements in the calculation of personnel requirements. The instruction states that routinely exceeding these guidelines to meet workloads should be avoided because doing so can adversely affect unit morale, retention, and safety. With respect to the RQ-21 Blackjack UAS, Marine Corps officials stated that the concept of operations has changed for the service's vision of employing the system to support Marine Expeditionary Units and that the 9-person detachment requirement was based on the outdated concept of operations. As a result, Marine Corps officials told us that the personnel requirements for the squadrons that operate them are too low to support the workloads associated with the systems, and service headquarters-level decision makers have not yet updated them based on the most current and enduring concept of operations for the system.
Marine Corps officials stated that efforts are underway to review the differences in personnel requirements deemed necessary by squadrons and headquarters-level entities as training and deployments continue, which is a positive step. However, according to the program office, the personnel requirements had not been changed as of the time of this report. Until the Navy and the Marine Corps update the personnel requirements for the RQ-21 Blackjack based on the most current and enduring concept of operations and deployment data, the services will lack current information about the number of operators needed for the squadrons that operate the RQ-21 Blackjack. In addition, the Navy and the Marine Corps have not updated the life cycle cost estimate for the RQ-21 Blackjack UAS to include additional personnel that Marine Corps squadrons have needed for current operations and expect to need for future operations and deployments. The program office estimated the total Marine Corps personnel cost for the RQ-21 Blackjack based on detachments of 9 personnel each at approximately $371 million over the program's expected 19-year life cycle—nearly 20 percent of the Marine Corps' life cycle cost for the program. However, this estimate may be too low because Marine Corps squadrons have reported that they need up to 21 more personnel per detachment to support the workload associated with the system, as discussed previously. DOD guidance requires that components determine a weapon system program's life cycle cost by planning for the many factors needed to support the system, including personnel. Decision makers use this information to determine whether a new program is affordable and the program's projected funding and personnel requirements are achievable.
In addition, the Office of Management and Budget's Capital Programming Guide indicates that to keep the cost analyses for capital assets, such as weapon systems, current, accurate, and valid, cost estimating should be continuously updated based on the latest information available as programs mature. The Navy and the Marine Corps have updated the life cycle cost estimate for the RQ-21 Blackjack to account for changing assumptions, such as the expected usage rate of spare parts for system repairs, but not for additional Marine Corps personnel that squadrons have reportedly needed for deployments. Without updating the cost estimate as appropriate after updating personnel requirements, the Navy and the Marine Corps may not have current information about the Marine Corps' RQ-21 Blackjack UAS life cycle cost and affordability.

The Department of the Navy Has Made Positive Steps but Has Not Fully Evaluated and Updated Policies or Clarified Goals for Informing Future Personnel Requirements

The Navy Has Modified Some UAS Policies but Has Not Fully Evaluated and Updated Policies to Inform Future Personnel Requirements

The Department of the Navy has made some positive steps but has not fully evaluated and updated its aviation policies for operation and maintenance of certain UAS to inform the development of future personnel requirements. According to officials from the Navy Manpower Analysis Center, correctly determining personnel workload and the related numbers of personnel required for operation and maintenance is especially critical for UAS units because of the safety risks associated with operating in shared airspaces and over populated areas. These officials also stated that naval aviation policies—which apply to manned aircraft and UAS—affect the workload of operators and maintenance personnel and the numbers required to achieve a squadron's mission and meet the standards prescribed in the policies.
For example, the Naval Air Training and Operating Procedures Standardization manual contains provisions for pilot fatigue and hours they can fly compared with the hours they must rest. Further, the Naval Aviation Maintenance Program instruction prescribes standards for performing and documenting quality assurance steps for maintenance tasks, among other things. Our review of these selected policies found that some naval aviation standards have been modified to account for UAS separately from manned aircraft, and to some extent between UAS of different sizes and capabilities. The Naval Air Training and Operating Procedures Standardization manual was updated in 2016 with a new chapter for UAS policies and operations. The Naval Aviation Maintenance Program instruction has been updated to specify that UAS of groups 3, 4, and 5 will always be governed by the policy similar to manned aircraft, with a few exceptions, such as compass calibration. Notwithstanding these updates, Marine Corps headquarters- and unit-level officials told us that the policies have not been fully reviewed and updated to account for differences in UAS of varying sizes and capabilities, especially group 3 UAS, which are those systems weighing 55 to 1,320 pounds. According to these officials, applying certain procedures and standards from these policies equally across different sizes of UAS is problematic for group 3 UAS in particular, which includes the RQ-21 Blackjack. The officials stated that the application of such standards affects workloads and personnel levels in a way that prevents squadrons from accomplishing their missions as efficiently as possible. Specifically, they stated that upholding current naval aviation standards is one key reason—the other being changes to the concept of operations for the RQ-21 Blackjack—for having staffed up to 21 more personnel per RQ-21 Blackjack detachment than the 9-person requirement discussed earlier in this report.
Applying naval aviation operating and maintenance standards equally across different sizes of UAS may not align with the Marine Corps’ concept of operations, which states that all UAS are intended to be recovered by landing or capture even though they may be expendable. Each RQ-21 Blackjack system includes five air vehicles, more than one of which could be unavailable for assigned missions at the same time. For example, Marine Corps officials told us that damage to RQ-21 Blackjack air vehicles can be caused by weather, a deficiency with the air vehicle itself, a crash landing, or a combination of factors, and up to three air vehicles could be unavailable at a time. These officials told us that holding the RQ-21 Blackjack to maintenance standards designed for other non-expendable aircraft may not be efficient because their application has a limited effect on mishap rates relative to the additional personnel needed to uphold the standards. Moreover, in discussion groups we held with Marine Corps UAS operator personnel, operators mentioned that mishap investigations performed to existing standards sideline operators from training pending the investigation’s outcome. Such standards also apply to the Navy’s larger, non-expendable UAS like the MQ-8 Fire Scout and the MQ-4 Triton. According to DOD Directive 1100.4, existing policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of personnel resources. Further, federal internal controls standards emphasize the importance of having clear, updated policies that align with an organization’s mission and goals. Such goals could include the Department of the Navy’s goal to accelerate the development and fielding of unmanned systems, and the Marine Corps’ emphasis on reducing operator workload and providing effective and efficient support to mission execution and decision making. 
For example, the Marine Corps’ UAS concept of operations envisions a future in which one UAS operator will perform multiple functions as opposed to the current approach in which multiple Marines are necessary for a single mission. We found that the Navy has taken a preliminary step to further evaluate what policy changes may be needed to support unmanned systems by establishing an advisor position for this purpose within the Naval Innovation Advisory Council. The advisor is responsible for making recommendations to the Secretary of the Navy and other senior leaders to streamline policy and remove roadblocks that hinder innovation, among other things. In addition, the program manager for the RQ-21 Blackjack and the Marine Corps’ Deputy Commandant for Combat Development and Integration are supporting a research effort through the Naval Postgraduate School to improve the efficiency and effectiveness of naval aviation maintenance procedures for group 3 UAS, according to a Marine Corps official who is leading this effort. While these are positive steps, the time frames for making such policy changes have not been identified. In addition, we did not find evidence that the Navy has taken or planned related steps, such as determining whether future reductions to personnel requirements could be accomplished, and any associated cost savings or benefits to UAS operations, if policies were further updated to account for UAS of different sizes and capabilities. The Navy has thus far prioritized the evaluation and modification of acquisition-related policies to expedite the delivery of unmanned systems to units, consistent with a 2015 memorandum from the Secretary of the Navy. Unless the Navy and the Marine Corps prioritize updating policies for operating and maintaining UAS of different sizes and capabilities, they may miss opportunities to effectively and efficiently use personnel resources as system inventories grow.
The Department of the Navy Lacks Clear Overarching Goals for Informing Future Unmanned System Personnel Requirements

The Department of the Navy also lacks clear overarching goals for informing future unmanned system personnel requirements and the level of priority that should be assigned to these systems and the units that operate them for the purpose of personnel resourcing decisions. While DOD’s Unmanned Systems Integrated Roadmap, FY2013-2038 stated that the department must strive to reduce the number of personnel required to operate and maintain its unmanned systems, the Department of the Navy has not affirmed this goal or communicated any other personnel goals for its unmanned system development. Department of the Navy documents we reviewed for unmanned systems expressed goals that are less directly related to personnel requirements, to include expanding the range of operations and reducing costs and risks to personnel safety and mission success. As previously mentioned, the Navy has prioritized the evaluation and modification of acquisition-related policies to expedite the delivery of unmanned systems to units, consistent with a 2015 memorandum from the Secretary of the Navy. Navy and Marine Corps officials we spoke with who are responsible for the RQ-21 Blackjack and other case study systems we reviewed told us they did not believe the Department of the Navy has a clear and overarching goal for unmanned system personnel requirements either now or over the long term. For example, officials stated that they did not know if the Department of the Navy expects that fewer personnel should be needed to operate and support unmanned systems than the numbers of personnel required for other types of systems.
Without such clarity about personnel-related goals and priority levels, some officials expressed concern that using the term “unmanned” systems conveys expectations that technological advances can substantially reduce personnel requirements in the near term, and that funding for related personnel resources is a lower priority than funding for other system types. For example, a senior Navy personnel official told us that the Navy’s past goals and related efforts to reduce personnel required for its ship crews—an initiative referred to as optimal manning—make them cautious that the same goals and efforts could be adopted for unmanned systems and produce similar, undesirable effects on readiness. Navy officials at three commands also stated they are concerned that resources for unmanned system personnel over future years may not keep pace with the increasing inventories of the systems if a lower priority is assigned to them in budget decisions in the absence of goals and clarity over priorities. The Navy’s Commander, Submarine Forces, identified a personnel shortfall for supporting increased UUV inventories as its second-highest personnel priority for the Navy’s fiscal year 2019 budget deliberations to help underscore to headquarters entities the importance of personnel resources for such systems. According to Navy officials, the Navy has since authorized the requested addition of 66 personnel to the command to augment the sole unit that will operate the Snakehead Large Displacement UUV along with increasing inventories of other types of UUVs. Federal internal controls standards state that an agency’s management should define goals clearly to enable the identification of risk. Applied to the Department of the Navy’s acquisition and operation of unmanned systems, such goals could include whether unmanned systems should require fewer personnel resources than their manned counterparts.
Until the Secretary of the Navy clarifies overarching goals for unmanned system personnel requirements and resource priority levels and communicates them to requirements planners and budget decision makers, the services will be hampered in developing future personnel requirements and identifying risks as system inventories grow and operations expand.

The Navy and the Marine Corps Have Developed Staffing Approaches for Unmanned System Operators, but Face Challenges Meeting Personnel Requirements

The Navy and the Marine Corps have developed staffing approaches to select, train, and track unmanned systems operators and to retain some UAS operators by offering special and incentive pays. However, both services face challenges in ensuring that there are sufficient UAS operators to meet personnel requirements. Yet neither service has assessed the commercial drone industry to inform its retention approach for UAS operators. Although Marine Corps UAS operators and officers report low morale and career satisfaction, the Marine Corps has not fully explored the use of human capital flexibilities to address these workforce challenges.

The Navy and the Marine Corps Have Developed Staffing Approaches to Select, Train, Track, and Retain Unmanned System Operators

In the Navy, unmanned system operations are secondary skills for personnel from related communities. For its UASs in groups 4 and 5, for example, the Navy utilizes personnel from manned aviation communities within the same mission areas, such as MH-60 helicopter pilots and aircrew who are selected and then trained to operate the MQ-8 Fire Scout UAS. Likewise, Navy officials stated that personnel from related communities are selected and trained to operate USVs and UUVs. The Navy is taking steps to track these trained operator personnel by using secondary skill identification codes.
According to Navy officials, these identification codes will help personnel managers monitor the inventories of personnel with unmanned system operator qualifications and provide a temporary surge in capability if needed. In contrast to the Navy’s approach, the Marine Corps has a primary career field for operating UAS, including enlisted and officer personnel. The Marine Corps replenishes its UAS operator and officer personnel inventories by selecting from eligible applicant groups. To become UAS operators, enlisted marines must achieve minimum test scores comparable to those required for other high-skill occupations, such as intelligence specialists. Eligible groups include new graduates of recruit training and experienced marines who apply for a lateral transfer from another occupational specialty. UAS officers take a separate test battery and must attain the same minimum scores as other officers who are selected for manned naval aviation training. They are selected from three sources: new graduates of officer training; pilot or flight officer trainees who do not complete their manned aircraft qualification; and experienced officers seeking a transfer from another occupational specialty, including pilots of manned aircraft. Following their selection, enlisted personnel and officers must complete 5 months of Army UAS training courses or 6 months of Air Force UAS training courses, respectively. The Marine Corps then assigns a primary occupation identification code to trained personnel, which facilitates tracking their inventory to help meet requirements. To help retain sufficient numbers of personnel to meet requirements, both the Navy and the Marine Corps have offered special and incentive pays to personnel who operate UASs. 
Navy personnel who serve as air vehicle operators for the MQ-8 Fire Scout and MQ-4 Triton or as MQ-4 Triton tactical coordinators are eligible for two types of aviation pays based on their qualification as pilots or naval flight officers rather than their UAS assignments—monthly “flight pay” of up to $1,000 and aviation career continuation pay bonuses of $75,000 for a new 5-year contract, as of fiscal year 2017. Marine Corps UAS officers are not offered special and incentive pays, but enlisted operators have been eligible for a selective reenlistment or selective retention bonus since 1998, which ranged from $8,250 up to $19,750 in fiscal year 2017 for qualified marines who committed to an additional 4 years of service.

The Navy and the Marine Corps Face Challenges Meeting UAS Operator Personnel Requirements and Have Not Assessed Commercial Competition to Inform Staffing Approaches

Navy Faces Challenges Meeting UAS Operator Personnel Requirements

Based on our analysis, the Navy faces challenges with meeting personnel requirements for UAS operators although, according to Navy officials, it is too soon to know if personnel shortfalls may arise with unmanned maritime systems because many programs are in early stages of development. Navy officials told us they have sufficient numbers of personnel to operate the current inventory of UAS, which included 49 MQ-8 Fire Scouts and 2 MQ-4 Tritons as of September 2017. As UAS inventories increase, the Navy has reported growing retention challenges among its pilots and naval flight officers over the past 3 years as the U.S. economy improves and commercial airline hiring increases. Navy aviation and workforce planning officials told us this could affect the ability to fill both its manned aviation and UAS personnel requirements.
According to Navy proposals for the Navy’s aviation retention bonus program, future retention shortfalls are expected in the helicopter, maritime patrol and reconnaissance, and E-2 Hawkeye communities, among others. The first two communities are sources of personnel for the MQ-8 Fire Scout and MQ-4 Triton and, according to Navy officials, the latter community is being considered as a personnel source for the MQ-25 Stingray. In particular, the Navy has reported concerns about the future retention of its maritime patrol and reconnaissance pilots because their experience directly translates to a commercial 737 aircraft. Additionally, the Navy has reported shortages and significant retention issues in meeting requirements for its reserve helicopter and maritime patrol and reconnaissance pilots, communities that the Navy uses to augment its available inventories of active duty pilots who also operate UASs.

The Marine Corps Has Not Met Personnel Requirements for UAS Operators

Based on our analysis, the Marine Corps has experienced past shortfalls of UAS operators through fiscal year 2017. Since the first fiscal year of available data after the inception of the Marine Corps’ career specialty for UAS officers in 2012, personnel inventories have increased but fallen short of requirements (see fig. 2). For fiscal years 2013 through 2017, the Marine Corps was substantially short of captains, majors, and lieutenant colonels (i.e., O3, O4, and O5 pay grades) to serve as UAS officers. Consistent with this trend, the Marine Corps has designated UAS officer inventories as unhealthy since fiscal year 2013. Marine Corps officials told us these shortfalls could be attributable to the annual growth in requirements for this new community. They also stated that they do not currently anticipate retention challenges for UAS officers.
However, according to these officials, their predictions about UAS officer retention for future years are based on data from other longer established career fields as proxies until more UAS officer data are available. For fiscal years 2007 through 2017, inventories of enlisted UAS operators increased in all but one year, but fell short of requirements (see fig. 3) in part due to substantial yearly shortfalls of certain junior enlisted personnel. According to a Marine Corps official, the UAS operator inventory will exceed requirements in fiscal year 2018 because the requirement has decreased by about 60 percent from the previous year. However, the Marine Corps has leveraged lateral personnel transfers from other occupations to meet approximately 33 to 89 percent of its yearly retention quotas for first-term UAS operator reenlistments since fiscal year 2010 (see fig. 3 above). A Marine Corps personnel planning official told us that personnel transfers have been helpful and necessary for meeting retention quotas. However, other Marine Corps officials told us that heavily leveraging transfers shows that the UAS community is not retaining its own experienced operators—that is, UAS operators who have attained proficiency and advanced skills and been deployed. For more senior enlisted UAS operators eligible for a second reenlistment or beyond, the Marine Corps has fallen short of its retention quotas for fiscal years 2015 through 2017.

The Navy and the Marine Corps Have Not Assessed Commercial Supply, Demand, and Wages to Inform Staffing Approaches for UAS Operator Requirements

Despite the current and future challenges previously discussed, Navy and Marine Corps officials told us that the services have not used information about the commercial drone industry to inform their use of special and incentive pays because they did not believe doing so was needed.
Marine Corps officials told us that they have not observed a retention problem for UAS operators and officers and, unless they miss retention goals in 3 consecutive years, they will not consider changing financial incentives—i.e., increasing bonuses to enlisted UAS operators or offering special and incentive pays to UAS officers. Until such time, pilots who are selected for the UAS career field are informed by the Marine Corps that their flight pay and aviation continuation pay bonus eligibility will be terminated. Another Marine Corps official with knowledge of the UAS community told us that studying the commercial drone industry and the potential effect on retention is timely because the services must program for the necessary resources for financial incentives 2 years in advance of the budget year. This official stated that after 3 years of missing retention goals, the problem could persist for another 2 years before additional funds were available to increase retention bonuses, given the programming and budget cycle. Navy workforce planning officials acknowledged that they are concerned about increasing difficulty in providing sufficient numbers of mid-career pilots to meet the Navy’s aviation requirements over future years, which include UAS operator requirements. In addition to competition from commercial airlines, Navy officials told us a growing labor market in the commercial drone industry could exacerbate pilot retention challenges for those with secondary qualifications to operate UAS. However, they added that little is known about the demand and available wages in that industry. Likewise, Marine Corps officials told us that past challenges in meeting requirements and retaining experienced operators could persist in future years, and hiring in the commercial drone industry could affect retention. These officials stated that the Air Force could also pose a future retention challenge for the Marine Corps’ UAS operator community.
The Air Force offers the potential for higher pay to its UAS operators than the Marine Corps along with larger and more capable types of UAS. The Air Force reported to Congress in July 2017 that its projections of enlisted UAS operator retention indicate that a bonus may be necessary as soon as 2022. During discussion groups we held with Marine Corps UAS operators, enlisted operators cited the potential for higher pay for their skills outside the Marine Corps as a factor that has influenced reenlistment decisions among them or their peers. Operators in one group told us that three of their five RQ-21 Blackjack instructors were former enlisted operators from their squadron who secured employment with the RQ-21 Blackjack’s manufacturer as private sector contractors. DOD’s 2012 Eleventh Quadrennial Review of Military Compensation determined that organizations should assess civilian supply and demand and civilian wages to develop the most cost effective special and incentive pay strategies. We reported in February 2017 that conducting such an assessment is a key principle of effective human capital management by which to evaluate DOD’s special and incentive pay programs. Our report also found that the services do conduct such assessments for aviation, nuclear propulsion, and cybersecurity occupations. Without assessing the commercial drone industry and using such information to inform retention approaches, including the use of special and incentive pays, the Navy and the Marine Corps may not know if their approaches are effectively tailored to ensure a sufficient number of UAS operators are available to meet future requirements. 
Marine Corps UAS Operators and Officers Report Low Morale and Career Satisfaction, but the Marine Corps Has Not Fully Examined Human Capital Flexibilities to Address These Issues

The Marine Corps has experienced workforce challenges with its career field for UAS officers and enlisted operators, including diminished morale and career satisfaction and short periods of time in which operators are trained and available to UAS squadrons before their contract or squadron assignment ends. Results of a 2015 Marine Corps survey of UAS officers showed that about 65 percent of captains and first lieutenants who responded were dissatisfied with their career and about 75 percent of that group cited low job satisfaction as influencing their decision to leave the Marine Corps. UAS officers and enlisted operators in all eight discussion groups we held told us about factors that enhance their morale, including the opportunities to learn and to shape their community and their positive deployment experiences, but they also discussed factors that negatively affect their job satisfaction. UAS operators in all enlisted groups cited the frequency of personnel turnover in the squadron as a source of frustration in developing and retaining expertise with the RQ-21 Blackjack. Officers told us they feel like a lower tier priority in Marine Corps aviation for reasons ranging from the lack of a uniform insignia device akin to those awarded to manned aircraft pilots (i.e., pilot “wings”), to confusion over the strategy and missions for Marine Corps UAS now and in future years. UAS officers also told us they desired assignments to positions outside the UAS squadrons that they believed would enhance their leadership ability, but such positions had not consistently been available to them because they were needed to fill squadron billets.
For example, the Marine Corps has limited or restricted UAS officers from applying for in-residence professional military education opportunities in past years because they could not be diverted from billets requiring their qualifications due to inventory shortages. UAS operators and officers spend approximately 2 years or more of their 3-year squadron assignment awaiting and completing training to attain proficiency and advanced skills with the RQ-21 Blackjack UAS. After training and deployment, they may have about 4 months or fewer to impart their knowledge and deployment experience to others in the squadron before they reach the end of their squadron assignment, the end of their service obligation, or both (see fig. 4). According to Marine Corps officials we spoke with, the loss of experienced UAS operators who do not reenlist and are replaced by lateral transfers from other careers results in diminished UAS expertise among mid-career enlisted members in the squadrons. These officials told us that personnel who transfer to the UAS career to replace experienced operators must spend at least 2 years in training for initial qualification and then proficiency on the RQ-21 Blackjack. Moreover, Marine Corps officials told us that a portion of the UAS operators who reenlist past their first contract must fulfill 3-year special duty assignments outside the UAS community. They stated that this exacerbates the diminished squadron expertise and is the reason that some operators leave rather than reenlist in the Marine Corps. Although the Marine Corps has taken steps to address challenges with UAS operator inventories by using special and incentive pays for enlisted operators and limiting opportunities that would divert officers away from squadrons, as previously discussed, it has not fully explored flexibilities for managing its UAS career fields more effectively to help meet requirements.
Employing flexibilities to improve job satisfaction could help improve retention of experienced personnel in an already-challenged environment. For example, the Marine Corps has not authorized available aviation special and incentive pays for UAS officers in spite of challenges meeting personnel requirements. As mentioned previously, pilots who are selected for the UAS career field are informed by the Marine Corps that their flight pay and aviation continuation pay bonus eligibility will be terminated. The Marine Corps has incentivized enlisted personnel from certain specialties, such as aircraft maintenance, both to reenlist and to remain in a specified unit as recently as fiscal year 2018, but has not offered this opportunity to UAS operators. By considering longer UAS operator contracts, the Marine Corps could increase the availability of experienced operators to squadrons, where they can pass on their knowledge and skills to junior enlisted personnel. Our prior work has identified that a key principle for effective strategic human capital planning is that organizations should ensure that flexibilities are part of the overall human capital strategy to ensure effective workforce planning. According to Marine Corps officials, they have not taken additional steps to address workforce challenges in part because inventories of UAS operators and officers have grown and squadrons have generally attained readiness goals and accomplished their deployment missions despite personnel shortages. Further, these officials stated that low morale and career satisfaction could be partially attributable to the current transition from the RQ-7 Shadow UAS to the RQ-21 Blackjack and to the relative newness of the officer career field.
Without exploring these or other human capital flexibilities to improve morale and career satisfaction and maximize operators' availability to squadrons, the Marine Corps may face continued challenges in meeting personnel requirements and the growing demands of expanding operations and increasing UAS inventories. Moreover, as the Marine Corps budgets for additional resources to establish its own school for UAS operator training, flexibilities that could improve retention and maximize operator availability could also help ensure the greatest return on its investment in the UAS operator workforce.

Conclusions

For almost 20 years we have identified strategic management of human capital as a high-risk area across government, in part because of persistent gaps in mission-critical skills. With the Navy's commitment to accelerate the delivery of unmanned systems to the fleet and its budget of nearly $10 billion to develop and procure those systems in fiscal years 2018 through 2022, having sufficient personnel with the appropriate skills at the right time will be critical. To that end, without additional actions to improve their workforce planning, the Navy and the Marine Corps may not be positioned to support their expanding unmanned systems operations. Specifically, lacking clear workforce planning policies, decision makers may not know when they should consider using federal civilian employees and private sector contractors as alternatives in determining the most appropriate and cost-effective workforces for their unmanned system operators. With respect to personnel requirements development, until the Marine Corps' requirements and related cost estimates for the RQ-21 Blackjack UAS are updated, the services will lack current information about the number of operators needed and their affordability.
Further, unless the Navy and the Marine Corps prioritize policy updates for operating and maintaining UAS of different sizes and capabilities, they may miss opportunities to effectively and efficiently use personnel resources as system inventories grow. Without assessing the commercial drone industry and using that information to inform retention approaches, the Navy and Marine Corps may not know whether special and incentive pays are effectively tailored to ensure that a sufficient number of UAS operators are available to meet future requirements. The Marine Corps, in particular, may continue to face challenges in meeting requirements and growing operational demands until it examines additional flexibilities to improve morale and career satisfaction among its UAS operator workforce and maximize the availability of operators serving in its squadrons. Overall, unmanned systems are key to future Navy and Marine Corps operations, but for these systems to be effective the services need to ensure that they take the necessary actions to provide sufficient personnel.

Recommendations for Executive Action

We are making the following 10 recommendations to DOD. The Secretary of the Navy should ensure that:

The Chief of Naval Operations should clarify workforce planning policies to identify circumstances in which federal civilian employees and private sector contractors may serve in operational roles and what the benefits and limitations are of using federal civilians and private sector contractors as alternative workforces. (Recommendation 1)

The Chief of Naval Operations should, after clarifying workforce planning policies, apply the revised policies to evaluate the use of alternative workforces (including federal civilian employees and private sector contractors) for future unmanned system operators.
(Recommendation 2)

The Commandant of the Marine Corps should clarify workforce planning policies to identify circumstances in which federal civilian employees and private sector contractors may serve in operational roles and what the benefits and limitations are of using federal civilians and private sector contractors as alternative workforces. (Recommendation 3)

The Commandant of the Marine Corps should, after clarifying workforce planning policies, apply the revised policies to evaluate the use of alternative workforces (including federal civilian employees and private sector contractors) for future unmanned system operators. (Recommendation 4)

The Commander, Naval Air Systems Command, in coordination with the Deputy Commandant of the Marine Corps for Combat Development and Integration, should update the Marine Corps personnel requirements associated with the RQ-21 Blackjack UAS based on the most current and enduring concept of operations and utilize the updated requirements in planning for UAS squadron personnel requirements. (Recommendation 5)

The Commander, Naval Air Systems Command, should update the life cycle cost estimate for the RQ-21 Blackjack UAS to make adjustments as appropriate after updating the personnel requirements for the system. (Recommendation 6)

The Deputy Chief of Naval Operations for Warfare Systems (N9), in coordination with the Deputy Commandant for Aviation, should prioritize continued efforts to fully evaluate policies for operating and maintaining UAS of different sizes and capabilities, such as group 3 UAS—to include establishing completion time frames, determining whether reductions to personnel requirements could be accomplished, and identifying any associated cost savings and the benefits to the UAS squadrons' ability to complete missions—and update such policies as needed.
(Recommendation 7)

The Secretary of the Navy should clarify overarching goals for unmanned systems' personnel requirements, including related priority levels for resourcing purposes, and communicate them to requirements planners and budget decision makers. (Recommendation 8)

The Chief of Naval Personnel and the Deputy Commandant for Manpower and Reserve Affairs should assess civilian supply, demand, and wages in the commercial drone industry and use the results to inform retention approaches, including the use of special and incentive pays for UAS operators. (Recommendation 9)

The Deputy Commandant for Aviation and the Deputy Commandant for Manpower and Reserve Affairs should examine the use of additional human capital flexibilities that could improve the career satisfaction and retention of experienced UAS operators and maximize their availability to squadrons. Such flexibilities could include authorizing available special and incentive pays; permitting UAS operators to extend their enlistments to serve longer within squadrons; ensuring the availability of career- and promotion-enhancing opportunities for professional military education; considering the use of a potential insignia device for operators; or extending UAS operator contract lengths. (Recommendation 10)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix III, DOD concurred with eight of our recommendations and partially concurred with two recommendations. DOD also provided technical comments on the draft report, which we incorporated as appropriate. With regard to our recommendation to assess civilian supply, demand, and wages in the commercial drone industry and use the results to inform retention approaches, DOD partially concurred.
DOD stated that it will assess competitive markets, both externally and internally, and then analyze the usage of incentive pays for UAS operators when retention rates and inventory levels of personnel display decreasing trends. DOD added that such analysis would be premature if conducted before initial operational capability is attained for each UAS because retention behaviors and air crew dynamics are not yet established. As noted in our report, the Navy and the Marine Corps have each attained initial operational capability with one UAS (i.e., the MQ-8 Fire Scout B-variant and the RQ-21 Blackjack) and quantities of these and other UAS are expected to increase in future years. Additionally, the Marine Corps has designated UAS officer inventories as unhealthy since fiscal year 2013. Accordingly, we continue to believe that conducting such assessments and using the results are timely and important steps to ensure enough personnel to meet future operator requirements. DOD partially concurred with our recommendation to examine the use of additional human capital flexibilities that could improve the career satisfaction and retention of experienced UAS operators. DOD stated that human capital flexibilities are constantly under review. Further, DOD stated that the UAS community is still in its infancy, but as it continues to grow and become healthier, assignment opportunities and flexibilities will become more prevalent and special and incentive pays will be examined as retention rates dictate. Such efforts would meet the intent of our recommendation if the opportunities and flexibilities DOD considers include other examples cited in our recommendation. 
That is, we continue to believe that DOD should also consider permitting UAS operators to extend their enlistments to serve longer within squadrons; ensuring the availability of career- and promotion-enhancing opportunities for professional military education; considering the use of a potential insignia device for operators; and extending UAS operator contract lengths. We are providing copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Characteristics of Selected Navy and Marine Corps Unmanned Systems

Navy MQ-8 Fire Scout Unmanned Aerial System

The Navy's MQ-8 Fire Scout unmanned aerial system (UAS) (B and C variants) is intended to provide real-time imagery and data in support of intelligence, surveillance, and reconnaissance missions for surface, anti-submarine, and mine warfare. The system is part of the surface warfare and mine countermeasures mission packages of the littoral combat ships. The MQ-8 system comprises one or more air vehicles with sensors, a control station, and ship equipment to aid in vertical launch and recovery. According to the program office, the MQ-8C has 90 percent commonality with the previously developed MQ-8B. The primary differences between the two are structural modifications to accommodate the MQ-8C's larger airframe and fuel system.
Delivery Status and Schedule

The manufacturer has delivered 49 aircraft to the Navy as of September 2017 (including 30 B variants and 19 C variants), and 11 more aircraft (C variants) are scheduled to be delivered by fiscal year 2019. The Navy attained initial operational capability with the B variant of the Fire Scout in fiscal year 2014, and plans to attain initial operational capability with the C variant in December 2018, depending on the availability of the littoral combat ship from which it deploys.

Operator Personnel Requirements

A composite aviation detachment embarked on a littoral combat ship consists of up to 24 personnel, including operator air crews equipped with one MH-60 helicopter and one MQ-8 Fire Scout UAS. An air crew consists of two personnel: one air vehicle operator and one mission payload operator. There is no additive personnel requirement associated with operators of the MQ-8 Fire Scout because these personnel already reside within existing expeditionary MH-60 helicopter squadron detachments. The littoral combat ships' crew berthing constraints were a key limiting factor in creating the personnel requirements for the number of air crew in a single composite aviation detachment. Navy officials told us that they believe, based on deployment experiences and available data, that the personnel requirements for the MQ-8 Fire Scout are correct, although they stated that the operational tempo has been very limited to date due to problems with the littoral combat ship that have reduced the number of deployments.

Operator Staffing Approach

MH-60 helicopter pilots and enlisted aircrewmen from expeditionary helicopter squadrons attend 8 and 6 weeks, respectively, of MQ-8 Fire Scout UAS training. During deployments, these personnel serve dual roles as air crew of both the MH-60 and the MQ-8 Fire Scout.
MQ-8 Fire Scout air vehicle operators hold primary career designators as Navy helicopter pilots, and after their UAS training they are identified with an additional qualification designator of DY8. According to a senior Navy official, private sector contractors trained 126 air vehicle operators prior to February 2015, and since then the Navy has trained another 91 air vehicle operators as of May 2017. MQ-8 Fire Scout mission payload operators have an enlisted rating as a helicopter aircrewman, and after their UAS training they are identified with a Navy enlisted classification code of 8367. According to a senior Navy official, private sector contractors trained 148 mission payload operators through March 2017, and the Navy has trained another 68 mission payload operators since February 2017 (as of May 2017). According to Navy officials, they do not expect the approach for staffing MQ-8 Fire Scout aircrew to negatively affect accessions or retention in the helicopter community, even when operational tempo increases, but they are continuing to monitor feedback from deployments.

Navy MQ-4 Triton UAS

The Navy's MQ-4 Triton UAS is intended to provide persistent maritime intelligence, surveillance, and reconnaissance data collection and dissemination capability in an operating area of a 2,000 nautical mile radius. Based on the Air Force's RQ-4B Global Hawk air vehicle, the MQ-4 Triton was formerly known as the Broad Area Maritime Surveillance UAS. Triton UAS sensors can provide detection, classification, tracking, and identification of maritime targets. Additionally, the MQ-4 Triton is designed with a communications relay capability that can link dispersed forces in the theater of operation. The system will cue other Navy assets for further situational investigation and/or attack, and will also provide a battle damage assessment of the area of interest.
Tactical-level data analysis will occur in real time at shore-based mission control systems via satellite communications. The MQ-4 Triton is planned to operate from five shore-based sites worldwide as part of the Navy's family of maritime patrol and reconnaissance systems. From these sites, five MQ-4 Triton air vehicles will be airborne concurrently, 24 hours a day and 7 days a week (see fig. 6). As a precursor to the MQ-4 Triton, the Navy's RQ-4A Broad Area Maritime Surveillance System-Demonstrator has been continuously deployed to the U.S. Central Command area since January 2009. All four of those planned demonstrator systems have been delivered to the Navy.

Planned Quantity

The manufacturer has delivered 2 systems to the Navy as of September 2017 and the Navy expects 10 more systems to be delivered through fiscal year 2021. At the time of this report, no air vehicles had yet been delivered to the Navy's first unmanned patrol squadron; the 2 systems were being utilized for testing. The Navy has estimated that it will attain initial operational capability with the MQ-4 Triton UAS in 2021.

Operator Personnel Requirements

One of the Navy's two planned unmanned patrol squadrons (referred to as VUPs) will have 30 mission crews, the other squadron will have 20 mission crews, and both squadrons will have additional launch and recovery operators. An MQ-4 Triton mission crew will consist of four personnel: one air vehicle operator, one tactical coordinator, and two mission payload operators. Future upgrades to the MQ-4 Triton will require a fifth mission crew member to fill a signals intelligence capability operator position. The number of required mission crew members was based in part upon a model that Naval Air Systems Command utilizes to project the number of air crew personnel to support a system.
According to Navy officials, the additional personnel requirements for the Navy associated with the establishment of Triton squadrons are offset by realignments of the Maritime Patrol and Reconnaissance Force, including the retirement of the P-3 Orion aircraft and reduction of associated personnel requirements. Navy officials told us that they believe, based in part on experience with the Broad Area Maritime Surveillance-Demonstrator, that the personnel requirements for the MQ-4 Triton are adequate, although they stated that they will continue to review and monitor the requirements for sufficiency in future years as the Navy attains steady state operations with the system's five continuous orbits.

Operator Staffing Approach

The Navy's approach for staffing operator aircrew for the MQ-4 Triton is to utilize a portion of its naval aviators, naval flight officers, and enlisted aircrew whose qualification is on a maritime patrol and reconnaissance force aircraft (e.g., the P-8A Poseidon) and assign them to an unmanned patrol squadron following a sea tour with their primary aircraft. According to Navy officials, the career path for all its aviators generally includes a number of shore duty options following a first deployment. The unmanned patrol squadron assignments will be an additional option for aviators' first shore tour. The Navy will provide Triton aircrew members with approximately 3 months of training to qualify on the UAS in connection with their unmanned patrol squadron assignment. Air vehicle operators and tactical coordinators who are trained and qualified on the MQ-4 Triton will be identified with an additional qualification designator of DC5. Trained and qualified mission payload operators will be identified with a Navy enlisted classification of 7828.
According to Navy officials, they do not expect the approach for staffing MQ-4 Triton aircrew to affect accessions or retention in the maritime patrol and reconnaissance community at this time, but it is too soon to be certain. In the meantime, the officials stated that they will continue to monitor personnel feedback and reassure personnel about the career value of experience in an MQ-4 Triton squadron. In addition, the Navy plans to leverage members of its reserve component to augment the pool of available personnel who can be assigned to its VUP squadrons.

Navy MQ-25 Stingray UAS

The Navy's MQ-25 Stingray UAS will be the first UAS to operate from aircraft carriers. According to Navy officials, the MQ-25 Stingray's primary mission will be to provide a robust refueling capability to extend the range and reach of the carrier air wing and reduce the need for F/A-18E/F Super Hornets to perform refueling missions, freeing them for strike missions and preserving service life. As a secondary mission, the MQ-25 Stingray will also provide an intelligence, surveillance, and reconnaissance capability. The Navy previously referred to the MQ-25 Stingray as the Carrier Based Aerial Refueling System, a program that followed a restructuring of the former Unmanned Carrier-Launched Airborne Surveillance and Strike program.

Planned Quantity

The Navy's initial plan is to purchase 72 MQ-25 Stingray air vehicles.

Delivery Status and Schedule

No systems have been delivered and a delivery schedule has not been established because the system is still in an early stage of DOD's acquisition process, with a contract award for system development scheduled for the fourth quarter of fiscal year 2018. The Navy has estimated attaining initial operational capability with the system by the mid-2020s time frame.

Operator Personnel Requirements

Operator Staffing Approach

The Navy has not yet developed a staffing approach for MQ-25 Stingray operators.
According to Navy officials involved in establishing plans and requirements for the system, they are considering different options for the system's operators. One option is to use enlisted personnel; another is an approach similar to that used for the MQ-8 Fire Scout operators, in which a population of aviation personnel, including pilots, would be identified from a related, existing aircraft community—such as the E-2 Hawkeye aircraft—and provided with UAS qualification training if they were assigned to operate the MQ-25 Stingray in a composite squadron along with their other primary aircraft. According to these officials, at the direction of the Commander of Naval Air Forces, they have considered establishing a new UAS operator career field and surveyed midshipmen at the U.S. Naval Academy to gauge their interest in such a career.

Marine Corps RQ-21 Blackjack UAS

The Marine Corps' RQ-21 Blackjack UAS provides units with a dedicated intelligence, surveillance, and reconnaissance capability, delivering actionable intelligence and communications relay to tactical commanders in real time for 12 hours of continuous operations per day, with a short surge capability of 24 hours of continuous operations for a 10-day period, during any 30-day cycle. An RQ-21 Blackjack system consists of five air vehicles, two ground control stations, multi-mission payloads, one launcher, one recovery system, data links, and support systems. Standard payloads include electro-optical and infrared cameras, a communications relay payload, and an automatic identification system. Future upgraded capabilities may include command and control integration, weapons integration, a heavy fuel engine, a laser designator, frequency agile communications relay, digital common data link, and cyclic refresh of the electro-optical and infrared cameras. The RQ-21 Blackjack can be launched and recovered from land or from air-capable ships, including L-class ships (e.g., amphibious transport docks) (see fig. 7).
Delivery Status and Schedule

The manufacturer has delivered 11 systems to the Marine Corps as of September 2017 and the Marine Corps expects the other 21 planned systems to be delivered through 2022. The Marine Corps attained initial operational capability with the RQ-21 Blackjack in 2016.

Operator Personnel Requirements

The Marine Corps has three active duty unmanned aerial vehicle squadrons (VMU 1, 2, and 3) and one reserve VMU squadron (VMU 4) that will operate the RQ-21 Blackjack UAS. Each active duty VMU will contain nine detachments, and each detachment will comprise 9 personnel—including 1 UAS officer and 3 enlisted UAS operators—and one RQ-21 Blackjack UAS. The Marine Corps Reserve's VMU 4 will contain three detachments. The Marine Corps does not distinguish between requirements for air vehicle operators and mission payload operators for the RQ-21 Blackjack because those functions are performed by the same operator.

Operator Staffing Approach

The Marine Corps has a primary career field for operating UAS, including enlisted UAS operators and UAS officers. The Marine Corps replenishes its UAS operator and officer personnel inventories by selecting from eligible applicant groups. For enlisted UAS operators, eligible groups include new graduates of recruit training and experienced marines who apply for a lateral transfer from another occupational specialty. UAS officers are selected from three sources: new graduates of officer training; pilot or flight officer trainees who do not complete their manned aircraft qualification; and experienced officers seeking a transfer from another occupational specialty, including pilots of manned aircraft. The Marine Corps requires certain minimum test scores before marines can be selected for UAS training. Enlisted marines must achieve minimum test scores comparable to those required for other high-skill occupations, such as intelligence specialists.
Officers take a separate test battery and must attain the same minimum scores as other officers who are selected for manned naval aviation training. Following their selection for UAS training, enlisted personnel must complete 5 months of Army UAS training courses to attain their military occupational specialty as a UAS operator. Officers attend 6 months of Air Force training courses to attain their occupational specialty. The Marine Corps then assigns a primary occupation identification code to trained personnel, which is 7314 for enlisted UAS operators or 7315 for UAS officers. The Marine Corps assigns enlisted personnel and officers to one of its UAS squadrons after they attain their occupational specialty, where they continue their UAS training to attain and maintain proficiency and advanced qualifications. As discussed earlier in this report, Marine Corps UAS squadrons believe that an RQ-21 Blackjack detachment requirement of 9 personnel is not sufficient to meet their workloads. Since 2015, squadrons have staffed their deploying detachments with up to 30 personnel each to support the workload and levels of supervision they believe are necessary to operate and maintain an RQ-21 Blackjack UAS, avoid mishaps and damage to the aircraft during recovery, and meet operating and maintenance standards, among other reasons.

Navy Mine Countermeasures Unmanned Surface Vehicle and Unmanned Influence Sweep System

The Navy's Mine Countermeasures Unmanned Surface Vehicle (USV) and Unmanned Influence Sweep System will be part of the mine countermeasures mission package of the Navy's littoral combat ships (see fig. 8). The Mine Countermeasures USV will tow a sonar payload for mine hunting. The Unmanned Influence Sweep System will use the same USV platform to tow an acoustic and magnetic influence sweep payload to clear bottom and moored mines. Both systems will be launched and recovered from littoral combat ships.
Planned Quantity

For the Mine Countermeasures USV, the projected inventory is 2 systems per mine countermeasures mission package for a total of 48 systems, in addition to systems needed for training. For the Unmanned Influence Sweep System, the projected inventory is 1 per mine countermeasures mission package for a total of 24 payloads, in addition to payloads for training.

Delivery Status and Schedule

As of September 2017, two Mine Countermeasures USVs were under construction, but neither had been delivered to the Navy. The Navy plans to attain initial operational capability with the Mine Countermeasures USVs in fiscal year 2021. As of September 2017, one Unmanned Influence Sweep System had been constructed and the Navy expects it to be delivered for testing by fiscal year 2018. The Navy plans to attain initial operational capability with the Unmanned Influence Sweep System in fiscal year 2019.

Operator Personnel Requirements

The Mine Countermeasures USV and Unmanned Influence Sweep System will be operated by littoral combat ship mine countermeasures mission package crews of 20 personnel each. The precise number of operators per system will be determined and updated as the systems progress through acquisition.

Operator Staffing Approach

According to Navy officials, USV operators associated with the littoral combat ships' mine countermeasures mission package crews will not be directly accessed and recruited to such positions. Instead, these officials stated that enlisted sailors from related primary career ratings will be assigned to the crews and trained on the USVs along with other systems as part of a longer training pipeline. Upon their completion of training, the Navy plans to identify them with a Navy enlisted classification code of 1206, Littoral Combat Ship Mine Warfare Mission Package Specialist.
Navy MK 18 Unmanned Underwater Vehicle Family of Systems

The Navy's MK 18 Unmanned Underwater Vehicle (UUV) family of systems consists of the MK 18 "Mod 1" Swordfish UUV and the MK 18 "Mod 2" Kingfish UUV. The MK 18 Mod 1 Swordfish is a man-portable system that performs autonomous, low-visibility exploration and reconnaissance missions in support of amphibious landings and mine countermeasures operations, among other things. The MK 18 Mod 2 Kingfish UUV is a larger vehicle with increased endurance and depth, and more advanced sensors to improve mine countermeasures capabilities. The Mod 1 Swordfish and the Mod 2 Kingfish operate in very shallow water and shallow water zones, and will be tactically integrated to enable detection of moored and bottom mines at increased standoff and reduced risk to operators and systems that would otherwise be operating in the minefield. The MK 18 systems can be launched and recovered from shore, from rigid hull inflatable boats, or from ships (see fig. 9).

Planned Quantity

41 (25 Mod 1 Swordfish and 16 Mod 2 Kingfish)

Delivery Status and Schedule

The manufacturer has delivered 33 systems (21 Mod 1 Swordfish and 12 Mod 2 Kingfish) to the Navy as of fiscal year 2017. The Navy attained full operational capability with the first increment of the Mod 1 Swordfish in fiscal year 2007 and expects to attain initial operational capability with the first increment of the Mod 2 Kingfish in fiscal year 2019.

Operator Personnel Requirements

MK 18 UUVs are operated by platoons within three different Navy units: Explosive Ordnance Disposal Mobile Unit One, Mobile Diving and Salvage Unit Two, and the Naval Oceanography Mine Warfare Center. According to Navy officials, the establishment of such platoons did not generate an additive personnel requirement to those units. The minimum personnel requirement for MK 18 operations includes three UUV operators and a UUV supervisor, along with an officer-in-charge, a boat coxswain, and a boat engineer.
Operator Staffing Approach

According to Navy officials, the Navy does not directly access or recruit personnel to fill its requirements for operators of the MK 18 UUVs. These officials stated that, instead, enlisted sailors from related primary career ratings, including special warfare boat operator and aerographer's mate ratings, can be assigned to a unit that operates the UUVs either on their first tour or later in their career on a subsequent assignment. Navy officials also stated that Navy Expeditionary Combat Command is coordinating with the Commander, Submarine Forces, to potentially utilize the Navy enlisted classification code of 9550 for its UUV operators.

Navy Snakehead Large Displacement UUV

The Navy's Snakehead Large Displacement UUV will be a long-endurance, off-board system that will conduct reconnaissance and surveillance missions in denied areas and in waters too shallow or otherwise inaccessible for conventional platforms (see fig. 10). The Snakehead Large Displacement UUV will be launched and recovered from submarines and surface ships.

Planned Quantity

No systems have been delivered to the Navy. The Navy is planning for the first 2 systems to be delivered in fiscal year 2020 and for another 2 systems to be delivered in fiscal year 2023. The Navy will attain initial operational capability with the first phase systems when two of them are delivered and tested on a host platform, a life-cycle sustainment plan is in place, and personnel are trained and equipped to operate and maintain the system from a host platform.

Operator Personnel Requirements

The Navy plans to field the Snakehead Large Displacement UUVs to UUV Squadron 1. According to Navy officials, the squadron is also testing or operating more than 10 other types of UUVs and expects to receive 2 or more other new types of UUVs through approximately fiscal year 2020, along with the Snakehead.
Although Navy officials told us that it is too soon to analyze and determine the numbers of personnel required for the system at the time of this report, they plan to utilize forward-deployed operators to launch and recover the vehicle, an operator to control the vehicle from an operations center on land, and a mission payload operator as needed depending on the mission. The precise number of operators per system will be determined and updated as the systems progress through acquisition.

Operator Staffing Approach

In staffing personnel to meet requirements for UUV Squadron 1, Navy officials stated that they do not directly access or recruit personnel to fill such positions. Instead, these officials told us that enlisted sailors from related career ratings within the submarine community, such as sonar technicians, are assigned to the squadron generally after they have completed at least one previous assignment and have approximately 5 years of experience in the Navy. According to the officials, once personnel are assigned to the squadron, they receive UUV training to qualify on the systems they will operate, and they will be identified with a Navy enlisted classification code of 9550 for UUV operators.

Appendix II: Objectives, Scope, and Methodology

This report addresses the extent to which the Navy and the Marine Corps have (1) evaluated workforce alternatives for their unmanned system operators, including the use of federal civilian employees and private sector contractors; (2) developed and updated personnel requirements and related policies and goals that affect requirements for operators, maintainers, and other support personnel for selected unmanned systems; and (3) developed approaches for staffing unmanned system operators to meet personnel requirements and have met those requirements.
To address these objectives, we included in the scope of our review the Navy’s and the Marine Corps’ unmanned aerial systems (UAS), unmanned surface vehicles (USV), and unmanned underwater vehicles (UUV) that were programs of record in calendar year 2016. On the basis of Department of the Navy documentation and interviews with knowledgeable officials, we identified 24 such systems. To provide illustrative examples for our first and third objectives and to address the entirety of our second objective, we further narrowed our scope to those systems that had progressed far enough through DOD’s acquisition process to be part of a program of record within the purview of the services’ system commands. Additionally, we narrowed our scope for UASs, in particular, to those categorized as group 3 or above. We omitted smaller group 1 UASs because service officials told us that those systems are fielded in larger numbers as additional capabilities for existing units in accomplishing their missions and entail a small workload for operating and maintaining them relative to UASs of group 3 and above. Group 2 UASs that the Navy and the Marine Corps utilize are contractor-owned and operated, which was outside the scope of our review. From the remaining unmanned systems in our scope, we selected eight case studies to review the services’ evaluations of workforce alternatives, development and updates of personnel requirements and related policies and goals, and staffing approaches: four UASs—the Navy’s MQ-4 Triton, MQ-8 Fire Scout, MQ-25 Stingray, and the Marine Corps’ RQ-21 Blackjack; the two USVs—the Unmanned Influence Sweep System and the Mine Countermeasures USV—associated with the Navy’s littoral combat ships; and two types of the Navy’s UUVs—the MK 18 family of UUV systems and the Snakehead Large Displacement UUV—based on their size and missions. 
Although the results of the UUV case studies cannot be generalized to all UUVs across the Navy, they illustrate different characteristics of and approaches used for workforce mix, requirements, and staffing for such systems. To address our first objective, we compared any Navy and Marine Corps efforts to evaluate federal civilian employees and private sector contractors as workforce alternatives for operators of all of their unmanned systems, including those from our case study sample, with criteria from (1) DOD Directive 1100.4, Guidance for Manpower Management, which directs, among other things, that authorities consider all available sources when determining workforce mix, and that workforces be designated as federal civilians except in certain circumstances, and (2) DOD Instruction 1100.22, Policy and Procedures for Determining Workforce Mix, which establishes the workforce mix decision process and directs that workforce planning authorities consider all available personnel when determining the workforce mix—that is, the combination of military servicemembers, federal civilians, and private sector contractors. Specifically, we analyzed available documentation for the selected case study systems on any evaluations the services performed of alternative workforces and the related decisions made about eligible personnel categories, and interviewed knowledgeable service officials about factors that informed those evaluations and decisions and any reasons for not evaluating workforce alternatives. We also interviewed officials from the Navy and OUSD(P&R) who are responsible for reviewing workforce and personnel planning documents for Navy and Marine Corps programs to understand any broader DOD or service workforce planning efforts for unmanned systems, and reasons for omitting certain personnel categories from consideration for systems that are in development. 
We reviewed our prior reports on workforce mix and DOD-commissioned workforce mix studies and interviewed officials from OUSD(P&R) to identify limitations and benefits associated with different categories of personnel, including military servicemembers, federal civilian employees of DOD, and private sector contractors. We reviewed the Navy's and the Marine Corps' policies on workforce planning to determine whether those policies provide more detailed guidance or criteria relative to those available in DOD's policies on circumstances for which alternative personnel sources should be considered or on the limitations and benefits associated with different workforce mix options. We also compared these service-level workforce planning policies with federal internal controls standards that emphasize the importance of having clear, updated policies that align with an organization's mission and goals.

To address our second objective, we reviewed the Navy's and the Marine Corps' efforts to develop and update personnel requirements for our selected case study systems, including documentation of steps taken to analyze and determine personnel requirements levels. We interviewed service officials about their views of the sufficiency of those personnel requirements for supporting training and deployment requirements for the selected systems. For any systems for which service officials were concerned about the sufficiency of related personnel requirements, we compared documentation of the requirements with DOD Directive 1100.4 and with a Navy instruction. The DOD policy states that personnel requirements should be driven by workload and established at the minimum levels necessary to accomplish mission and performance objectives. Navy Instruction 1000.16L states that personnel requirements must be validated as program changes dictate and at a minimum annually over a system's lifecycle to determine if a personnel update is required.
Further, we reviewed documentation of the life cycle cost estimate for the number of Marine Corps personnel required to operate and maintain the RQ-21 Blackjack, and of UAS squadrons' position on the sufficiency of those personnel requirements, and compared those documents with DOD guidance requiring that components determine a weapon system program's life cycle costs by planning for the many factors needed to support the system, including personnel, and with Office of Management and Budget guidance that states that to keep the cost analyses for capital assets, such as weapon systems, current, accurate, and valid, cost estimating should be continuously updated based on the latest information available as programs mature. In addition, we reviewed Navy policies on operating and maintaining UAS and documentation from the Marine Corps about the effect of those policies on UAS squadron personnel workload, and interviewed Navy and Marine Corps headquarters- and unit-level officials about those effects and any efforts underway to review and update policies. We then compared those efforts to review and update policies with DOD Directive 1100.4, which states that existing policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of personnel resources, and with federal internal controls standards that emphasize the importance of having clear, updated policies that align with an organization's mission and goals. Finally, we compared goals established in DOD's Unmanned Systems Integrated Roadmap, FY2013-2038, and Department of the Navy strategy documents on unmanned systems with federal internal controls standards that state that an agency's management should define objectives clearly to enable the identification of risk.

For our third objective, we reviewed the Navy's and the Marine Corps' steps to select, train, and track unmanned system operators to identify any challenges.
For the selected systems, we reviewed a combination of manpower estimate reports and personnel and training plan documents to identify approaches for staffing operators. We also reviewed personnel and training manuals describing prerequisites for related military qualifications and occupations. We interviewed command- and unit-level officials from the Navy and the Marine Corps to discuss the effectiveness of current staffing approaches for meeting their training and deployment requirements. Focusing on challenges with providing enough personnel to serve as UAS operators in particular, we also reviewed Navy reports on the retention of certain aviation personnel to serve as UAS operators, and we reviewed Marine Corps data on its UAS operator inventory and retention levels relative to its requirements and goals. Specifically, we reviewed Navy reports on retention for fiscal years 2015 through 2017 because data from earlier years were less relevant given the lower numbers of UAS inventories. We requested data from the Marine Corps on its inventories of and requirements for enlisted UAS operators for fiscal years 2007 through 2017 and on UAS officers for fiscal years 2013 (the first year of available data) through 2017. We requested retention data—actual numbers of personnel who reenlisted versus annual quotas—on enlisted UAS operators for fiscal years 2010 (the earliest year for which data were available) through 2017. We assessed the reliability of these Marine Corps data by administering questionnaires, interviewing relevant personnel responsible for maintaining and overseeing the systems that supplied the data, and manually checking the data for errors or omissions. Through these methods, we obtained information on the systems' ability to record, track, and report on these data, as well as on the quality control measures in place.
We found the inventory and requirements data to be sufficiently reliable for the purposes of describing personnel inventory trends and the sufficiency of operator personnel to meet requirements. We found that the retention data were of undetermined reliability, but we are reporting them because they are the data of record used by Marine Corps planning officials. We also reviewed Navy and Marine Corps financial incentives for retaining sufficient personnel to serve as UAS operators and compared those approaches with criteria from DOD's 2012 Eleventh Quadrennial Review of Military Compensation, which established that organizations should assess civilian supply and demand and civilian wages to determine the most cost-effective special and incentive pay strategies. Further, we compared the Marine Corps' efforts to address workforce challenges specific to the Marine Corps' UAS operator career field with a key principle of strategic human capital planning from our prior work, which states that agencies should ensure that flexibilities are part of their overall human capital strategy. In our prior work, we found that strategic human capital planning is an important component of an agency's effort to develop long-term strategies for acquiring, developing, and retaining staff needed for an agency to achieve its goals and of an agency's effort to align human capital activities with the agency's current and emerging mission. Specifically, we have found that an agency's efforts to conduct strategic human capital planning should include, among other things, building the capability needed to address administrative, educational, and other requirements important to supporting workforce strategies by ensuring that flexibilities are part of the overall human capital strategy.
We focused on workforce challenges in the Marine Corps, in particular, because it has a long-established career field for UAS operators, and the Navy does not yet have a separate career field for any of its unmanned systems operators. We identified workforce challenges within the Marine Corps’ UAS operator career field by reviewing a 2015 Marine Corps-sponsored survey of its pilot and UAS officer workforce. The survey included questions about satisfaction with career and benefits, and intentions to stay in the Marine Corps and the underlying reasons for these. Although officers in ranks of first lieutenant through lieutenant colonel were surveyed, we were unable to include majors and lieutenant colonels in reporting results for UAS officers because the Marine Corps aggregated those officers’ responses with those of majors and lieutenant colonels who operate other types of aircraft. By reviewing the survey methodology and interviewing an official involved in administering the survey and analyzing the results, we determined that the survey results were sufficiently reliable for reporting the perceptions about career satisfaction at a single point in time for UAS operators who answered those questions. In addition, we visited one of three active duty Marine Corps UAS squadrons, which we chose because it had the most deployment experience with the RQ-21 Blackjack UAS. We met with squadron leaders to discuss their views about UAS personnel requirements and staffing approaches. We also conducted eight small group discussions with active duty UAS operators and officers—separately for enlisted personnel and officers—to gain their perspectives on topics such as morale, workload, and career satisfaction. The opinions of Marine Corps UAS operators we obtained during our discussion groups are not generalizable to the population of UAS operators in the Marine Corps. 
Office of the Secretary of Defense
Joint Staff
Marine Corps
Office of the Deputy Commandant for Aviation
Office of the Deputy Commandant for Combat Development and Integration
Office of the Deputy Commandant for Manpower and Reserve Affairs
Marine Corps Systems Command
Marine Unmanned Aerial Vehicle Squadron 2

We conducted this performance audit from September 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Comments from the Department of Defense

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, key contributors to this report were Lori Atkinson (Assistant Director), Melissa Blanco, Tim Carr, Mae Jones, Amie Lesser, Felicia Lopez, Ben Sclafani, Mike Silver, and Paul Sturm.

Related GAO Products

Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017.
Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017.
High Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Military Compensation: Additional Actions Are Needed to Better Manage Special and Incentive Pay Programs. GAO-17-39. Washington, D.C.: February 3, 2017.
Unmanned Aerial Systems: Air Force and Army Should Improve Human Capital Planning for Pilot Workforces. GAO-17-53. Washington, D.C.: January 31, 2017.
Unmanned Aerial Systems: Further Actions Needed to Fully Address Air Force and Army Pilot Workforce Challenges. GAO-16-527T.
Washington, D.C.: March 16, 2016.
Military Personnel: Army Needs a Requirement for Capturing Data and Clear Guidance on the Use of Military for Civilian or Contractor Positions. GAO-15-349. Washington, D.C.: June 15, 2015.
Unmanned Aerial Systems: Actions Needed to Improve DOD Pilot Training. GAO-15-461. Washington, D.C.: May 14, 2015.
Air Force: Actions Needed to Strengthen Management of Unmanned Aerial System Pilots. GAO-14-316. Washington, D.C.: April 10, 2014.
Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD's Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.
Unmanned Aircraft Systems: Comprehensive Planning and a Results-Oriented Training Strategy Are Needed to Support Growing Inventories. GAO-10-331. Washington, D.C.: March 26, 2010.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
Why GAO Did This Study

The Department of the Navy has committed to rapidly grow its unmanned systems portfolio. It currently has at least 24 types of systems and has budgeted nearly $10 billion for their development and procurement for fiscal years 2018-2022. Personnel who launch, navigate, and recover the systems are integral to effective operations. Senate Report 114-255 included a provision for GAO to review the Navy's and the Marine Corps' strategies for unmanned system operators. GAO examined, among other things, the extent to which the Navy and the Marine Corps have (1) evaluated workforce alternatives (such as the use of civilians and contractors) for unmanned system operators and (2) developed and updated personnel requirements and related policies and goals for selected unmanned systems. GAO compared documentation on unmanned systems with DOD policies and conducted discussion groups with unmanned system operators.

What GAO Found

The Navy and the Marine Corps are rapidly growing their portfolios of unmanned aerial systems (UAS) and unmanned maritime systems and have opted to use military personnel as operators without evaluating alternatives, such as federal civilian employees and private sector contractors. Service officials stated that civilians or contractors are not viable alternatives and policies are unclear about when and how to use them. However, a June 2016 Department of Defense-commissioned study found that alternative staffing strategies could meet the UAS mission more cost-effectively. Military personnel may be the most appropriate option for unmanned systems, but without clarifying policies to identify circumstances in which civilians and contractors may serve in operational roles, the services could continue to make workforce decisions that do not consider all available resources.
The Navy and the Marine Corps have sufficient personnel requirements or efforts underway to develop personnel requirements for seven unmanned systems that GAO reviewed (see fig.), but requirements for one system (i.e., the RQ-21 Blackjack UAS) have not been updated. That system's requirements have not been updated because service entities disagree about whether they are sufficient. Since 2015, units have deployed with about two to three times the personnel that headquarters and command officials expected they would need. Marine Corps officials stated that the Blackjack's personnel requirements were based on an outdated concept of operations and are insufficient for supporting workloads. Without updating the personnel requirements for the Blackjack UAS, the services will lack current information about the number of personnel needed. The Department of the Navy has taken positive steps but has not fully evaluated and updated aviation policies that affect personnel requirements for certain UAS and lacks clear goals for informing future requirements for all of its UASs. GAO found that the policies do not fully account for differences between UASs of varying sizes and capabilities. These policies require, for example, that the Blackjack UAS be held to the same maintenance standards designed for larger aircraft and UAS, which in turn affects personnel requirements. Until the Department of the Navy evaluates and updates such policies and clarifies related goals, the services will be hampered in developing and updating future requirements as unmanned system inventories grow and operations expand.

What GAO Recommends

GAO is making ten recommendations, including that the Navy and the Marine Corps clarify policies to identify circumstances in which civilians and contractors may serve in operational roles and apply the policies to future evaluations; update personnel requirements for one UAS; and evaluate and update policies and goals to inform future personnel requirements.
DOD concurred with eight recommendations and partially concurred with two. As discussed in the report, GAO continues to believe that all ten are warranted.
Background

Reclamation and the Title XVI Program

As Interior's primary water management agency, Reclamation has had the mission of managing, developing, and protecting water and water-related resources in 17 western states since 1902. Reclamation has led or provided assistance in the construction of most of the large dams and water diversion structures in the West for the purpose of developing water supplies for irrigation, municipal water use, flood control, and habitat enhancement, among others. Reclamation is organized into five regions—Great Plains, Lower Colorado, Mid-Pacific, Pacific Northwest, and Upper Colorado—and the agency's central office in Denver provides technical and policy support. Each regional office oversees the water projects, including Title XVI projects and studies, located within its regional boundaries. The types of projects eligible under the Title XVI program include, among others, construction of water treatment facilities, pipelines to distribute reused water, and tanks and reservoirs to store reused water. The Title XVI program is one of several programs under Interior's WaterSMART (Sustain and Manage America's Resources for Tomorrow) Program. The WaterSMART program is implemented by Reclamation and the U.S. Geological Survey within Interior. According to an Interior document, the WaterSMART program focuses on identifying strategies to help ensure sufficient supplies of clean water for drinking, economic activities, recreation, and ecosystem health. Reclamation carries out its portion of the WaterSMART program by administering grants, including Title XVI grants for water reuse, conducting research, and providing technical assistance and scientific expertise.
Reclamation offers three types of grants to project sponsors under the Title XVI program: construction projects, which are projects to plan, design, or construct infrastructure for the treatment and distribution of reused water; feasibility studies, which are documents that generally identify specific water reuse opportunities, describe alternatives, and incorporate other considerations, such as the financial capability of the project sponsor; and research studies, which are studies to help states, tribes, and local communities establish or expand water reuse markets, improve existing water reuse facilities, or streamline the implementation of new water reuse facilities.

Key Terms Related to Water Reuse

Acre-foot of water: about 326,000 gallons.
Potable: water that is suitable for drinking.
Project sponsors: water districts, wastewater or sanitation districts, municipalities, tribes, and other entities that develop projects or studies eligible for Title XVI grants.

Federal awards for construction projects under the Title XVI program are generally limited to 25 percent of total project costs—up to $20 million in federal funding—and require a 75 percent nonfederal cost share from the project sponsor. Federal funding for feasibility studies under the Title XVI program is generally limited to 50 percent of the total study costs, up to $450,000, and federal funding for research studies is generally limited to 25 percent of the total study costs, up to $300,000. Reclamation generally awards Title XVI grants for construction projects to project sponsors in installments over multiple years before the federal funding maximum for each project is reached, whereas it generally awards the full amount for feasibility and research study grants in a single year.
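The cost-share limits described above reduce to a simple rule: the federal award is the lesser of a fixed percentage of total cost and a dollar ceiling. A minimal sketch of that rule (the percentages and ceilings are the report's figures; the function name and structure are illustrative, not Reclamation's):

```python
# Sketch of the general Title XVI federal cost-share rules described above.
# Percentages and ceilings are from the report; names are illustrative.

def federal_share(grant_type: str, total_cost: float) -> float:
    """Return the maximum federal award for a given total project/study cost."""
    rules = {
        # grant type: (federal share of total cost, federal funding ceiling)
        "construction": (0.25, 20_000_000),
        "feasibility": (0.50, 450_000),
        "research": (0.25, 300_000),
    }
    share, ceiling = rules[grant_type]
    return min(share * total_cost, float(ceiling))

# A $100 million construction project: 25 percent would be $25 million,
# but the award is capped at the $20 million ceiling.
print(federal_share("construction", 100_000_000))  # 20000000.0
print(federal_share("feasibility", 600_000))       # 300000.0
print(federal_share("research", 2_000_000))        # 300000.0
```

Note that the 75 percent nonfederal cost share for construction follows directly: the project sponsor covers whatever the federal award does not.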
From fiscal year 1992, when the Title XVI program was established, through fiscal year 2009, Congress authorized 53 Title XVI projects. Each of these projects was subject to a cap on the federal cost share. In fiscal years 1992 through 2010, Congress generally directed funding for these specific authorized projects each year. Starting in fiscal year 2011, Congress began appropriating funding for the Title XVI program without directing specific funding to individual projects. As a result, Reclamation started using a competitive process to award Title XVI grants to projects and studies, through which project sponsors with authorized projects applied for Title XVI grants. Only the 53 projects that were already authorized by Congress were eligible to apply for grants for construction projects. Section 4009(c) of the WIIN Act, enacted in December 2016, authorized an additional $50 million to be appropriated for water reuse projects. To be eligible to receive Title XVI grants under the WIIN Act, projects must submit a completed feasibility study to Reclamation, and Reclamation must review the study to determine whether, among other things, the project is technically and financially feasible and provides a federal benefit in accordance with the reclamation laws. Reclamation is then to submit a report with the results of its review to Congress, and projects determined to be feasible are then eligible to apply for grants under the competitive grant program established by the WIIN Act. Each feasibility study identifies an estimated project cost. Like most projects individually authorized prior to the WIIN Act, the federal share of this cost is generally capped at 25 percent, up to $20 million.
In addition to construction projects, Reclamation began awarding Title XVI grants to project sponsors for feasibility studies in fiscal year 2011 and for research studies in fiscal year 2016. Figure 1 shows a timeline of the Title XVI program.

Water Reuse

With water reuse, water that is typically unusable, such as municipal or industrial wastewater, undergoes additional treatment to make it suitable for certain purposes. For example, municipal wastewater typically undergoes primary and secondary treatment before it can be discharged into a river, stream, or other body of water. With water reuse, wastewater generally undergoes further (tertiary) treatment to remove additional nutrients and suspended solids and to disinfect the water. The treated water can then be reused for nonpotable uses, such as landscape or agricultural irrigation or industrial uses. In some cases, wastewater undergoes additional, advanced treatment—such as microfiltration and reverse osmosis—and may then be suitable for potable uses, such as injection into a groundwater basin or reservoir where it may later be extracted for drinking water. Figure 2 shows some of the typical treatment processes that may be applied to reused water, and figure 3 shows some of the typical uses of reused water. Several reports have shown that water reuse could offer significant untapped water supplies, particularly in coastal areas facing water shortages. For example, in a 2012 report on municipal wastewater reuse, the National Research Council of the National Academies estimated that U.S. municipalities discharged about 12 billion gallons of treated municipal wastewater each day into coastal waters. They estimated that reuse of these coastal discharges could directly augment available water sources by providing the equivalent of 27 percent of the municipal supply. Municipalities discharge another 20 billion gallons each day to inland locations.
While reuse of inland discharges has the potential to affect the water supply of downstream users by decreasing the amount of water available to them, we previously found that at least some of this volume could also be beneficial. Even with such potential uses, the Environmental Protection Agency reported in 2012 that only 7 to 8 percent of municipal wastewater was being intentionally reused in the United States.

Grants Management

In our past work, we have highlighted the importance of awarding competitive grants in a fair and transparent way and monitoring grants. In recent years, OMB has taken actions to help improve the effectiveness and efficiency of grantmaking across the federal government. In particular, in December 2014, OMB's Uniform Guidance became effective for new grant awards after adoption by federal grantmaking agencies, including Interior. The Uniform Guidance requires, among other things, that federal agencies provide public notices of funding opportunities, and these notices are to contain information, such as key dates and the merit and other criteria that the agency will use to evaluate applications. The Uniform Guidance also requires certain monitoring activities for federal grants, such as generally requiring grant recipients to submit financial reports.

Reclamation Awarded About $715 Million for Title XVI Projects and Studies, and Some Construction Projects Remain Eligible for Title XVI Grants

From fiscal years 1992 through 2017, Reclamation awarded about $715 million for 46 construction projects and 71 studies under the Title XVI program, based on our review of agency documents. Most of this funding—about $703 million—went toward construction projects, while the remaining awards were for feasibility and research studies. Some construction projects remain eligible for Title XVI grants.
Specifically, about $464 million in grant funding not yet awarded up to the federal ceiling remains for individually congressionally authorized Title XVI construction projects, and about $513 million remains in total estimated costs for projects eligible for Title XVI grants under the WIIN Act, as of August 2018.

Most of the $715 Million Reclamation Awarded Was for Title XVI Construction Projects

Across the three different types of grants offered under the Title XVI program—construction projects, feasibility studies, and research studies—Reclamation awarded about $715 million from fiscal years 1992 through 2017, according to agency documents. This $715 million awarded under Title XVI leveraged more than $2.8 billion in nonfederal cost share. Reclamation awarded most of this Title XVI funding for construction projects, as shown in table 1. Overall, Reclamation awarded about $703 million under Title XVI to 46 construction projects from fiscal years 1992 through 2017. Of these 46 construction projects that received awards, 43 were individually congressionally authorized construction projects and 3 were construction projects that were eligible for Title XVI grants under the WIIN Act, according to agency documents we reviewed. Additionally, Reclamation made awards for 71 studies—58 feasibility study grants since fiscal year 2011 and 13 research study grants since fiscal year 2016.

Some Construction Projects Remain Eligible for Title XVI Grants

Based on our review of Reclamation financial data, some construction projects remain eligible for Title XVI grants. Eligible project costs fell into two categories: (1) grant funding not yet awarded up to the federal ceiling for individually congressionally authorized Title XVI construction projects, and (2) the federal share of estimated costs identified in feasibility studies for projects eligible for Title XVI grants under the WIIN Act.
About $464 million in not-yet-awarded funding remained for 28 individually congressionally authorized Title XVI construction projects as of August 2018. Also, about $513 million remained in estimated project costs for the 40 construction projects that were eligible under the WIIN Act, as of August 2018, as shown in table 2 below. As of August 2018, of the 53 individually congressionally authorized construction projects, more than half—28 projects—had remaining project costs eligible for Title XVI grants. The 13 ongoing congressionally authorized projects had about $233 million in project costs that had not yet been awarded. Some project sponsors told us that they were in the process of designing or constructing projects. Others told us that while they were not currently designing or constructing projects, they had plans to pursue additional Title XVI grant awards in the future. More than one-third of the $233 million in remaining eligible project costs was for two projects—located in San Diego and San Jose, California—that were among the projects authorized when the Title XVI program was created in 1992. The 15 congressionally authorized projects with no planned construction had remaining project costs of about $231 million eligible for Title XVI grants. Project sponsors identified several reasons why they were not planning to apply for further grant awards. Specifically, several project sponsors said they had faced challenges in applying for further grants because language in the statutes authorizing the projects limited the scope of their projects. For example, one project sponsor told us that it was interested in expanding its water reuse demonstration facility but that it was not eligible to apply for additional Title XVI grants because the statute that authorized the project specifically authorized a demonstration facility.
In addition, one project sponsor stated that its project authorization had already reached its sunset date, which means the project can no longer apply for Title XVI grants. Some of the project sponsors with no construction planned said that they may consider applying for additional Title XVI grants under their existing authorizations in the future, should they decide to move forward with construction. However, others said that they had decided not to move forward with authorized projects and had no plans to apply for Title XVI grants in the future. For example, one project sponsor said that it had determined that its project was no longer financially feasible. In addition, as of August 2018, 40 projects had Reclamation-approved feasibility studies that had been transmitted to Congress, based on our review of agency documents, and were therefore eligible to apply for Title XVI construction grants under the WIIN Act. A total of about $513 million in project costs across these 40 projects remained eligible for Title XVI grants. Of the 40 projects, 20 applied for Title XVI grants in fiscal year 2017, and Reclamation selected 3 for awards. These 20 projects had about $269 million in project costs that remained eligible for Title XVI grants. Twenty projects did not apply for Title XVI grants in fiscal year 2017 and had about $244 million in project costs that remained eligible for these grants, as of August 2018.

Title XVI Projects and Studies Vary in Their Uses of Reused Water and Include Urban and Rural Areas

Title XVI projects and studies for fiscal years 1992 through 2017 cover various uses for reused water and include both urban and rural areas throughout the West, based on our review of agency data as well as documents from and interviews with project sponsors.
For example, Title XVI construction projects produce both nonpotable and potable reused water for a variety of purposes, such as landscape and agricultural irrigation, habitat restoration, and extraction as drinking water. The projects and studies funded by the Title XVI program include both urban and rural areas throughout the West, with California accounting for 36 construction projects and about 90 percent of total Title XVI funding.

Title XVI Projects Are Generally Large-Scale Infrastructure Projects that Produce Nonpotable and Potable Reused Water for a Variety of Purposes

Title XVI construction projects are generally large-scale infrastructure projects, such as water reuse treatment plants and pipelines, that produce, store, and distribute reused water for a variety of purposes, both nonpotable and potable. Since the inception of the Title XVI program, Reclamation has awarded Title XVI grants to construction projects that cumulatively provided nearly 390,000 acre-feet of reused water in 2017. According to Reclamation data, the projects funded by Title XVI individually delivered between 38 acre-feet of reused water and more than 100,000 acre-feet of water in fiscal year 2017. Most of these construction projects provided reused water for nonpotable uses across four main categories: (1) landscape irrigation, (2) agricultural irrigation, (3) commercial and industrial use, and (4) habitat restoration. Landscape irrigation. Landscape irrigation—including irrigation of golf courses, road medians, school grounds, parks, sports fields, and other green spaces—is the most common use of reused water produced by Title XVI projects, with 29 Title XVI projects producing reused water for this purpose, based on our analysis of documents from Reclamation and project sponsors. The reused, nonpotable water produced by such projects is generally distributed through purple-colored pipes, to denote that the water is not for drinking purposes.
For example, the Title XVI program provided grants to Eastern Municipal Water District—a water district located in Southern California—to help build water reuse infrastructure, including pipelines, pumping stations, and storage tanks. With this added storage capacity, the district has the ability to store more than 2 billion gallons of reused water, which is used to irrigate sports fields, golf courses, parks, school grounds, and medians, according to the project sponsor. The project sponsor noted that by maximizing use of its reused water, the district is reducing its dependence on water piped in from other parts of the state or region. Similarly, the Title XVI program provided grants to help build pipelines and reservoirs to distribute and store reused water for landscape irrigation and other purposes in other parts of California (see fig. 4). Agricultural irrigation. Reused water produced by Title XVI projects is also used to irrigate a variety of agricultural products, including fruits and vegetables, flowers, and vineyards. For example, the North Valley Regional Recycled Water Program is helping to provide a reliable water source for the Del Puerto Water District, which provides water to approximately 45,000 acres of farmland in California’s San Joaquin Valley, according to the project sponsor. The Del Puerto Water District has encountered water shortages in recent years, which have created economic hardships for growers in the area, according to the project sponsor. Title XVI grants provided under WIIN Act authority helped the district expand its reused water supply and distribution infrastructure and ensure a reliable, drought-resistant water supply, according to the project sponsor. In addition, reused water produced by the Watsonville Area Water Recycling Project near Watsonville, California, is used to irrigate strawberries and other fruits and vegetables as well as flowers.
The groundwater basin that serves the coastal region where Watsonville is located has been overdrafted for a long time, causing groundwater elevations to drop below sea level and leading to seawater intrusion that makes the groundwater unusable in certain areas, according to the project sponsor. This sponsor noted that Watsonville’s Title XVI project helps reduce demand on the overdrafted groundwater basin, which in turn helps to protect against further seawater intrusion and also provides a reliable, drought-tolerant water supply to help protect the region’s agricultural economy. Figure 5 shows flowers in a greenhouse that are irrigated with reused water from Watsonville’s Title XVI project. Commercial and industrial use. Reused water produced by Title XVI projects is used for cooling towers at power plants and data centers, oil production, toilet flushing in university and commercial buildings, and for other commercial and industrial purposes, according to project sponsors. For example, some of the reused water produced by the Southern Nevada Title XVI project is used for power plant cooling, and reused water from San Jose’s Title XVI project is used for cooling at data centers in California’s Silicon Valley. In addition, reused water from the Long Beach Area Reclamation Project is injected into the ground after oil is extracted, which helps prevent the ground from sinking, according to the project sponsor. Having access to a secure source of reused water can attract data centers and other businesses that require large amounts of water to areas that can guarantee access to reused water, according to a project sponsor and representatives from a nongovernmental water reuse organization we interviewed. Habitat restoration. Some Title XVI projects use reused water to restore wetlands or supply water to recreational lakes. For example, in California’s Napa Valley, reused water from the North Bay Title XVI project is being used to restore the Napa Sonoma Salt Marsh. 
Some threatened and endangered species, such as the Chinook salmon, have started returning to the area since the restoration began, according to the project sponsor. Reused water from this Title XVI project also provides other habitat benefits. For example, wineries in the area that irrigate with reused water do not need to divert as much water from streams, which leaves more water for fish, according to the project sponsor. In addition, the North Valley Regional Recycled Water Program in California’s San Joaquin Valley supplies reused water to wildlife refuges and wetlands, in addition to agricultural lands. This area has the largest remaining freshwater marsh in the western United States, which provides critical habitat for migratory birds as well as other species, according to the project sponsor (see fig. 6). There are also several potable projects that have been funded by Title XVI. These projects generally fall into two categories: (1) indirect-potable reuse and (2) desalination. Indirect-potable reuse. Title XVI has provided grants for indirect-potable projects, in which wastewater undergoes advanced treatment to obtain potable-quality water. The water is then injected into an environmental buffer, such as a groundwater aquifer, where it is left for a certain amount of time before it is extracted. The water is treated again before it is distributed as drinking water. One use for highly treated reused water is for seawater barriers, where water is injected into the ground to prevent the intrusion of high-salinity water into groundwater aquifers. Indirect-potable reuse has been gaining prominence, according to some project sponsors and representatives from nongovernmental water reuse organizations, with Title XVI grants going to several project sponsors for both the construction of facilities as well as research into optimal treatment methods.
For example, the Groundwater Replenishment System in Orange County, California, which was partially funded by Title XVI, takes highly-treated wastewater that would have previously been discharged into the Pacific Ocean and purifies it using an advanced treatment process. The water is then injected into a groundwater aquifer and is later extracted as drinking water that serves more than 800,000 people, according to the project sponsor. Figure 7 shows reused water at several different points in the treatment process and reverse osmosis treatment equipment at Orange County’s Groundwater Replenishment System. Desalination. Title XVI has provided grants for projects that treat brackish groundwater—water that has a salinity above freshwater but below seawater—and then feed it directly into potable water distribution systems or into a groundwater aquifer or surface water reservoir. For example, the Mission Basin Groundwater Purification Facility in Oceanside, California, desalinates brackish groundwater using reverse osmosis and other treatment methods. The reused water supplies about 15 percent of the city’s water needs, according to the project sponsor. In addition to Title XVI construction projects, Reclamation’s feasibility and research studies also vary in their planned uses of reused water. For example, one feasibility study project sponsor we interviewed was awarded a Title XVI grant to investigate the feasibility and potential impacts of reusing produced water from oil and gas operations in Oklahoma. The study plans to investigate possible dual benefits of reusing produced water, including (1) providing a new source of water for irrigation and other purposes and (2) reducing the disposal of produced water as a possible means for addressing increased seismic activity associated with oil and gas operations, according to the project sponsor. 
Another feasibility study project sponsor we interviewed from a rural, landlocked community in Washington State is investigating the feasibility of creating a virtual zero discharge system that would eliminate all wastewater disposal by reusing the wastewater. Similar to feasibility studies, Title XVI research studies address different topics. For example, one project sponsor we interviewed was researching how to optimize filtration of reused water using membrane filtration, which is a critical treatment process to reduce contaminants in water. Another project sponsor was researching impediments and incentives to using reused water for agricultural irrigation.

Title XVI Projects and Studies Include Western Urban and Rural Areas

Based on our review of agency documents, project sponsors in 12 of the 18 states eligible to participate in the Title XVI program were awarded at least one type of funding under Title XVI since the inception of the program in 1992, as shown in table 3. From fiscal year 1992 through fiscal year 2017, Reclamation awarded about $640 million—or about 90 percent of total awarded Title XVI funding—to projects in California, the majority of which was for construction projects. The concentration of projects in California reflects the early emphasis of the Title XVI program on Southern California and reducing its reliance on water provided by the Colorado River, as well as the high level of interest in the program in the state, according to a 2010 Congressional Research Service report. Overall, project sponsors in 9 states were awarded feasibility study grants, sponsors in 4 states were awarded research study grants, and sponsors in 8 states were awarded construction grants (see fig. 8). Title XVI projects and studies include western urban and rural areas. In particular, many Title XVI projects are sponsored by entities in urban areas that serve a large population base.
For example, the main part of the Los Angeles Area Water Supply Title XVI project is sponsored by the West Basin Municipal Water District, which has a service area of nearly 1 million people in 17 cities and unincorporated areas in Los Angeles County. This Title XVI project produces five different types of reused water to meet the unique needs of West Basin’s municipal, commercial, and industrial reuse customers, according to the project sponsor. Similarly, the City of San Diego, which has a population of about 1.4 million, was awarded Title XVI grants for a number of projects, including an indirect-potable reuse project anticipated to provide one-third of San Diego’s water supply by 2035, according to the project sponsor. Other Title XVI projects are sponsored by entities in rural areas and small cities. For example, the Hi-Desert Water District project serves a rural and economically disadvantaged community in the town of Yucca Valley, California, that has a population of about 20,000. This Title XVI project will fund facilities to collect, treat, and reuse treated wastewater, thereby eliminating degradation of the local groundwater supply and helping ensure a safer, reliable water supply for this community, according to the project sponsor. Similarly, the city of Round Rock, Texas, which has a population of about 120,000, sponsored the Williamson County Title XVI project. This project produces reused water for landscape irrigation, most of which is used to irrigate a 650-acre park, according to the project sponsor. Some Title XVI projects are sponsored by regional partnerships composed of different local entities. For example, in the late 1990s, 4 entities in Northern San Diego County—Carlsbad Municipal Water District, Leucadia Wastewater District, Olivenhain Municipal Water District, and San Elijo Joint Powers Authority—formed a coalition to leverage their water reuse programs; the coalition has since grown to 10 entities. 
This coalition sponsored an individually congressionally authorized Title XVI project, the North San Diego County project, and applied for a Title XVI grant for a new project eligible under the WIIN Act in fiscal year 2017. Similarly, in the northern part of the San Francisco Bay Area, 10 local agencies formed a regional partnership covering 315 square miles across Sonoma, Marin, and Napa Counties to sponsor the North Bay Water Reuse Program. According to the project sponsors involved in this regional partnership, using a regional partnership approach to water reuse projects provides an economy of scale; maximizes the ability to obtain local, state, and federal funding for the projects; and allows smaller, local entities to access funding and expertise for projects that would be out of reach without regional collaboration. See appendix I for more detailed information on specific Title XVI construction projects.

Reclamation’s Project Selection Process Is Consistent with Relevant Federal Grant Regulations, and Its Evaluation Criteria Have Changed in Recent Years

Reclamation’s process for selecting projects and studies to award grants under the Title XVI program involves announcing the funding opportunity, establishing criteria to evaluate potential projects, and reviewing applications to make award decisions. We found that this process is consistent with relevant federal grant regulations outlined in OMB’s Uniform Guidance, based on our review of agency documents and federal grant regulations. The criteria Reclamation uses to evaluate Title XVI projects have changed in recent years, with the elimination or addition of some criteria and changes in the weighting of others.
Reclamation Publicly Announces Funding Opportunities and Has a Merit Review Process for Applications, Which Is Consistent with Relevant Federal Grant Regulations

To start its selection process, Reclamation announces funding opportunities by developing annual funding opportunity announcements (FOA), which are publicly available on its website and on www.grants.gov. These FOAs contain information for applicants to consider prior to applying, including the types of eligible projects and studies, estimated funding available, information on the application review process, the application due date, and the criteria that Reclamation will use to score applications. Project sponsors submit applications for Title XVI grants to Reclamation in response to the FOAs, according to Reclamation officials. Reclamation officials then review the applications to ensure the projects are eligible and that applications are complete, according to agency officials we interviewed and documents we reviewed related to the selection process. Next, an application review committee scores eligible applications. The application review committee is composed of Reclamation staff representing the five regions and other staff with technical expertise. Committee members individually review and score each Title XVI application based on the evaluation criteria in the FOA. After the individual scoring, the application review committee meets collectively to discuss the scores; this meeting is generally facilitated by Title XVI program staff from Reclamation’s central office in Denver. If there are any outliers in the scores—e.g., if a committee member scores an application significantly higher or lower than the other members—the committee members are to discuss the discrepancy and may adjust their scores to help ensure fairness and consistency in how the applications are scored relative to the evaluation criteria, according to agency officials.
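The individual scoring, outlier flagging, and average-based ranking described above amount to simple arithmetic that can be sketched in code. This is an illustrative sketch only, not Reclamation's actual review tooling; the report does not define what counts as a score "significantly higher or lower," so the outlier rule and the 15-point threshold below are assumptions, and the application names and scores are hypothetical.

```python
from statistics import mean

def flag_outliers(scores, threshold=15):
    """Return indices of member scores that differ from the mean of the
    other members' scores by more than `threshold` points; these would be
    flagged for discussion by the review committee."""
    flagged = []
    for i, score in enumerate(scores):
        others = scores[:i] + scores[i + 1:]
        if others and abs(score - mean(others)) > threshold:
            flagged.append(i)
    return flagged

def rank_applications(member_scores):
    """Average each application's member scores and rank highest first."""
    averaged = {app: mean(s) for app, s in member_scores.items()}
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical applications and committee-member scores, for illustration only.
scores = {
    "Application A": [88, 90, 85, 87],
    "Application B": [70, 95, 72, 71],  # one member scored far higher
    "Application C": [80, 79, 82, 81],
}
print(flag_outliers(scores["Application B"]))  # [1]: the 95 would be discussed
print(rank_applications(scores))
```

After the committee discusses a flagged score, members may revise their scores, and the averaging and ranking step is simply rerun on the adjusted values.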
Following this discussion, Reclamation averages the members’ scores for each application and then ranks the applications based on the average scores. Reclamation creates a list of recommended projects and funding amounts for these projects, based on the rankings and congressional direction on the amount of funding for the Title XVI program in any given year. Reclamation’s process for selecting projects and studies to fund under the Title XVI program is consistent with relevant federal grant regulations outlined in the Uniform Guidance. Based on our review of Title XVI FOAs from fiscal years 2011 through 2018, all FOAs met the requirements prescribed by the Uniform Guidance. Specifically, the Uniform Guidance requires that grant funding opportunities be publicly announced and contain certain information, such as the evaluation criteria, key dates, and the process used to evaluate applications. Based on our review of FOAs, Reclamation’s FOAs were publicly announced and contained this information. Many project sponsors we interviewed said that Reclamation’s Title XVI application selection process is generally clear and well-managed and that Reclamation officials, at both the regional level and central office in Denver, were responsive and transparent throughout the selection process. Several project sponsors noted that Reclamation offered to debrief with Title XVI applicants after it made its grant selections; further, Reclamation officials provided constructive feedback to applicants to improve their applications in future years. Some project sponsors raised concerns about how long it takes WIIN Act-eligible Title XVI projects to be awarded grants. In particular, the WIIN Act provides that WIIN Act-eligible projects can only receive funding if an enacted appropriations act designates funding by project name, after Reclamation has recommended specific projects for funding and transmitted its recommendations to Congress.
Given the timing of Reclamation’s FOA process, WIIN Act-eligible projects selected in a given fiscal year generally need to be included in the subsequent fiscal year’s appropriations act. For example, congressional direction in May 2017 provided that $10 million of the total Title XVI funding was to go to Title XVI WIIN Act-eligible projects, and Reclamation sent Congress its fiscal year 2017 selections for WIIN Act-eligible projects to fund in November 2017. However, according to Reclamation officials, Reclamation could not begin awarding fiscal year 2017 funding to selected projects until March 2018, after enactment of the fiscal year 2018 appropriations act, which listed the selected projects by name. One project sponsor noted that this two-part process created challenges related to the project timeline and budget. Reclamation officials said that project sponsors have also expressed concerns to Reclamation about how any resulting delays may affect the ability of projects to move forward. Reclamation officials noted that this is a statutory requirement and that they had discussed this process with project sponsors to make them aware of the timing for the grants.

Criteria Used to Evaluate Projects Have Changed in Recent Years

Reclamation has changed the evaluation criteria it uses to select projects to fund under the Title XVI program since it began using a competitive process in fiscal year 2011. Reclamation first developed criteria for the annual Title XVI project selection process in 2010, which it applied starting in fiscal year 2011. Prior to that, Congress generally provided project-specific funding direction for individually authorized Title XVI projects.
According to agency officials, Reclamation developed the initial evaluation criteria for the annual Title XVI selection process based on (1) the language in the Reclamation Wastewater and Groundwater Studies and Facilities Act, as amended; (2) Reclamation goals and priorities for the program; and (3) the criteria Reclamation used to select projects to fund under the American Recovery and Reinvestment Act of 2009. Reclamation sought and incorporated public comments on the criteria in 2010. After that, Reclamation’s evaluation criteria for Title XVI construction projects generally remained unchanged from fiscal years 2011 through 2016. In fiscal years 2017 and 2018, Reclamation eliminated some criteria in the Title XVI FOAs for construction projects, added some new criteria, and changed the weighting of some criteria, based on our review of FOAs for those years. For example, in 2017, Reclamation more than doubled the weight of the economic criterion for the fiscal year 2017 FOA for WIIN Act-eligible projects, making it worth 35 percent of the points as compared to the previous 13 percent. Reclamation officials told us that these changes were made in response to the language of the WIIN Act—which listed a number of criteria for projects, including projects that provide multiple benefits—and comments they received from OMB during the review process for the revised criteria. In March 2018, Reclamation proposed further revisions to the evaluation criteria for the fiscal year 2018 Title XVI program and held a public comment period to solicit input on the proposed changes. The proposed FOA contained one set of criteria applicable to both types of eligible Title XVI construction projects—individually congressionally authorized and WIIN Act-eligible projects.
Reclamation received 21 comment letters on the criteria and, after analyzing the comments, officials said that they made additional changes to some of the criteria before issuing the final fiscal year 2018 FOA on May 30, 2018. For example, Reclamation added clarification to the economic criteria. See appendix II for a more detailed description of the final fiscal year 2018 Title XVI criteria, as well as changes to the criteria in fiscal years 2017 and 2018. Several project sponsors noted that changes to the evaluation criteria may affect which projects are more competitive in Reclamation’s application scoring and project selection process. In particular, several project sponsors and representatives from nongovernmental organizations we interviewed told us they believed that recent changes—particularly the increased weight on economic criteria, including cost effectiveness—may disadvantage small projects. Others said increasing the weight on cost effectiveness may disadvantage new projects that are just beginning construction of costly new treatment facilities versus projects that are expanding existing facilities. Reclamation officials we interviewed stated that the economic criteria take into account the extent to which projects would provide multiple benefits—not just cost effectiveness. They also pointed out that they clarified in the fiscal year 2018 FOA that there are a number of ways to provide information on project benefits in Title XVI applications, including by describing benefits in a qualitative manner. They added that feedback from project sponsors had been positive on the additional changes Reclamation made in response to earlier stakeholder comments on the economic criteria for the final fiscal year 2018 FOA.
Furthermore, Reclamation’s increased emphasis on economic criteria is consistent with federal principles on federal spending for water infrastructure projects, which state that federal infrastructure investments are to be based on systematic analysis of expected benefits and costs.

Reclamation’s Process for Monitoring Title XVI Grants Is Consistent with Relevant Federal Grant Regulations

To monitor Title XVI grants, Reclamation reviews financial and performance reports submitted by project sponsors, regularly communicates and visits with project sponsors to obtain information on the status of the projects, and collects information on the amount of water Title XVI projects deliver each year, which is included in Interior’s annual performance report. Financial and Performance Reports. In its financial assistance agreements for Title XVI grants, Reclamation generally requires project sponsors to submit financial and performance reports. Specifically, Reclamation generally requires that project sponsors submit financial and performance reports at least once per year and sometimes more frequently, as determined by the risk that each project poses, according to agency officials. Based on our review of reports, the financial reports list transactions related to Title XVI grants, such as expenditures, and the performance reports provide updates on the status of the Title XVI projects. Reclamation delineates its monitoring requirements, which generally include requirements for financial and performance reports, in the financial assistance agreements for Title XVI grants that each project sponsor agrees to prior to receiving funding. In our review of documents related to Reclamation’s monitoring process for Title XVI construction grants active in fiscal year 2017, we found that project sponsors submitted all but one of the financial and performance reports that Reclamation had required, and submitted all but two by their due dates or within 2 weeks of those dates.
We found that Reclamation’s requirements are consistent with relevant federal grant regulations in OMB’s Uniform Guidance, which provide that federal awarding agencies, including Reclamation, generally are to collect financial reports from project sponsors at least annually. Ongoing Communication and Site Visits. To further monitor the performance of Title XVI grants, Reclamation officials communicate regularly with project sponsors via telephone and email and conduct site visits to obtain information on the status of the projects, according to Reclamation officials and project sponsors. Based on our review of agency guidance, Reclamation generally is to conduct at least one site visit per year for projects with significant on-the-ground activities, such as construction projects. During the visits, agency officials generally are to receive updates on progress made on the project and determine if it is on schedule and meets the scope of work identified in the financial assistance agreement. Reclamation generally requires officials to document these visits and other monitoring activities in project files, according to agency documents. Through the site visits and other communication with project sponsors, agency officials may also provide information on program requirements and respond to project sponsors’ questions about the Title XVI program. For example, during site visits, Reclamation officials have responded to project sponsors’ questions about the status of payments and allowable project costs and clarified requirements for financial and performance reports, according to our review of agency documents and interviews with project sponsors. In our review of Reclamation’s Title XVI construction grants active in fiscal year 2017, we found that Reclamation generally conducted annual site visits for Title XVI construction projects that year. 
We found that this is consistent with federal grant regulations in OMB’s Uniform Guidance, which state that federal awarding agencies may make site visits as warranted by program needs. Data Collection. Reclamation also annually collects data on the amount of water delivered from each Title XVI construction project, as well as projected water deliveries for the coming year. Reclamation analyzes the water delivery data, compares projected data to actual deliveries, and follows up with project sponsors to understand any discrepancies, according to agency officials. For example, actual water deliveries could be lower than projected deliveries if communities implement water conservation measures that result in projects having less wastewater to treat and deliver for reuse. According to Reclamation officials, information on the amount of reused water delivered from Title XVI projects helps them to monitor progress on Title XVI projects and helps demonstrate the benefits and accomplishments of the Title XVI program. These data are consolidated and included in Interior’s annual performance report to demonstrate how the agency is meeting Interior’s objective of achieving a more sustainable and secure water supply. Collecting data on Title XVI water deliveries is consistent with the Title XVI program’s purpose of supporting water supply sustainability by providing financial and technical assistance to local water agencies for the planning, design, and construction of water reuse projects.

Agency Comments

We provided a draft of this report to the Department of the Interior for review and comment. The Department of the Interior provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Information on Construction Projects Eligible under the Title XVI Water Reclamation and Reuse Program

This appendix provides information on construction projects that are individually congressionally authorized under the Bureau of Reclamation’s Title XVI Water Reclamation and Reuse Program (Title XVI), as well as projects to which Reclamation awarded grants under the Water Infrastructure Improvements for the Nation Act (WIIN Act) funding opportunity in fiscal year 2017. Figure 9 below provides information on the 53 construction projects that have been individually authorized by Congress under the Title XVI program. The projects are ordered by the total amount of Title XVI funding each was awarded from fiscal years 1992 through 2017, from highest to lowest. Figure 10 below provides information on the three construction projects to which Reclamation awarded grants under the Title XVI WIIN Act funding opportunity in fiscal year 2017. The projects are ordered by the total Title XVI funding each was awarded in fiscal year 2017—the first year that grants were awarded under the WIIN Act—from highest to lowest.
Appendix II: Information on the Evaluation Criteria Used to Select Projects to Award Grants under the Title XVI Program This appendix provides detailed information on the evaluation criteria the Bureau of Reclamation used to select projects to award grants under the Title XVI Water Reclamation and Reuse Program (Title XVI). The six evaluation criteria Reclamation used to select construction projects to fund in fiscal year 2018 are as follows (points are out of a total of 110 points).
1. Water Supply (35 points)
a. Stretching Water Supplies (18 points): Points will be awarded based on the extent to which the project is expected to secure and stretch reliable water supplies. Consideration will be given to the amount of water expected to be made available by the project and the extent to which the project will reduce demands on existing facilities and otherwise reduce water diversions.
b. Contributions to Water Supply Reliability (17 points): Points will be awarded for projects that contribute to a more reliable water supply.
2. Environment and Water Quality (12 points): Points will be awarded based on the extent to which the project will improve surface, groundwater, or effluent discharge quality; will restore or enhance habitat for nonlisted species; will address problems caused by invasive species; or will provide water or habitat for federally listed threatened or endangered species. Indirect benefits of the project will also be considered under this criterion.
3. Economic Benefits (35 points)
a. Cost Effectiveness (10 points): Points will be awarded based on the cost per acre-foot of water expected to be delivered upon completion of the project and how the cost of the project compares to a nonreclaimed water alternative.
b. Economic Analysis and Project Benefits (25 points): Points will be awarded based on the analysis of the project’s benefits relative to the project’s costs.
4. Department of the Interior Priorities (10 points): Points will be awarded based on the extent to which the proposal demonstrates that the project supports the Department of the Interior’s priorities, such as utilizing natural resources and modernizing infrastructure.
5. Reclamation’s Obligations and Benefits to Rural or Economically Disadvantaged Communities (8 points)
a. Legal and Contractual Water Supply Obligations (4 points): Points will be awarded for projects that help to meet Reclamation’s legal and contractual obligations.
b. Benefits to Rural or Economically Disadvantaged Communities (4 points): Points will be awarded based on the extent to which the project serves rural communities or economically disadvantaged communities in rural or urban areas.
6. Watershed Perspective (10 points): Points will be awarded based on the extent to which the project promotes or applies a watershed perspective by implementing an integrated resources management approach, implementing a regional planning effort, forming collaborative partnerships with other entities, or conducting public outreach.
Reclamation changed some of its evaluation criteria in fiscal years 2017 and 2018. The fiscal year 2017 changes were made in response to requirements in the Water Infrastructure Improvements for the Nation Act (WIIN Act)—which listed several criteria for projects, including projects that provide multiple benefits—and comments from the Office of Management and Budget, according to Reclamation officials. The fiscal year 2018 changes were generally made in response to comments Reclamation received during the formal comment period it held in March and April 2018 to solicit input on the criteria, according to Reclamation officials. The changes to the criteria are shown in table 4. 
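As a rough illustration of how the rubric above totals to 110 points, the tally can be sketched as a simple capped sum. The criterion names are abbreviated for brevity, and the sample application scores are hypothetical, not drawn from any actual Title XVI proposal:

```python
# Sketch of the fiscal year 2018 Title XVI scoring rubric described above.
# Maximum points per criterion (sub-criteria summed into their parent).
MAX_POINTS = {
    "Water Supply": 35,                 # 18 stretching + 17 reliability
    "Environment and Water Quality": 12,
    "Economic Benefits": 35,            # 10 cost effectiveness + 25 analysis
    "Interior Priorities": 10,
    "Obligations and Disadvantaged Communities": 8,  # 4 + 4
    "Watershed Perspective": 10,
}
assert sum(MAX_POINTS.values()) == 110  # "out of a total of 110 points"

def score_application(scores: dict) -> int:
    """Total an application's points, capping each criterion at its maximum."""
    return sum(min(scores.get(name, 0), cap) for name, cap in MAX_POINTS.items())

# Hypothetical application, for illustration only.
sample = {"Water Supply": 30, "Economic Benefits": 28, "Watershed Perspective": 7}
print(score_application(sample))  # 65
```

The cap in `score_application` simply enforces that no criterion can contribute more than its stated maximum; actual Reclamation scoring involves reviewer judgment that a sketch like this cannot capture.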
Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Elizabeth Erdmann (Assistant Director), Lesley Rinner (Analyst-in-Charge), Margaret Childs, and Sierra Hicks made key contributions to this report. Ellen Fried, Timothy Guinane, Thomas M. James, John Mingus, Patricia Moye, Anne Rhodes-Kline, Sheryl Stein, and Sara Sullivan made additional contributions.
Why GAO Did This Study Population growth and drought are among the factors that have placed increasing demands on the U.S. water supply, particularly in the arid West. The reuse of wastewater can help address water management challenges by treating water that is typically unusable and then reusing it for beneficial purposes, such as irrigation, according to the Environmental Protection Agency. Reclamation's Title XVI program awards grants for the study and construction of water reuse projects in 17 western states and Hawaii. From fiscal years 1992 through 2009, Congress individually authorized some Title XVI projects. In 2016, Congress amended the Title XVI program to allow grants to be awarded to additional water reuse projects. GAO was asked to review the Title XVI program. This report describes, among other things, for the Title XVI program (1) grants Reclamation has awarded for projects and studies and remaining projects that are eligible for grants, (2) the types and locations of projects and studies that have received grants, and (3) Reclamation's process for selecting projects and studies and its consistency with federal grant regulations as well as how the program's evaluation criteria have changed since 2011. GAO reviewed relevant laws, regulations, and agency guidance; analyzed financial data for fiscal years 1992 through 2017; compared documents related to the project selection process against federal grant regulations; and interviewed agency officials and nonfederal project sponsors with different types of projects. What GAO Found The Bureau of Reclamation, within the Department of the Interior, awarded about $715 million in water reuse grants for 46 construction projects and 71 studies under the Title XVI Water Reclamation and Reuse Program (Title XVI) from fiscal year 1992 through fiscal year 2017, according to agency documents. Most of the Title XVI funding—about $703 million—has been awarded for construction projects. 
Some construction projects remain eligible for Title XVI grant funding. About $464 million in eligible Title XVI grant funding not yet awarded remains for projects that Congress individually authorized; for projects eligible under the 2016 amendments to the Title XVI program, about $513 million remains. Title XVI projects and studies cover various uses for reused water. For example, many projects GAO reviewed produce reused water for landscape and agricultural irrigation, as well as water that may later be extracted for drinking water, as shown in the figure. Title XVI projects are located in western urban and rural areas, with California accounting for 36 construction projects. Reclamation's process to select Title XVI projects and studies to receive grants involves announcing the funding opportunity, establishing criteria to evaluate potential projects, and reviewing applications to make award decisions, according to agency documents GAO reviewed. GAO found that Reclamation's grant award process is consistent with relevant federal regulations for awarding grants. For example, the Title XVI funding opportunity announcements GAO reviewed contained information required by the regulations, such as the criteria used to evaluate applications. In recent years, Reclamation has changed the criteria it uses to evaluate projects, eliminating or adding some criteria and changing the weighting of others. Reclamation officials said that these changes were made in part in response to statutory changes.
gao_GAO-17-783T
Prior GSA and FBI Assessments Showed That FBI Headquarters Facilities Did Not Fully Support the FBI’s Long-Term Requirements In November 2011, we reported that over the previous decade, the FBI and GSA conducted a number of studies to assess the Hoover Building and its other headquarters facilities’ strategic and mission needs. Through these studies, they determined the condition of the FBI’s current assets and identified gaps between current and needed capabilities, as well as studied a range of alternatives to meet the FBI’s requirements. According to these assessments, the FBI’s headquarters facilities did not fully support the FBI’s long-term security, space, and building condition requirements. Since our report, the assessment of the Hoover Building has not materially changed. For example: Security: Since September 11, 2001, the FBI mission and workforce have expanded, and the FBI has outgrown the Hoover Building. As a result, the FBI also operates in annexes, including some located in the National Capital Region. During our 2011 review, FBI security officials told us that they have some security concerns—to varying degrees— about the Hoover Building and some of the headquarters annexes. In our report, we noted that the dispersion of staff in annexes created security challenges, particularly for at least nine annexes that were located in multitenant buildings, where some space was leased by the FBI and other space was leased by nonfederal tenants. While this arrangement did not automatically put FBI operations at risk, it heightened security concerns. In addition, in January 2017, we found that the FBI occupies space leased from foreign owners in at least six different locations, including one in Washington, D.C. Further, federal officials who assess foreign investments told us at that time that leasing space in foreign-owned buildings could present security risks, such as espionage and unauthorized cyber and physical access. 
Space: In 2011, we reported that FBI and GSA studies showed that much of the Hoover Building’s approximately 2.4 million gross square feet of space is unusable, and the remaining usable space is not designed to meet the needs of today’s FBI. Moreover, the Hoover Building’s original design is inefficient, according to GSA assessments, making it difficult to reconfigure space to promote staff collaboration. For example, in its fiscal year 2017 prospectus for the proposed FBI headquarters consolidation project, GSA noted that the Hoover Building was designed at a time when the FBI operated differently, and it cannot be redeveloped to provide the necessary space to consolidate the FBI Headquarters components or to meet the agency’s current and projected operational requirements. As a result, the FBI reported facing several operational and logistical challenges. We similarly noted in our prior work in 2011 that space constraints at the Hoover Building and the resulting dispersion of staff sometimes prevented the FBI from physically locating certain types of analysts and specialists together, which in turn hampered collaboration and the performance of some classified work. Building condition: In our 2011 report, we noted that the condition of the Hoover Building was deteriorating, and GSA assessments had identified significant recapitalization needs. At that time, we found that GSA had decided to limit investments in the Hoover Building to those necessary to protect health and safety and keep building systems functioning while GSA assessed the FBI’s facility needs. We found that this decision increased the potential for building system failures and disruption to the FBI’s operations. 
Given that the FBI would likely remain in the building for at least several more years, we recommended that GSA evaluate its strategy to minimize major repair and recapitalization investments and take action to address any facility condition issues that could put FBI operations at risk and lead to further deterioration of the building. In 2014, in response to our recommendation, GSA evaluated its strategy for the Hoover Building and determined it needed to complete some repairs to ensure safety and maintain tenancy in the building. For example, in 2014, GSA funded contracts to waterproof portions of the building’s mezzanine level to prevent water intrusion into the building and repair the concrete facade, small sections of which had cracked and fallen from the building. In July 2017, GSA and FBI officials stated that they cancelled the procurement for the new FBI headquarters consolidation project, noting that there was a lack of funding necessary to complete the procurement. GSA added that the cancellation of the procurement did not lessen the need for a new FBI headquarters, and that GSA and the FBI would continue to work together to address the space requirements of the FBI. GSA Has Limited Successes in Completing Recent Swap Exchanges, but Has Plans to Improve the Process In July 2014, we reported that the swap exchange approach can help GSA address the challenges of disposing of unneeded property and modernizing or replacing federal buildings. GSA officials told us that swap exchanges can help GSA facilitate construction projects given a growing need to modernize and replace federal properties, shrinking federal budgets, and challenges obtaining funding. Specifically, GSA officials noted that swap exchanges allow GSA to immediately apply the value of a federal property to be used in the exchange to construction needs, rather than attempting to obtain funds through the appropriations process. 
In our 2014 report, GSA officials stated that the exchanges can be attractive because the agency can get construction projects accomplished without having to request full upfront funding for them from Congress. In addition, because swap exchanges require developers or other property recipients to complete the agreed-upon GSA construction projects prior to the transfer of the title to the current property GSA is exchanging, federal agencies can continue to occupy the property during the construction process for the new project, eliminating the need for agencies to lease or acquire other space to occupy during the construction process. GSA has limited experience in successfully completing swap exchange transactions and has cancelled several recently proposed swap exchanges. More specifically, in 2016 we reported that GSA had only completed transactions using the swap exchange authority for two small (under $10 million each) swap exchanges completed in Atlanta, Georgia, in 2001 and in San Antonio, Texas, in 2012. Furthermore, GSA has faced a number of obstacles in its use of this authority. For example, for our 2014 report, we reviewed five projects identified since August 2012 in which GSA solicited market interest in exchanging almost 8 million square feet in federal property for construction services or newly constructed assets. However, GSA chose not to pursue swap exchanges in all five of these projects, including the proposed FBI headquarters consolidation project. For example, GSA officials told us that there was little or no market interest in potential swap exchanges in Baltimore, Maryland, and Miami, Florida, and that GSA chose to pursue different approaches. Respondents to the solicitations for these two GSA swap exchanges noted that GSA did not provide important details, including the amount of investment needed in the federal properties and GSA’s specific construction needs. 
In addition, from 2012 to 2015, GSA pursued a larger swap exchange potentially involving up to five federal properties located in the Federal Triangle South area of Washington, D.C., to finance construction at GSA headquarters and other federal properties. In 2013, GSA decided to focus on exchanging two buildings, the GSA Regional Office Building and the Cotton Annex, based on input from potential investors. On February 18, 2016, GSA decided to end its pursuit of the exchange, saying in a memorandum supporting this decision that private investor valuations for the two buildings fell short of the government’s estimated values. After the discontinuation of the Federal Triangle swap exchange project, we reported in 2016 that GSA officials noted they planned to improve the swap exchange process, including the property appraisal process, outreach to stakeholders to identify potential project risks for future projects, and to the extent possible, mitigate such risks. However, we also reported that several factors may continue to limit the applicability of the agency’s approach. Specifically, the viability of swap exchanges may be affected by specific market factors, such as the availability of alternative properties. In addition, the specific valuation approach used by appraisers or potential investors may reduce the viability of the swap exchange. For example, in reviewing the proposed Federal Triangle project, we found in 2016 that the proposals from two of the investment firms valued the two federal buildings involved in the proposed swap at substantially less than GSA’s appraised property value. In addition, swap exchanges can require developers to spend large sums on GSA’s construction needs before receiving title to the federal property used in the exchanges. We found in 2014 that GSA’s solicitations have not always specified these construction needs in sufficient detail. 
Consequently, developers may be unable to provide meaningful input, and GSA could miss swap exchange opportunities. In 2014, we recommended that GSA develop criteria for determining when to solicit market interest in a swap exchange. GSA agreed with the recommendation and has since updated its guidance to include these criteria. In January 2017, GSA agreed to a swap exchange for the U.S. Department of Transportation Volpe Center in Cambridge, Massachusetts. After a competitive process, GSA selected the Massachusetts Institute of Technology (MIT) as its exchange partner for the existing Department of Transportation (DOT) facility. Per the agreement, MIT will construct a new DOT facility on a portion of a 14-acre site to which DOT has title and, in exchange, will receive title to the remaining portion of the site that will not be used by DOT, which is located near its main campus. GSA indicated that, once completed, the project will provide $750 million in value to the federal government in the form of the design and construction services and value-equalization funds from MIT. Various Alternative Funding Mechanisms for Federal Property Exist Our prior work has identified a number of alternative approaches to funding real property projects. In March 2014, we reported that upfront funding is the best way to ensure recognition of commitments made in budgeting decisions and to maintain fiscal controls. However, obtaining upfront funding for large acquisitions such as the Hoover Building replacement can be challenging. Congress has provided some agencies with specific authorities to use alternative funding mechanisms for the acquisition, renovation, or disposal of federal real property without full, upfront funding. Table 1 outlines selected funding mechanisms, and considerations for each mechanism we identified in our 2014 report. 
Some of these alternative mechanisms allow selected agencies to meet their real property needs by leveraging other authorized resources, such as retained fees or land swaps with a private sector partner. Funding mechanisms leverage both monetary resources, such as retained fees, and non-monetary resources, such as property exchanged in a land swap or space offered in an enhanced use lease. In some cases, the funding mechanism may function as a public-private partnership intended to further an agency’s mission by working with a partner to leverage resources. Some of these mechanisms allow the private sector to provide the project’s capital—at their cost of borrowing. The U.S. federal government’s cost of borrowing is lower than the private sector’s. When the private sector provides the project capital, the federal government later repays these higher private sector borrowing costs (e.g., in the form of lease payments). In some cases, factors such as lower labor costs or fewer requirements could potentially help balance the higher cost of borrowing, making partner financing less expensive. Our 2014 report also identifies budgetary options—within the bounds of the current unified budget—to meet real property needs while helping Congress and agencies make more prudent long-term decisions. In 2014, we reported that projects with alternative funding mechanisms present multiple forms of risk that are shared between the agency and any partner or stakeholder. Further, we noted project decisions should reflect both the likely risk and the organization’s tolerance for risk. Incorporating risk assessment and management practices into decisions can help organizations recognize and prepare to manage explicit risks (e.g. financial and physical) and implicit risks (e.g. reputational). For example, clearly defined lease terms may help agencies manage risks of costs for unexpected building repairs. 
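The borrowing-cost tradeoff described above can be made concrete with a small level-payment (annuity) calculation. The principal, rates, and term below are purely illustrative assumptions, not figures from GAO's report:

```python
# Minimal sketch of the borrowing-cost point above: the same project
# capital repaid as a level annuity costs more per year at a higher
# (private sector) rate than at a lower (federal) rate. All figures
# here are hypothetical, for illustration only.
def annual_payment(principal: float, rate: float, years: int) -> float:
    """Level annual payment that fully repays `principal` over `years` at `rate`."""
    return principal * rate / (1 - (1 + rate) ** -years)

capital = 100_000_000   # assumed project capital
years = 30              # assumed repayment term
federal = annual_payment(capital, 0.03, years)   # assumed federal borrowing rate
private = annual_payment(capital, 0.06, years)   # assumed private sector rate

print(f"federal: ${federal:,.0f}/yr, private: ${private:,.0f}/yr")
# The gap between the two is what the government repays (e.g., via
# lease payments) when a partner finances the project at its higher
# cost of borrowing.
```

As the report notes, lower labor costs or fewer requirements on the partner's side could narrow this gap in practice; the formula captures only the financing component.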
Further considerations we noted in our 2014 report include the availability of an appropriate partner—and that partners should bring complementary resources, skills, and financial capacities to the relationship—and management of the relationship with that partner. While different funding mechanisms have been used as an alternative to obtaining upfront funding for federal real property projects, changes to the budgetary structure itself—within the bounds of the unified budget that encompasses the full scope of federal programs and transactions—may also help agencies meet their real property needs. Such alternatives may include changing existing or introducing new account structures to fund real property projects. Our previous work identified options for changes within the current discretionary budget structure and options on the mandatory side of the budget. Alternative budgetary structures may change budgetary incentives for agencies and therefore help Congress and agencies make more prudent long-term fiscal decisions. Chairman Barrasso, Ranking Member Carper, and Members of the Committee, this concludes my prepared statement. I am happy to answer any questions you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Mike Armes (Assistant Director), Colin Ashwood, Matt Cook, Joseph Cruz, Keith Cunningham, Alexandra Edwards, Carol Henn, Susan Irving, Hannah Laufe, Diana Maurer, John Mortin, Monique Nasrallah, Matt Voit, Michelle Weathers, and Elizabeth Wood. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study GSA, which manages federal real property on behalf of other federal agencies, faces challenges in funding new construction projects due to budget constraints—including obtaining upfront funding—among other reasons. One type of transaction, called a swap exchange, enables GSA to apply the value of federal property to finance construction without relying on appropriated funds. Under such an exchange, GSA transfers the title of the unneeded property to a private investor after receiving the agreed upon construction services at another location. GSA proposed a swap exchange procurement for construction of a new FBI headquarters building in exchange for the Hoover Building and appropriations to compensate for the difference in value between the Hoover Building and the new building. GSA cancelled this procurement in July 2017 due to lack of funding. This statement addresses (1) GSA's and FBI's assessments of the Hoover Building, (2) GSA efforts to implement swap exchanges, and (3) alternative approaches to funding real property projects. It is based on GAO's body of reports on real property from 2011 to 2017, and selected updates from GSA. What GAO Found In November 2011, GAO reported that, according to General Services Administration (GSA) and Federal Bureau of Investigation (FBI) assessments, the FBI's headquarters building (Hoover Building) and its accompanying facilities in Washington, D.C., did not fully support the FBI's long-term security, space, and building condition requirements. Since GAO's report, the assessments have not materially changed, for example: Security: GAO's prior work noted that the dispersion of staff in annexes creates security challenges, including where some space was leased by the FBI and other space was leased by nonfederal tenants. Earlier this year, GAO reported the FBI is leasing space in D.C. from foreign owners. 
Space: In 2011, GAO reported that FBI and GSA studies showed that much of the Hoover Building is unusable. GSA noted in its fiscal year 2017 project prospectus for the FBI headquarters consolidation that the Hoover Building cannot be redeveloped to meet the FBI's current needs. Building Condition: In GAO's 2011 report, GAO noted that the condition of the Hoover Building was deteriorating, and GSA assessments identified significant recapitalization needs. Since GAO's report and in response to GAO's recommendation, GSA has evaluated its approach to maintaining the building and completed some repairs to ensure safety. GSA has limited experience in successfully completing swap exchange transactions and chose not to pursue several proposed swap exchanges, most recently the planned swap exchange for the Hoover Building. GSA has developed criteria for determining when to solicit market interest in a swap exchange, in response to recommendations in GAO's 2014 report. In addition, GSA officials told GAO that they planned to improve the swap exchange process, including the property appraisal process, outreach to stakeholders to identify potential risks associated with future projects, and to the extent possible, mitigate such risks. Nevertheless, several factors may continue to limit use of swap exchanges, including market factors, such as the availability of alternative properties and an investor's approach for valuing properties. For example, in reviewing a proposed swap exchange in Washington, D.C., GAO found in a 2016 report that the proposals from two firms valued the two federal buildings involved in the proposed swap substantially less than GSA's appraised property value. In a 2014 report, GAO identified a number of alternative approaches to funding real property projects. 
Congress has provided some agencies with specific authorities to use alternative funding mechanisms—including the use of private sector funds or land swaps—for the acquisition, renovation, or disposal of federal real property without full, upfront funding, though GAO has previously reported that upfront funding is the best way to ensure recognition of commitments made in budgeting decisions and maintain fiscal controls. GAO has reported that projects with alternative funding mechanisms present multiple forms of risk that are shared between the agency and any partner or stakeholder. In addition, alternative budgetary structures could be established, such as changing existing or introducing new account structures to fund real property projects. What GAO Recommends GAO has made recommendations in the past to GSA on various real property issues, including to develop additional guidance for swap exchanges and to evaluate its approach to maintaining the Hoover Building. GSA agreed with these two recommendations and addressed them.
gao_GAO-18-273
Background JWST is envisioned to be a large deployable space telescope, optimized for infrared observations, and the scientific successor to the aging Hubble Space Telescope. JWST is being designed for a 5-year mission to find the first stars, study planets in other solar systems to search for the building blocks of life elsewhere in the universe, and trace the evolution of galaxies from their beginning to their current formation. JWST is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. With a 6.5-meter primary mirror, JWST is expected to operate at about 100 times the sensitivity of the Hubble Space Telescope. JWST’s science instruments are designed to observe very faint infrared sources and therefore are required to operate at extremely cold temperatures. To help keep these instruments cold, a multi-layered tennis court-sized sunshield is being developed to protect the mirrors and instruments from the sun’s heat. The JWST project is divided into three major segments: the observatory segment, the ground segment, and the launch segment. When complete, the observatory segment of JWST is to include several elements (Optical Telescope Element (OTE), Integrated Science Instrument Module (ISIM), and spacecraft) and major subsystems (sunshield and cryocooler). The hardware configuration created when the Optical Telescope Element and the Integrated Science Instrument Module were integrated, referred to as OTIS, is not considered an element by NASA, but we categorize it as such for ease of discussion. Additionally, JWST is dependent on software to deploy and control various components of the telescope, and to collect and transmit data back to Earth. The elements, major subsystems, and software are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1 for an interactive graphic that depicts the elements and major subsystems of JWST. 
For the majority of work remaining, the JWST project is relying on two contractors: Northrop Grumman Corporation and the Association of Universities for Research in Astronomy’s Space Telescope Science Institute (STScI). Northrop Grumman plays the largest role, developing the sunshield, the Optical Telescope Element, the spacecraft, and the Mid-Infrared Instrument’s cryocooler, in addition to integrating and testing the observatory. STScI’s role includes soliciting and evaluating research proposals from the scientific community, and receiving and storing the scientific data collected, both of which are services that it currently provides for the Hubble Space Telescope. Additionally, STScI is developing the ground system that manages and controls the telescope’s observations and will operate the observatory on behalf of NASA. JWST will be launched on an Ariane 5 rocket, provided by the European Space Agency. JWST depends on 22 deployment events—more than a typical science mission—to prepare the observatory for normal operations on orbit. For example, the sunshield and primary mirror are designed to fold and stow for launch and deploy once in space. Due to its large size, it is nearly impossible to perform deployment tests of the fully assembled observatory, so the verification of deployment elements is accomplished by a combination of lower level component tests in flight-simulated environments; ambient deployment tests for assembly, element, and observatory levels; and detailed analysis and simulations at various levels of assembly. Schedule and Cost Reserves for NASA Projects We have previously reported that complex development efforts like JWST face numerous risks and unforeseen technical challenges, which oftentimes can become apparent during integration and testing. 
To accommodate unanticipated challenges and manage risk, projects reserve extra time in their schedules, which is referred to as schedule reserve, and extra money in their budgets, which is referred to as cost reserve. Schedule reserve is allocated to specific activities, elements, and major subsystems in the event of delays or to address unforeseen risks. Each JWST element and major subsystem has been allocated schedule reserve. When an element or major subsystem exhausts schedule reserve, it may begin to affect schedule reserve on other elements or major subsystems whose progress is dependent on prior work being finished for its activities to proceed. The element or major subsystem with the least amount of schedule reserve determines the critical path for the project. Any delay to an activity that is on the critical path will reduce schedule reserve for the whole project, and could ultimately impact the overall project schedule. Cost reserves are additional funds within the project manager’s budget that can be used to address unanticipated issues for any element or major subsystem, and are used to mitigate issues during the development of a project. For example, cost reserves can be used to buy additional materials to replace a component or, if a project needs to preserve schedule reserve, reserves can be used to accelerate work by adding shifts to expedite manufacturing. NASA’s Goddard Space Flight Center (Goddard)—the NASA center with responsibility for managing JWST— has issued procedural requirements that establish the levels of both cost and schedule reserves that projects must hold at various phases of development. In addition to cost reserves held by the project manager, management reserves are funds held by the contractors that allow them to address cost increases throughout development. 
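The least-reserve rule described above (the element or major subsystem with the least schedule reserve sets the project's critical path, and any slip there consumes project-level reserve) can be sketched as follows. The element names mirror the JWST discussion, but the reserve figures are hypothetical:

```python
# Illustrative sketch of schedule-reserve tracking; the day counts are
# hypothetical, not actual JWST project data.
schedule_reserve_days = {
    "OTE": 90,
    "ISIM": 120,
    "Spacecraft": 45,
    "Sunshield": 30,
    "Cryocooler": 75,
}

def critical_path_element(reserves: dict) -> str:
    """The element with the least remaining schedule reserve drives the critical path."""
    return min(reserves, key=reserves.get)

def apply_delay(reserves: dict, element: str, days: int) -> dict:
    """A delay reduces that element's reserve (never below zero); if the
    element is on the critical path, the whole project's reserve shrinks
    by the same amount."""
    updated = dict(reserves)
    updated[element] = max(0, updated[element] - days)
    return updated

print(critical_path_element(schedule_reserve_days))  # Sunshield
after = apply_delay(schedule_reserve_days, "Sunshield", 10)
print(min(after.values()))  # 20
```

In practice the critical path also depends on task dependencies between elements, as the report notes; this sketch captures only the least-reserve comparison.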
We have found that management reserves should contain 10 percent or more of the cost to complete a project and are generally used to address various issues tied to the contract’s scope.

History of Cost Growth and Schedule Delays

JWST has experienced significant cost increases and schedule delays. Prior to being approved for development, cost estimates of the project ranged from $1 billion to $3.5 billion, with expected launch dates ranging from 2007 to 2011. Before 2011, early technical and management challenges, contractor performance issues, low levels of cost reserves, and poorly phased funding levels caused JWST to delay work after cost and schedule baselines were established, which contributed to significant cost and schedule overruns, including launch delays. In June 2010, the Chair of the Senate Subcommittee on Commerce, Justice, Science, and Related Agencies requested that NASA obtain an independent review of JWST. NASA commissioned the Independent Comprehensive Review Panel, which issued its report in October 2010 and concluded that the baseline funding did not allot adequate reserves, resulting in an unexecutable project. Following this review, the JWST program underwent a replan in September 2011, and in November of that same year, Congress placed an $8 billion cap on the formulation and development costs for the project. On the basis of the replan, NASA rebaselined JWST with a life-cycle cost estimate of $8.835 billion, which included additional money for operations and a planned launch in October 2018. The revised life-cycle cost estimate included a total of 13 months of funded schedule reserve.

Previous GAO Reviews of JWST Project

We have previously found that since the project’s replan in 2011, the JWST project has met its cost and schedule commitments.
In our most recent report in December 2016, we found that the project was still operating within its committed schedule while in its riskiest phase of development—integration and test—but had used about 3 months of schedule reserve since our previous December 2015 report. In addition, we found that the project was facing numerous risks and single points of failure before launch. Finally, we found that while the project was meeting its cost commitments despite technical and workforce challenges, the observatory contractor had continued to maintain a larger workforce for longer than planned in order to address technical issues. In these prior reports, we have made recommendations with regard to improving cost and schedule estimating, updating risk assessments, and strengthening management oversight. NASA has generally agreed and taken steps to implement a number of our recommendations. For example, in December 2015, we recommended that the JWST project require contractors to identify, explain, and document anomalies in contractor-delivered monthly earned value management reports. NASA concurred with this recommendation and, in February 2016, directed the contractors to implement the actions stated in the recommendation. However, NASA did not implement some recommendations, which if implemented, may have provided insight into the challenges it now faces. For example, in December 2012, we recommended the JWST project update its joint cost and schedule confidence level (JCL), a point-in-time estimate that, among other things, includes all cost and schedule elements and incorporates and quantifies known risks. NASA policy requires projects to establish commitment baselines at a 70 percent confidence level. Although NASA concurred with this recommendation, it did not take steps to implement it. An updated JCL may have portended the current schedule delays, which could have been proactively addressed by the project. 
Considerable Progress Made Across JWST Project, but Integration and Test Challenges Have Delayed Launch at Least 5 Months with Further Delays Likely

While much progress on hardware integration and testing and several risk reduction efforts have occurred over the past several months, the JWST project also used all of its schedule reserves established at the replan in 2011 to address various technical issues, including a test anomaly on the telescope and sunshield hardware challenges. In September 2017, the JWST project requested a launch window at least 5 to 8 months later than the planned October 2018 launch readiness date, based on the results of a schedule risk assessment that showed that various components of the spacecraft element integration were taking longer to complete than expected. The new launch window included up to 4 months of additional schedule reserves. However, shortly after requesting the revised launch window from the European Space Agency (ESA), which will contribute the launch vehicle, the project learned from Northrop Grumman that up to another 3 months of schedule reserve use was expected, due to lessons learned from conducting deployment exercises of the spacecraft element and sunshield. After incorporating some schedule efficiencies, the project now has 1.5 months of schedule reserve remaining. Given the remaining integration and test work ahead—the phase in development where problems are most likely to be found and schedules tend to slip—and risks remaining to be reduced to acceptable levels, coupled with a low level of schedule reserves, we believe that additional delays to the project’s launch readiness date are likely.
JWST Project Completed Significant Integration and Test Work

Since our last report, the JWST project has made considerable progress toward completing the third and fourth of its five total integration and test phases—for the combined optical telescope element and integrated science instrument module (OTIS) and the spacecraft element, respectively. Previously, the project and Northrop Grumman completed the Integrated Science Instrument Module and the Optical Telescope Element integration phases in March 2016, as shown in Figure 2 below.

OTIS progress: Hardware integration and two of three key environmental tests—acoustics and vibration—were completed in 2016 and early 2017, respectively. The third key test, cryovacuum—which was conducted in a large cryovacuum chamber to ensure the telescope can operate at the near-absolute zero cryogenic temperatures of space—began in July 2017 at Johnson Space Center and successfully concluded in October 2017. The project identified a technical issue with the stability of the optical mirror that affects image quality, and by conducting some additional testing, determined that it was caused by a test equipment setup issue and not related to the flight hardware itself. Project officials stated that they plan to delay shipping the completed OTIS element to the Northrop Grumman facility in California for final integration with the spacecraft element from late December 2017 to February 2018. According to project officials, the delay allows the project to shift some of the work to prepare OTIS for integration with the spacecraft—such as cleaning the mirrors—to Johnson Space Center, where it will not have to share space in the crowded clean room at Northrop Grumman while sunshield fold and stow activities are ongoing. OTIS is expected to arrive at Northrop Grumman months ahead of its need date for integration into the observatory.
Spacecraft element progress: All spacecraft element hardware has been delivered and mechanical integration of spacecraft hardware— including the five layers of the sunshield—is largely complete. Northrop Grumman has also completed a folding operation and the first full deployment of the integrated spacecraft element. Northrop Grumman plans to refold the sunshield and complete one more deployment cycle, after environmental testing, in this phase of integration and testing. The project and its contractors conducted risk reduction testing on OTIS and the spacecraft elements to reduce risk for challenging environmental tests on flight hardware. These tests allowed the project and its contractors to practice processes and procedures for testing on flight hardware to create a more efficient test flow and proactively address issues before flight hardware tests commenced. For example, the second risk reduction test on the OTIS pathfinder hardware showed that vibration levels inside the test chamber were too high, and adjustments to the ground support equipment were implemented to address this issue. Additionally, Northrop Grumman officials noted that risk reduction tests on the spacecraft element have helped demonstrate facility capability and logistics for the upcoming tests of flight hardware. The project has also progressed in preparing the software and ground systems that will operate the observatory and manage and control the telescope’s observations. According to NASA’s Independent Verification and Validation group, the overall status of JWST software development and integration efforts is very positive with minimal development remaining, and the group has significant confidence that the mission software will support the mission objectives. Additionally, the Space Telescope Science Institute has made considerable progress in preparing JWST’s ground systems, such as preparing the Mission Operations Center and conducting the Mission Operations Review in April 2017. 
The project has made notable progress in reducing and closing numerous tracked risks. In December 2016, we reported that the project maintained a risk list with 73 items. Currently, the list of tracked risks has 47 items to be closed or mitigated to acceptable levels. The completion of the OTIS cryovacuum test enabled the project to recently close several risks. For example, the project previously tracked a risk that the instrument module and telescope element might have to be de-integrated if OTIS testing revealed workmanship issues. With the successful completion of the testing, this risk was closed in fall 2017. The project also obtained a waiver from the Office of Safety and Mission Assurance to NASA’s risk policy for its over 300 single point failures throughout the observatory, the majority of which are related to the sunshield. Project officials reported that the elimination of all single point failures on the JWST Mission is not practical or even feasible, due mainly to the large number of deployments, and that all mitigations practical to address and minimize them have been implemented.

JWST Delayed Launch Due to Integration Challenges on the Spacecraft Element, Avoiding a Potential Launch Site Conflict

In the summer of 2017, the JWST project conducted a schedule risk assessment that showed that the October 2018 launch readiness date was unachievable, primarily due to the various components of spacecraft element integration taking longer to complete than planned. The project performed the schedule risk assessment in order to provide ESA a desired launch window about one year prior to the expected launch date. The assessment took into account remaining work to be completed, lessons learned from environmental testing, and the current performance rates of integrating the spacecraft element. As a result of the assessment, in September 2017 NASA requested from ESA a launch window of March 2019 to June 2019.
The requested launch window represents a 5- to 8-month delay from the previously planned October 2018 launch readiness date. The schedule risk assessment incorporated input from Northrop Grumman on expected durations for remaining spacecraft and observatory level integration activities. However, the project’s analysis determined that the expected durations provided by Northrop Grumman were overly optimistic. As a result, the project incorporated uncertainty factors into the analysis, which added 2 to 3 months to the schedule. The project also estimated an additional 5 to 8 weeks would be needed because of emerging technical issues not specifically accounted for by the schedule risk assessment. Additionally, the project updated the expected time required at the launch site for processing activities and added about 1.25 months. According to project officials, the confidence in the launch window identified is in line with that of a typical NASA JCL at 70 percent. NASA’s independent Standing Review Board reviewed the assessment and found that it was a thorough approach for reviewing the schedule, risks, and uncertainties and that the new proposed launch readiness range is technically feasible with reasonable risk. NASA’s request for a March to June 2019 launch window was driven by its own schedule and technical issues, but it also avoided potential conflicts with other mission launches. Regardless of JWST’s launch readiness, and prior to undertaking the schedule risk assessment, the project learned in November 2016 of potential scheduling conflicts at the launch site in French Guiana. After numerous delays, BepiColombo, a joint ESA/Japan Aerospace Exploration Agency mission to Mercury, is currently forecasted to have an October 2018 launch readiness date. According to program officials, that mission could have taken precedence over JWST given that planetary missions generally have more limited launch windows.
Additionally, Arianespace, a commercial company, currently has a commercial launch scheduled for the December 2018 timeframe.

JWST Project Consumed All of Its Planned Schedule Reserve to Address Technical Challenges

While much progress has been made since we last reported in December 2016, the project and Northrop Grumman consumed the remaining 6 months of schedule reserves established at the 2011 replan to address technical challenges that arose during the OTIS and spacecraft element integration and test work, as well as additional challenges identified by the schedule risk assessment. Specifically:

- In February 2017, a vibration anomaly during OTIS vibration testing at Goddard Space Flight Center, occurring in parallel with spacecraft and sunshield issues, consumed 1.25 months and delayed the start of cryovacuum testing, the final event in the OTIS integration and test phase, by several weeks.

- In April 2017, spacecraft and sunshield issues consumed an additional 1.25 months. Specifically, a contractor technician applied too much voltage and irreparably damaged the spacecraft’s pressure transducers, components of the propulsion system, which help monitor spacecraft fuel levels. The transducers had to be replaced and reattached in a complicated welding process. At the same time, Northrop Grumman also addressed several challenges with integrating sunshield hardware such as the mid-boom assemblies and membrane tensioning system, which help deploy the sunshield and maintain its correct shape.

- Finally, in September 2017, the remaining 3.5 months of previously planned schedule reserves were consumed as a result of the contractor having underestimated the time required to complete integration and test work on the spacecraft and other risks identified in the schedule risk analysis. Specifically, execution of spacecraft integration and test tasks, due to the complexity of the work and cautious handling given the sensitivity of flight hardware, was slower than planned.
For example, the installation of numerous membrane retention devices slowed the pace of the work. According to Northrop Grumman officials, the sunshield is elevated off the ground for installation work and the size and quantity of the work lifts necessary for the technicians to access the sunshield requires more maneuvering and prevents the technicians from working on the forward and aft sunshield assemblies simultaneously. Taking into account the consumption of planned reserves and the establishment of the revised launch window, the project expected to have up to 4 months of schedule reserve extending to the end of the launch window range, or June 2019. However, shortly after the project notified ESA of the launch delay in September 2017, the project received updated information from Northrop Grumman and determined that up to 3 months of schedule reserve would be needed based upon lessons learned from Northrop Grumman’s initial sunshield folding operation and implications for remaining deployment test activities. After incorporating some schedule efficiencies, the project now has 1.5 months of schedule reserve remaining. This level of schedule reserve is below the standards established by Goddard Space Flight Center for a project at this stage of development. The project is working with Northrop Grumman to determine if any further schedule reserve can be regained by incorporating schedule efficiencies and adjusting integration and test plans. As shown in the figure below, Northrop Grumman’s work on the spacecraft element remains on the project’s critical path—the schedule with the least amount of reserve, which determines the overall schedule reserve for the project—now with an estimated 1.5 months of schedule reserve to the end of the launch window in June 2019. 
Ongoing Spacecraft Integration and Test Issues, Challenging Remaining Work, and Slow Contractor Performance Make Additional Launch Delays Likely

Given several ongoing technical issues, the work remaining to test the spacecraft element and complete integration of the telescope and spacecraft, and continuing slower-than-planned work at Northrop Grumman, we believe that the rescheduled launch window is likely unachievable. For example, in May 2017, Northrop Grumman found that 8 of 16 valves in the spacecraft propulsion system’s thruster modules were leaking beyond allowable levels. The project and Northrop Grumman were unable to definitively isolate the root cause of the leaks; however, Northrop Grumman determined that the most likely cause is a handling error at their facility. Specifically, the material around the valves deteriorated due to a solvent used for cleaning. All of the thruster modules were returned to the vendor for investigation and refurbishment. According to project officials, the refurbished thruster modules were returned to the contractor facility in late 2017 for reattachment. However, reattaching the repaired modules is a challenge because of the close proximity of electronics and other concerns. The project included about one month in the schedule risk assessment to account for the time spent investigating and determining the path forward for the thruster issue; however, the full schedule impact of reattaching the thruster modules to the spacecraft element had not yet been determined and was not incorporated into the analysis. In November 2017, the project and Northrop Grumman chose a reattachment method that project officials stated is expected to require less time to complete and pose fewer risks to the hardware than a traditional welding approach. In October 2017, when conducting folding and deployment exercises on the sunshield, Northrop Grumman discovered several tears in the sunshield membrane layers.
According to program officials, a workmanship error contributed to the tears. The tears can be repaired; however, some schedule reserve may be required to repair them. Additionally, during the deployment exercise, one of the sunshield’s six membrane tensioning systems experienced a snag. Northrop Grumman is planning to implement a slight design modification to prevent the issue from occurring again. Northrop Grumman officials have not yet determined if the schedule will be affected as a result.

Beyond mitigating the specific spacecraft thruster module valve leak and sunshield issues, the project faces significant work ahead, and numerous risks remain to be mitigated to acceptable levels. For example, the project and Northrop Grumman must:

- Resolve lingering technical issues from the OTIS cryovacuum test and prepare and ship OTIS to the Northrop Grumman facility in California for integration with the spacecraft.
- Complete integration of spacecraft hardware, and conduct spacecraft element environmental tests and remaining deployments of the spacecraft and sunshield—activities which, to date, have taken considerably longer than planned.
- Integrate the completed OTIS element with the spacecraft element and test the full observatory in the fifth and final integration phase, which includes another set of challenging environmental tests.
- Mitigate approximately 47 remaining tracked hardware and software risks to acceptable levels and continue to address the project’s 300+ potential single point failures to the extent possible.
- Prepare and ship the observatory to the launch site and complete final launch site processing, including installation of critical release mechanisms.

Project officials have expressed concern with Northrop Grumman’s ability to prevent further schedule erosion as the project moves through remaining integration and test work.
With the project’s current low level of schedule reserves, even a relatively minor disruption could cause the project to miss its revised launch window. According to program officials, the contractor has increased its daily work shifts from two to three and is now working 24 hours per day on spacecraft integration, which further limits schedule flexibility. In early 2018, the project’s independent Standing Review Board will review the latest schedule inputs based on updated knowledge about spacecraft integration and test activity durations. For example, according to project officials, by early 2018, the contractor is expected to have completed the second of four planned fold and stow sequences on the sunshield, which will provide more insight into whether the current planned schedule is realistic. The Standing Review Board will also examine the project’s plans for schedule efficiencies and potential integration and test adjustments to determine if the June 2019 launch window can be met. Project officials stated that following this review, NASA senior management will be briefed on the Standing Review Board’s findings and will then formally identify a new launch readiness window. Our prior work has shown that integration and testing is the phase in which problems are most likely to be found and schedules tend to slip. For a uniquely complex project such as JWST, this risk is magnified. Now that the project is well into its complex integration and test efforts, events are sequential in nature and there are fewer opportunities to mitigate issues in parallel. Since the replan, the project has used about 2.5 months of schedule reserve per year to address technical issues, but, as discussed above, it now has only approximately 1.5 months of schedule reserve to last until the end of the revised launch window in June 2019. 
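The concern can be made concrete with back-of-the-envelope arithmetic. The burn rate (about 2.5 months of reserve used per year since the replan) and the 1.5 months of remaining reserve come from the figures above; the assumption that roughly 18 months remained between late 2017 and the end of the June 2019 launch window is illustrative.

```python
# Back-of-the-envelope sketch of the schedule-reserve arithmetic.
# Burn rate and remaining reserve are from the report; the remaining
# duration to the end of the launch window is an assumed figure.

historical_burn_rate = 2.5 / 12   # months of reserve used per calendar month
remaining_reserve = 1.5           # months of reserve left
months_to_window_end = 18         # assumed: late 2017 through June 2019

expected_use = historical_burn_rate * months_to_window_end
shortfall = expected_use - remaining_reserve

print(round(expected_use, 2))  # 3.75 months of reserve likely needed
print(round(shortfall, 2))     # 2.25 months more than is available
```

At the historical rate, the project would need roughly 3.75 months of reserve to reach June 2019, well above the 1.5 months on hand, which is the quantitative basis for expecting further delay.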
Thus, past experience with technical issues in earlier integration phases suggests that this amount of reserve will not be adequate for the challenging work ahead, and further delays to launch readiness are likely. We will continue to monitor the project’s progress in meeting its revised schedule as more information becomes available during this critical integration and test phase.

Higher Contractor Workforce Levels to Address Continuing Technical Challenges Place JWST at Risk of Exceeding Cost Commitments

Northrop Grumman continued to maintain higher than planned workforce levels in the past year and, as a result, NASA will have limited cost reserves to address future challenges. Northrop Grumman’s ability to control costs and decrease its workforce is central to JWST’s capacity to meet its long-term cost commitments. For the past 44 months, Northrop Grumman’s actual workforce has exceeded its projections, and the company is not expected to significantly reduce its workforce until the spring of 2019, when NASA plans to ship the completed observatory to the launch site. Northrop Grumman had planned to reduce its workforce in fiscal years 2016 and 2017 as work was planned to be completed, but has needed to maintain higher workforce levels due to technical challenges and the work taking longer than expected. Figure 6 illustrates the difference between the workforce levels that Northrop Grumman projected for fiscal years 2016 and 2017, and its actual workforce levels during that period. As shown in figure 6, Northrop Grumman has slightly reduced its workforce since the beginning of fiscal year 2016. However, staffing levels remain higher than projected as a result of previously noted technical challenges including spacecraft and sunshield integration and test challenges, to keep specialized engineers available when needed during final assembly, and to complete required testing activities.
Projections made at the beginning of fiscal year 2017—when the expected launch readiness date was October 2018—expected workforce levels to begin at 472 full-time equivalent staff and drop to 109 at the end of the fiscal year. However, technical challenges and delays in completing scheduled work did not allow for the planned workforce reduction and Northrop Grumman reported 496 full-time equivalent staff in September 2017, or 387 more than planned. According to JWST project officials and similar to previous years, Northrop Grumman’s priority for fiscal year 2018 is to maintain schedule in order to ensure that the new launch window set from March to June 2019 can be met. As a result, Northrop Grumman’s contractor workforce levels are expected to continue to be elevated through JWST’s final integration and test phase in fiscal year 2019 where the spacecraft and OTIS will be integrated before shipment to the launch site. Northrop Grumman submitted a cost overrun proposal to NASA in July 2016, primarily to address costs associated with sustaining its workforce at higher levels than planned in fiscal year 2017. An overrun proposal seeks to increase the value of a cost-reimbursement contract when the total estimated cost is less than the contract’s estimated cost to complete the performance of the contract. In addition to higher workforce levels, the overrun proposal replenished contractor management reserves that had been used to address technical issues, and addressed projected growth in the contractor’s cost to complete work. NASA and the contractor completed negotiations in September 2017 and executed a contract modification that added $179.9 million to the value of the contract to cover Northrop Grumman’s cost overrun and additional negotiated items, such as particle dampers. This amount was intended to cover the cost of the remaining work through the expected launch date of October 2018. 
However, by September 2017 Northrop Grumman had no remaining schedule reserves and a limited amount of cost reserves with which to address future costs. Furthermore, the project determined—as discussed above—that the October 2018 launch window was not feasible and established a new launch window. According to JWST project officials, the project expects to issue a request for proposal in early 2018 to cover the costs for the remaining work through the new launch window. The project plans to use a significant portion of fiscal years 2018 and 2019 program cost reserves to address Northrop Grumman costs and unanticipated technical challenges. According to JWST program officials, if the contractor does not improve its schedule efficiency, the remaining reserves will be used to offset increased costs resulting from taking longer to complete the work.

For the sixth consecutive year, the JWST project managed spending within its allocated budget in fiscal year 2017. However, JWST is still resolving technical challenges and planned work continues to take longer to complete. Prudent management of its resources allowed the project to carry into fiscal year 2018 about a third more carryover funding than it had projected at the beginning of the fiscal year. Program officials said that, assuming the remaining integration and tests proceed as planned and no long delays are encountered, existing program resources accommodate the new launch window of March to June 2019. The project continues to identify funding options in the event of a delay beyond the end of the launch window. Under the 2011 replan, Congress placed an $8 billion cap on formulation and development costs, but any long delays beyond the new launch window—which, as noted above, are likely—place the project at risk of exceeding this cap.

Agency Comments and Our Evaluation

We requested comments from NASA, but agency officials determined that no formal comments were necessary.
NASA provided technical comments, which were incorporated as appropriate. We are sending copies of the report to NASA’s Administrator and interested congressional committees. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Elements and Major Subsystems of the James Webb Space Telescope (JWST) Observatory

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Richard Cederholm (Assistant Director), Karen Richey (Assistant Director), Jay Tallon (Assistant Director), Brian Bothwell, Laura Greifner, Daniel Kuhn, Katherine Lenane, Jose Ramos, Carrie Rogers, Sylvia Schatz, and Roxanna Sun made key contributions to this report.
Why GAO Did This Study

JWST, a large, deployable telescope intended to be the successor to the Hubble Space Telescope, is one of NASA's most complex and expensive projects, at an anticipated cost of $8.8 billion. Congress set an $8 billion JWST development cost cap in 2011, and the remaining $837 million is for operations costs. JWST is intended to revolutionize our understanding of star and planet formation and advance the search for the origins of our universe. With significant integration and testing planned for the period remaining until launch, the JWST project will still need to address many challenges before launch. Conference Report No. 112-284, accompanying the Consolidated and Further Continuing Appropriations Act, 2012, included a provision for GAO to assess the project annually and report on its progress. This is the sixth such report. This report assesses the extent to which JWST is (1) meeting its schedule commitments, and (2) able to meet its cost commitments. GAO reviewed monthly JWST reports, reviewed relevant policies, conducted independent analysis of NASA and contractor data, and interviewed NASA and contractor officials.

What GAO Found

In 2017, the National Aeronautics and Space Administration's (NASA) James Webb Space Telescope (JWST) project delayed its launch readiness date by at least 5 months, and further delays are likely. The delay—from October 2018 to a launch window between March and June 2019—was primarily caused by components of JWST's spacecraft taking longer to integrate than planned. JWST made considerable progress toward the completion of integration and test activities in the past year. However, the project used all remaining schedule reserve—or extra time set aside in the schedule in the event of delays or unforeseen risks—to address technical issues, including an anomaly on the telescope found during vibration testing.
Extending the launch window provided the project up to 4 months of schedule reserve. However, shortly after requesting the new launch window in September 2017, the project determined that several months of schedule reserve would be needed to address lessons learned from the initial folding and deployment of the observatory's sunshield (see image). Given remaining integration and test work ahead—the phase in development where problems are most likely to be found and schedules tend to slip—coupled with only 1.5 months of schedule reserves remaining to the end of the launch window, additional launch delays are likely. The project's Standing Review Board will conduct an independent review of JWST's schedule status in early 2018 to determine if the June 2019 launch window can be met.

JWST will also have limited cost reserves to address future challenges, such as further launch delays, and is at risk of breaching its $8 billion cost cap for formulation and development set by Congress in 2011. For several years, the prime contractor has overestimated workforce reductions, and technical challenges have prevented these planned reductions, necessitating the use of cost reserves. Program officials said that existing program resources will accommodate the new launch window—provided remaining integration and testing proceeds as planned without any long delays. However, JWST is still resolving technical challenges and work continues to take longer than planned to complete. As a result, the project is at risk of exceeding its $8 billion formulation and development cost cap.

What GAO Recommends

GAO has made recommendations on the project in previous reports. NASA agreed with and took action on many of GAO's prior recommendations, but not on others—some of which may have provided insight into the current schedule delays. For example, in December 2012, GAO recommended that the JWST project perform an updated integrated cost/schedule risk analysis.
The Number of SFSP Meals Served Generally Increased from 2007 through 2016, but Estimates of Children Participating Were Unreliable

The total number of SFSP meals served nationwide during the summer—one indicator of program participation—increased from 113 million meals in fiscal year 2007 to 149 million meals in fiscal year 2016, or by 32 percent. Although almost half of the total increase in meals served in the summer months was due to increases in lunches, when comparing across each of the meal types, supper and breakfast had the largest percentage increases over the 10-year period, 50 and 48 percent, respectively (see table 1). The increase in SFSP meals over this time period was generally consistent with increases in the number of meals served in the National School Lunch Program (NSLP), the largest child nutrition assistance program, during this period. Although states reported the actual number of SFSP meals served to FNS for reimbursement purposes, they estimated the number of children participating in SFSP, and these participation estimates have been calculated inconsistently, impairing FNS's ability to inform program implementation and facilitate strategic planning and outreach to areas with low participation. Specifically, state agencies calculated a statewide estimate of children's participation in the SFSP, referred to as average daily attendance (ADA), using sponsor-reported information on the number of meals served and days of operation in July of each year. However, according to our review of states' survey responses and FNS documents, states' methods for calculating ADA have differed from state to state and from year to year. For example, although FNS directed states to include the number of meals served in each site's primary meal service—which may or may not be lunch—some states calculated ADA using only meals served at lunch.
In addition, five states reported in our survey that the method they used to calculate ADA in fiscal year 2016 differed from the one they used previously. While FNS clarified its instructions in May 2017 to help improve the consistency of states’ ADA calculations moving forward, ADA, even if consistently calculated, remained an unreliable estimate of children’s daily participation in SFSP for at least two reasons. First, ADA did not account for existing variation in the number of days that each site serves meals to children. Specifically, because FNS’s instructions indicated that sites’ ADAs were to be combined to provide a statewide ADA estimate, differences in the number of days of meal service at each site were disregarded. As a result, ADA did not reflect the average number of children served SFSP meals daily throughout the month. Second, ADA was an unreliable estimate of children’s participation in SFSP because it did not account for state variation in the month with the greatest number of SFSP meals served. According to FNS officials, the agency instructed states to calculate ADA for July because officials identified this as the month with the largest number of meals served nationwide. However, according to our analysis of nationwide FNS data, in summer 2016, 26 states served more SFSP meals in June or August than in July. Although FNS had taken some steps to identify other data that states collect on the SFSP, at the time of our May 2018 report, FNS had not yet used this information to help improve its estimate of children’s participation in the program. In 2015, FNS published a Request for Information, asking whether states or sponsors collected any SFSP data that were not reported to FNS, and received responses from 15 states. The responses suggested some states collected additional data, such as site-level data, that may allow for an improved estimate of children’s SFSP participation, potentially addressing the issues identified in our analysis. 
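The first reliability problem described above—summing per-site ADAs while disregarding how many days each site operated—can be illustrated with a hypothetical two-site example. All figures below are invented for illustration and are not from FNS data.

```python
# Hypothetical illustration: why summing per-site meals-per-day averages
# can overstate average daily participation when sites operate for
# different numbers of days in the month.

sites = [
    {"meals_served": 2000, "days_of_operation": 20},  # open all month
    {"meals_served": 100, "days_of_operation": 2},    # open only 2 days
]

# Statewide ADA in the style FNS's instructions described: sum each
# site's own meals-per-operating-day average
statewide_ada = sum(s["meals_served"] / s["days_of_operation"] for s in sites)

# Average meals actually served per day over a (hypothetical) 20-day
# service month, which weights each site by its operating days
total_meals = sum(s["meals_served"] for s in sites)
avg_daily_served = total_meals / 20

print(statewide_ada)     # 150.0 — counts the 2-day site as if open daily
print(avg_daily_served)  # 105.0 — average meals served per day
```

The gap between the two figures (150 vs. 105) is the distortion GAO identified: a site open only a few days contributes its full per-day average to the statewide ADA, so ADA does not reflect the average number of children served daily throughout the month.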
FNS also followed up with several of these states in 2016 and 2017 to explore the feasibility of collecting additional data and improving estimates of children's SFSP participation. FNS stated in a May 2017 memo to states that it is critical that the agency's means of estimating children's participation in the SFSP is as accurate as possible because it helps inform program implementation at the national level and facilitates strategic planning and outreach to areas with low participation. Yet, at the time of our report, FNS had not taken further action to improve the estimate. In our May 2018 report, we concluded that FNS's limited understanding of children's participation in the SFSP impaired its ability to both inform program implementation and facilitate strategic planning and outreach to areas with low participation. To improve FNS's estimate of children's participation in the SFSP, we recommended that FNS focus on addressing, at a minimum, data reliability issues caused by variations in the number of operating days of meal sites and in the months in which states see the greatest number of meals served. FNS generally agreed with this recommendation.

Other Federal and Nonfederal Programs Helped Feed Low-Income Children over the Summer to Some Extent

Other federal and nonfederal programs that operate solely in the summer, as well as those operating year-round, helped feed low-income children in the summer months. For example, in 2016, FNS data indicated about 26 million meals were served through the NSLP's Seamless Summer Option, a separate federal program that streamlines administrative requirements for school meal providers serving summer meals. Some children also received summer meals through nonfederal programs operated by entities such as faith-based organizations and foodbanks, though the reach of these efforts was limited, according to our state survey and interviews with providers and national organizations at the time of our report.
For example, of the 27 states that reported in our survey awareness of the geographic coverage of these nonfederal programs, 11 states indicated that they operated in some portions of the state—the most common state response.

States and SFSP Providers Faced Challenges with Meal Sites, Participation, and Program Administration, and FNS Actions Had Addressed Some, but Not All Areas

States and SFSP providers reported challenges with issues related to meal site availability, children's participation, and program administration, though federal, state, and local entities had taken steps to improve these areas. For example, a lack of available transportation, low population density, and limited meal sites posed challenges for SFSP implementation in rural areas, according to states we surveyed, selected national organizations, and state and local officials in the three states we visited. In response, state and local entities took steps, such as transporting meals to children by bus, to address these issues—efforts that FNS supported through information sharing and grants. States and SFSP providers also reported challenges with meal site safety, and FNS's efforts to address this area were limited. Seventeen states reported in our survey that ensuring summer meal sites are in safe locations was moderately to very challenging. Some states and sponsors took steps to help address this issue, and FNS also used its available authorities to grant some states and sponsors flexibility with respect to the requirement that children consume summer meals on site, such as when safety at the site is a concern. However, our review of FNS documentation showed FNS had not clearly communicated to all states and sponsors the circumstances it considers when deciding whether to grant this flexibility.
These circumstances—described in letters the agency sent to requesting states—generally included verification that violent crime activities occurred within both a 6-block radius of the meal site and 72 hours prior to the meal service. Although FNS officials explained that they reviewed state and sponsor requests for flexibility due to safety concerns on a case-by-case basis, they also acknowledged that the set of circumstances they used to approve state and sponsor requests for flexibility, which we identified in their letters to states, had been used repeatedly. Further, states and sponsors reported challenges obtaining the specific data needed for approval of a site for this type of flexibility, including inconsistent availability of timely data, which hampered some providers’ efforts to ensure safe delivery of meals. We concluded that unless FNS shared information with all states and sponsors on the circumstances it considered when deciding whether to grant flexibility with respect to the requirement that children consume summer meals on site, states and sponsors would likely continue to be challenged to use this flexibility, hindering its usefulness in ensuring safe summer meal delivery to children. We therefore recommended that FNS communicate to all SFSP stakeholders the circumstances it considers in approving requests for flexibility with respect to the requirement that children consume SFSP meals on-site in areas that have experienced crime and violence, taking into account the feasibility of accessing data needed for approval, to ensure safe delivery of meals to children. FNS generally agreed with this recommendation. We also found that while FNS had issued reports to Congress evaluating some of its demonstration projects, as required under its statutory authorities, the agency had not issued any such reports to Congress specifically on the use of flexibilities with respect to the on-site requirement in areas where safety was a concern. 
As previously discussed, the agency is required to annually submit certain reports to Congress regarding the use of waivers and evaluations of projects carried out under its demonstration authority. FNS officials told us that they had not evaluated or reported on these flexibilities, in part, because they had limited information on their outcomes. We concluded that without understanding the impact of its use of these flexibilities, neither FNS nor Congress knew whether these flexibilities were helping provide meals to children—the goal of the program. Accordingly, we recommended that FNS evaluate and annually report to Congress, as required by statute, on its use of waivers and demonstration projects to grant states and sponsors flexibility with respect to the requirement that children consume SFSP meals on-site in areas experiencing crime or violence, to improve understanding of the use and impact of granting these flexibilities on meeting program goals. FNS generally agreed with this recommendation. Although FNS had established program and policy simplifications to help lessen the administrative burden on sponsors participating in multiple child nutrition programs, challenges in this area persisted, indicating that information had not reached all relevant state agencies. According to officials we spoke with from a national organization involved in summer meals, management of each child nutrition program and the processes related to applications, funding, and oversight were fragmented in many states. For example, in one of the states we visited, a sponsor that provided school meals during the school year told us they had to fill out 60 additional pages of paperwork to provide summer meals, which they described as a significant burden.
FNS officials told us that some of the duplicative requirements might have been a function of differences in statute, and although FNS provided guidance to states on simplified procedures for sponsors participating in more than one child nutrition program, some states might have chosen not to implement them. We concluded that without further efforts from FNS to disseminate information on current options for streamlining administrative requirements across multiple child nutrition programs, overlapping and duplicative administrative requirements may limit children's access to meals by discouraging sponsor participation in child nutrition programs. We recommended that FNS disseminate information about the existing streamlining options, and FNS generally agreed with this recommendation. Chairman Rokita, Ranking Member Polis, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Kathryn A. Larin at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Rachel Frisk, Melissa Jaynes, and Claudine Pauselli. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes information contained in GAO's May 2018 report entitled Summer Meals: Actions Needed to Improve Participation Estimates and Address Program Challenges, GAO-18-369. It addresses (1) what is known about SFSP participation, (2) other programs that help feed low-income children over the summer, and (3) challenges in providing summer meals to children and the extent to which USDA provides assistance to address these challenges. For its May 2018 report, GAO reviewed relevant federal laws, regulations, and guidance; analyzed USDA's SFSP data for fiscal years 2007 through 2016; and surveyed state agencies responsible for administering the SFSP in 50 states and the District of Columbia. GAO also visited a nongeneralizable group of 3 states and 30 meal sites, selected based on Census data on child poverty rates and urban and rural locations, and analyzed meal site data from these 3 states. In addition, GAO interviewed USDA, state, and national organization officials, as well as SFSP providers, including sponsors and site operators.

What GAO Found

Nationwide, the total number of meals served to children in low-income areas through the Summer Food Service Program (SFSP) increased from 113 to 149 million (about 32 percent) from fiscal year 2007 through 2016, according to GAO's May 2018 report. GAO noted that the U.S. Department of Agriculture (USDA) directed states to use the number of meals served, along with other data, to estimate the number of children participating in the SFSP. However, GAO found that participation estimates had been calculated inconsistently from state to state and year to year. In 2017, USDA took steps to improve the consistency of participation estimates, noting they are critical for informing program implementation and strategic planning.
However, GAO determined that the method USDA directed states to use would continue to provide unreliable estimates of participation, hindering USDA's ability to use them for these purposes. Other federal and nonfederal programs helped feed low-income children over the summer to some extent, according to states GAO surveyed and SFSP providers and others GAO interviewed for its May 2018 report. For example, GAO found that in July 2016, about 26 million meals were served through a separate federal program that allowed school meal providers to serve summer meals, according to USDA data. Some children also received summer meals through nonfederal programs operated by faith-based organizations and foodbanks, though GAO's state survey and interviews with SFSP meal providers and national organizations indicated the reach of such efforts was limited. In GAO's May 2018 report, states and SFSP meal providers reported challenges with issues related to meal sites, participation, and program administration, though USDA, state, and local officials had taken some steps to address these issues. Seventeen states in GAO's survey and several providers in the states GAO visited reported a challenge with ensuring meal sites were in safe locations. To address this issue, USDA granted some states and providers flexibility from the requirement that children consume meals on-site. However, GAO found that USDA had not broadly communicated the circumstances it considered when granting this flexibility or reported to Congress, as required, on the use of flexibilities with respect to the on-site requirement in areas where safety was a concern. As a result, neither USDA nor Congress knew whether these flexibilities were helping provide meals to children and meeting program goals.
Further, officials from national and regional organizations GAO interviewed, as well as providers GAO visited, reported challenges related to the administrative burden associated with participating in multiple child nutrition programs. Although USDA had established program and policy simplifications to help lessen related burdens, the persistence of challenges in this area suggested that information had not reached all relevant state agencies, potentially limiting children's access to meals by discouraging provider participation.

What GAO Recommends

In its May 2018 report, GAO made four recommendations, including that USDA improve estimates of children's participation in SFSP, communicate the circumstances it considers when granting flexibilities to ensure safe meal delivery, evaluate and annually report to Congress on its use of waivers and demonstration projects when granting these flexibilities, and disseminate information about existing flexibilities available to streamline administrative requirements for providers participating in multiple child nutrition programs. USDA generally agreed with GAO's recommendations.
Background

Forty-five states and the District of Columbia levy sales taxes on the sale of goods and services. Of these, thirty-seven states also have local sales taxes at the county or municipal level. Five states do not have statewide sales taxes: Alaska, Delaware, Montana, New Hampshire, and Oregon. Tax policy specialists have cited figures as high as 12,000 and as low as 10,000 for the number of tax jurisdictions in the United States—each with potentially different tax rates, different rules governing tax-exempt goods and services, different product category definitions, and different standards for determining whether an out-of-state seller has a substantial presence (referred to as nexus) in a state. On average, states receive about one-third of their total tax collections from general sales taxes. However, reliance on sales taxes varies considerably across states. Five states that do not have a broad-based individual income tax—Florida, Nevada, South Dakota, Tennessee, and Texas—collect more than half their tax revenue from general sales taxes. As of January 1, 2017, most state sales tax rates were about 6 percent, although analysis prepared by the Tax Foundation shows that five states—Alabama, Arkansas, Louisiana, Tennessee, and Washington—had average combined state and local tax rates close to or above 9 percent. Generally, businesses are required to collect sales taxes on goods and services sold to in-state consumers at the time of the purchase, and remit those taxes to the state, and sometimes local government, revenue office. The growth of e-commerce has greatly increased the likelihood of businesses selling to out-of-state customers. In 1992, the U.S. Supreme Court ruled in Quill v. North Dakota that a state can only require a business to collect and remit sales tax if the business has substantial presence, referred to as nexus, in that state. However, the decision stated that Congress could pass legislation to overrule the Quill decision.
Legislation has been proposed to expand states' tax collection authority to all remote sales, but no bill has received enough support to pass both the Senate and the House of Representatives. Some of the legislation has included provisions for small seller exemptions, free software, liability protection, and transition periods. In general, under present law in states with sales taxes, if the seller does not have nexus in a state, and is therefore not required to collect tax, then the consumer is required to pay a use tax in the same amount. Although functionally similar to a sales tax, the use tax is a tax levied on the consumer for the privilege of use, ownership, or possession of taxable goods and services. However, consumer compliance rates for use tax remittance are estimated to be very low.

State Activity

With the growth in e-commerce, states have increased their enforcement activities to collect sales tax from residents who make purchases from out-of-state businesses. A few states have passed laws or changed regulations that directly challenge or test the limits of the 1992 Quill v. North Dakota decision—most notably, Alabama, Colorado, and South Dakota—to increase tax collections on remote sales. In reviewing testimony and tax industry publications, we found that states have also sought additional revenue through more indirect approaches, such as asserting jurisdiction on the basis of nexus to include "affiliate nexus" and "click-through nexus." Colorado, for instance, enacted a law requiring retailers who do not collect taxes on sales to Colorado customers to notify those customers of their use tax obligations and send an annual report on customers' purchases to the state revenue agency. The revenue agency could then use this information to identify which purchasers have a use tax obligation. South Dakota took a different approach aimed at overturning the Quill decision.
In 2016, the legislature passed a law requiring out-of-state businesses meeting certain criteria to collect and remit sales tax on purchases made by South Dakota residents. The state supreme court ruled on September 13, 2017, that the law violated Quill. On October 2, 2017, South Dakota filed a petition for a writ of certiorari with the U.S. Supreme Court. Alabama promulgated a regulation in September 2015 requiring out-of-state retailers who made $250,000 or more in sales to Alabama residents annually, or who conducted one or more statutorily defined activities, to collect and remit sales tax. A suit was filed with the Alabama Tax Tribunal, but no decision has been made. New York took a different route, passing a "click-through" nexus law in 2008. Some out-of-state retailers enter into agreements with local online retailers to advertise the local retailer's merchandise on the out-of-state retailer's website. Because the agreement was with an in-state vendor, the law defined that to be a sufficient nexus to impose sales tax on the out-of-state vendor. Several companies unsuccessfully challenged the statute. A few state governments have taken action to increase tax collection from e-marketplace sellers. As of October 2017, two states (Minnesota and Washington) had passed laws imposing new requirements on e-marketplace companies to collect sales taxes on behalf of the sellers using their e-marketplace platforms. Some states have asserted that the warehousing of goods and fulfillment of orders from within a state is enough to create nexus, and therefore a requirement to collect taxes on sales to customers in that state. To enforce compliance, we found that at least three state revenue agencies have been seeking sales, shipping, or location data about goods sold through e-marketplaces.
Taxes Are Collected on Most Remote Sales, but States Could Gain Additional Revenue with the Authority to Require All Businesses to Collect Taxes

State and Local Governments Are Able to Collect Taxes on More Than Half of Sales

We estimate that state and local governments can, under current law, require remote sellers to collect about 75 to 80 percent of the taxes that would be owed if all remote sellers were required to collect tax on all remote sales at current rates. We found that the extent to which state and local governments can, under current law, require businesses to collect taxes on remote sales varies with the type of remote seller (as shown in table 1). For business-to-consumer (B2C) remote sales, we found that the percentage of taxes already being collected by sellers (which we call the "seller collection rate") was generally higher for Internet retailers than for other types of remote sellers like catalog retailers or e-marketplaces. Based on our analysis of nearly 1,000 Internet retail companies, we estimate that about 80 percent of the potential revenue from requiring all Internet retailers to collect is already collectible. Many of the largest Internet sellers are established retail chains or consumer brands with a physical presence, such as retail stores, in all, or nearly all, of the 45 states (plus the District of Columbia) that have a statewide sales tax. As noted earlier, under current law, if a remote seller has a substantial presence (referred to as nexus) in a state, the seller is required to collect taxes on remote sales into that state. In addition, even without being required to, some large Internet retailers have entered into agreements with states to collect applicable taxes on all their Internet sales, regardless of physical presence. The rise of e-marketplaces, such as eBay, Etsy, and Amazon Marketplace, has complicated nexus determinations.
At these marketplaces, sellers can access large customer bases and utilize the marketing and distribution services of the marketplace platform, often for a fee. Certain states can rely on inventory stored within their borders as sufficient nexus to impose taxes. This has included sellers using a large marketplace's fulfillment services. As a result, to properly collect and remit taxes, sellers using marketplace fulfillment services need information on where their inventory is stored. While we estimated the seller collection rate to be relatively high for the category of Internet retailers (about 80 percent), we found it to be lower for other types of B2C remote sellers. For example, we estimate that e-marketplace sellers are currently collecting between 14 percent of the taxes on their sales (in our highest potential revenue gain estimate) and 33 percent (in our lowest potential revenue gain estimate). For other types of remote retailers, such as mail-order companies, we estimate that they are currently collecting tax on between 58 percent of their sales (in our highest potential revenue gain estimate) and 64 percent (in our lowest potential revenue gain estimate), as shown in table 1. Although business-to-business (B2B) sales account for a larger share of total e-commerce than B2C sales, potential state and local government revenue gains from taxing all of these sales are smaller because fewer B2B sales are taxable, and seller collection rates are higher (as shown in table 1). We estimate that about half of all wholesale e-commerce purchases involve businesses purchasing raw materials or other intermediate goods that are then manufactured or incorporated into a final product. These purchases of intermediate goods are generally exempt from state and local government taxes because only the final sale to the end consumer would be taxable.
For the remaining taxable B2B purchases, we estimate that the seller collection rates are between 85 percent for those sales in our highest potential revenue gain estimate and 94 percent in our lowest potential revenue gain estimate.

Potential Revenue Gain across All States for 2017 Is about $8 Billion to $13 Billion Based on Our Low and High Scenario Estimates

Based on the seller collection rates we estimated using high and low scenarios to illustrate the effect of underlying uncertainties, we determined that state and local governments could potentially gain from about $8 billion (our low scenario) to about $13 billion (our high scenario) in 2017 if they were given expanded authority to require sales tax collection from all remote sellers. Table 2 presents our range of estimates. Appendix II presents our range of estimates for each of the 45 states plus the District of Columbia that have a statewide sales tax. Our estimates range from more than $1 billion for more populated states like California and Texas to about $20 million for less populated states like Vermont and Wyoming. The average gain is about $200 million. In aggregate, our national estimate of about $8 billion (low scenario) to about $13 billion (high scenario) represents about 2 to 4 percent of total state and local government general sales tax revenues. According to data from the U.S. Census Bureau, state and local governments in 2016 collected about $377 billion in general sales and gross receipts taxes.

Larger States Collect Taxes on a Greater Share of Remote Sales than Smaller States

We found that the extent to which state and local governments can require remote sellers to collect taxes varies by state. Based on analyses of remote sellers' nexus locations, we estimate that some of the largest states (in terms of population) can currently require sellers to collect about 80 to 90 percent of the taxes these states could collect with expanded authority on all remote sales.
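The basic arithmetic the estimates rest on—potential gain equals taxable remote sales, times the tax rate, times the share not yet collected by sellers—can be sketched in a simplified form. GAO's actual model is far more detailed (company-level data, separate B2C/B2B segments, high and low scenarios), so all inputs below are hypothetical, chosen only so the result lands within the $8 billion to $13 billion range reported above.

```python
# Stylized sketch of the potential-revenue-gain arithmetic implied by
# the report. Inputs are hypothetical, NOT GAO's actual model inputs.

def potential_gain(taxable_remote_sales, avg_tax_rate, seller_collection_rate):
    """Tax revenue states could gain if sellers were required to collect
    the remaining uncollected share of taxes on remote sales."""
    return taxable_remote_sales * avg_tax_rate * (1 - seller_collection_rate)

# Hypothetical inputs: $1,000 billion in taxable remote sales, a 6 percent
# average combined rate, and 80 percent of taxes already collected by
# sellers under current law
gain = potential_gain(1_000e9, 0.06, 0.80)
print(f"${gain / 1e9:.0f} billion")  # $12 billion

# Share of the ~$377 billion in 2016 general sales tax collections
# (Census figure cited in the report)
print(f"{gain / 377e9:.1%}")  # 3.2% — within the 2 to 4 percent range
```

Under these invented inputs the sketch reproduces a gain inside the report's range, and shows why the estimate is sensitive to the seller collection rate: each percentage point of uncollected taxes translates directly into potential revenue.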
In contrast, we estimate that some smaller states can only require sellers to collect and remit about 60 to 70 percent of the taxes they could collect on all remote sales. The difference is based on the greater likelihood of Internet retailers having a physical presence in larger states. We researched store locations and sales tax policies for the largest 100 Internet retailers identified by researchers at Internet Retailer. We found that about 85 percent of these Internet retailers had store locations in, or stated on their websites that they were collecting sales taxes for, California and New York. By contrast, about 55 percent of these large Internet retailers had stores or were collecting in less populated states like North Dakota and Wyoming. For smaller Internet retailers with only one location, we also found that a disproportionate share of them were located in larger states. Based on our analysis of more than 400 Internet retailers with only one location, we found that 19 percent were located in California and 12 percent in New York. With Internet retailers and other remote sellers less likely to have a physical presence in less populated states, smaller states are at a disadvantage compared to larger states in their ability to require remote sellers to collect taxes on all sales into their states.

About Half of Potential Revenue Gains Could Come from Tax Collections on E-Marketplace Sales

We estimate that nearly half of potential revenue gains to state and local governments would result from collecting sales taxes on all e-marketplace sales. To date, e-marketplaces have not been obligated to collect state sales taxes on behalf of sellers. Instead, as with all remote sellers, individual sellers who have title to the goods being sold through an e-marketplace are required to collect tax on sales to states in which they have nexus. However, we identified two states that have recently taken action to attribute a collection obligation to the e-marketplace.
Through our review of tax industry publications and interviews with tax practitioners, we learned that some individual sellers have difficulty obtaining information from the e-marketplace companies on where their goods might be stored. While the three large e-marketplaces that we interviewed offer their sellers additional services that help sellers calculate and collect sales taxes, not all sellers take advantage of this service. None of the e-marketplaces that we interviewed could provide us data on the extent to which their sellers currently collect sales tax. Given the lack of available data, we made a conservative estimate of potential revenue gains to states if given the authority to require all e-marketplace sellers to collect taxes on all their sales. If e-marketplace sellers are currently collecting less tax than we assume in our model, the actual potential revenue gain to states would be higher than the estimate we provide in this report.

Compliance with Use Tax on Most Remote Purchases Is Low for Individual Taxpayers, but High for Businesses

Because state and local governments currently do not have the authority to require businesses to collect tax on all remote sales, states generally require taxpayers who were not charged a tax on their purchases from out-of-state vendors to pay a use tax on those purchases. However, with the exception of purchases that are required to be registered with the state, such as vehicles, voluntary compliance is generally thought to be extremely low. For those states that permit taxpayers to report use taxes on their income tax returns, it is estimated that only about 1 to 2 percent of returns include use tax payments. Unlike estimates for individual compliance with use tax, estimates for business compliance are high, ranging from 70 to 90 percent.
Some tax practitioners we interviewed told us that businesses routinely retain records of their taxable and tax-exempt purchases, including remote purchases, and are more likely to be compliant with any use taxes. We identified at least four states that have begun implementing new laws intended to increase consumer use tax compliance. Under these "notice and reporting" laws, remote sellers not collecting taxes on out-of-state sales are required to notify customers that they may be liable for use taxes to their home state. The states are also requiring remote sellers to send their out-of-state customers an annual summary of all purchases for which sales tax was not collected. Data from these annual summaries are shared with state revenue agencies that can use this information for enforcement purposes. Data were not yet available to estimate the revenue effects of these new programs. As we have previously reported, tax compliance is generally much higher when there is third-party reporting of information to the revenue agency. We expect that state collection of third-party information will achieve similar results.

Some Businesses Would Likely Incur Several Types of Costs If Required to Collect Taxes on All Remote Sales

We identified various costs associated with typical steps involved in multistate sales tax collection. We group these costs into three broad categories: software-related costs, audit and assessment compliance costs, and costs associated with research and liability. We found that businesses with limited experience in multistate tax collection and those that lack software systems designed to facilitate multistate tax collection would incur the highest costs under such a scenario.
Representatives from a large national chain and a trade group representing retailers told us that, generally speaking, larger retailers and those that primarily engage in brick-and-mortar retailing believe that expanded state authority would end the unfair advantage that remote retailers gain by not collecting sales tax on their out-of-state sales. Those familiar with multistate collection explained that because the software used for multistate collection is easily scaled up, retailers already using such systems would face few challenges in adapting to this expanded authority. Further, larger retailers that already collect in many states would already have the systems in place for collection under expanded authority. We also identified state and national efforts for simplifying tax collection for businesses. These efforts show potential for mitigating the expected costs, but much depends on the specifics of any legal changes. Our research found that a number of commercial software offerings are available to assist businesses with collecting sales taxes in multiple states. Two people familiar with the use of tax software told us that although many standard business software products generally include some sales tax functionality, these systems do not always fully support businesses selling in multiple tax jurisdictions. As a result, sellers with more widespread collection obligations typically use specialized multistate sales tax software. A representative from a Certified Public Accounting (CPA) firm explained that costs are incurred both when businesses collect sales tax from customers and when they remit the tax to the appropriate state revenue department. In some instances, there are also start-up costs that businesses incur prior to tax collection, as well as audit or assessment costs that occur after tax collection. Figure 1 summarizes these steps and can help inform the discussion of the specific costs.
Businesses Selling Remotely May Incur High Upfront Costs to Establish Software for Multistate Tax Collection

The cost of both collecting and remitting sales tax rises with increased exposure to tax jurisdictions. As the number of jurisdictions for which a business collects taxes increases, the amount of administrative work also increases. Businesses will have to prepare and file a greater number of returns, license more functionality from the collection software they use, and collect tax on a greater number of sales. All of these actions add additional costs to a business's operations. While all sellers would incur these additional costs, costs will be highest for those that do not already use software for multistate tax collection. This is especially true for those selling goods treated differently by different states and those that do not use easily-integrated software. Costs for collection software include start-up costs, licensing fees, administrative costs, and options for premium services, such as preparing or automatically filing sales tax returns. Start-up costs are the costs associated with setting up the software for first use. Tax practitioners told us that software is necessary for multistate collection because of the complexity created by unstandardized requirements across jurisdictions. As we note above, tax policy specialists have cited figures as low as 10,000 and as high as 12,000 for the number of tax jurisdictions in the United States. In addition to differences that exist among the tax codes of the 45 states and the District of Columbia with statewide sales taxes, many local bodies have the power to impose additional sales taxes on purchases within their jurisdictions. Some tax practitioners that we interviewed said that mapping and system integration related to the necessary software for multistate collection are the most costly of the start-up activities.
Mapping requires coding all of a business's product offerings to the taxation categories used by the software. One software provider told us that, generally, these software products do not require businesses to research the legal categorization in each state's laws; however, they do require businesses to categorize products with sufficient precision for the software to assign a tax status based on state laws. For example, apparel is treated differently across states. Pennsylvania exempts clothing, except for formal apparel; items made of real, imitation, or synthetic fur; and athletic apparel. Across the border, New York State exempts clothing sold for less than $110; however, some jurisdictions do not apply these exemptions and charge a local sales tax on these items. The initial product mapping required before using multistate tax software can be labor intensive. As such, we expect that businesses setting up software for the first time and selling goods that states treat differently will have more labor-intensive product-mapping work. Some software providers offer consulting services to assist businesses with mapping their offerings. Software providers, however, treat these services as a premium option, so businesses will generally incur extra costs for using these services. Several people familiar with the use of sales tax software said that errors in mapping products can expose businesses to liability in the form of uncollected taxes. Recognizing the wide variations in sales tax laws, a group of states launched the Streamlined Sales Tax Initiative in 1999. The initiative was designed to standardize these variations and provide software assistance to make it easier for businesses to comply with state and local sales and use tax laws. This initiative sought to shield businesses from liability by directing software providers participating in the effort to complete mapping for businesses and assume liability for errors.
However, more recent changes allow software providers to negotiate these issues directly with their business clients. According to a representative of the Streamlined Sales Tax Governing Board, 24 states have passed legislation to conform to the Streamlined Sales and Use Tax Agreement. These states account for a third of the United States population, but many of the largest states (in terms of population) are not fully participating. Software integration, or establishing a connection between existing business software and the new multistate tax software, will be required for businesses that begin to use multistate tax software. Two software providers we spoke with said that they have already created integration modules for the most common business software packages in use today. One explained that integration with these common business systems is generally the least expensive and may come at no cost to the business. However, businesses using customized software or software that is not in common use may see higher costs to integrate these systems. Some businesses may need to integrate several systems with the collection software. This integration may be required for transactions such as processing sales through different retail channels or ensuring that merchandise returns are removed from existing collections. Businesses will also face additional costs to license the necessary software functionality from the provider. A public accounting firm told us that, in the first year, these ongoing licensing fees are generally lower than the one-time costs associated with mapping and integration. Licensing costs generally are a function of the volume of information requests sent to the tax database maintained by the software provider. In estimating costs to license multistate collection software, online businesses must consider both the number of completed transactions they anticipate as well as the browsing behavior of those using their websites.
A CPA firm we interviewed explained how these software packages work. Whenever a business website calculates a sales tax amount, it does so by sending an information request to a rate and address database maintained by the software provider. Importantly, this process is often an automated function of the “shopping cart” system, which may calculate a sales tax amount whenever a customer changes the goods in the shopping cart, even in the absence of a completed sale. As such, businesses must account for both completed transactions as well as how often customers change the bundle of goods in the online shopping cart. For example, customers may use shopping carts while comparison shopping on different websites. Our market research found licensing costs as low as $12 per month for up to 30 information requests each month, and as high as $200,000 per year for unlimited information requests. Businesses and others familiar with sales tax software told us that licensing fees are only one of multiple costs required to collect sales taxes in multiple states. As such, simplification proposals that include provisions for states to pay these licensing fees may not mitigate significant costs to businesses transitioning to software assisted multistate collection. Businesses will still incur start-up costs and additional administrative costs, even when states pay the licensing fees on the use of the software. Even under such proposals when software comes with no licensing fees, mapping can be labor intensive for businesses selling products that state tax laws treat differently, and integration can create costs for businesses using custom software or software that is not widely used. Further, for software to reduce administrative costs, it must be integrated with more than just a business’s shopping cart system. 
However, simplification proposals that only cover software licensing costs and integration with the shopping cart system may leave businesses with the costs of a more extensive integration. Businesses would either have to incur additional costs to better integrate sales tax software with existing business information systems (such as a general ledger accounting system), or regularly reconcile receipts and records manually to prepare sales tax returns for all states where they make sales. Additional costs for software include administrative costs associated with use of the software. These costs are incurred because even automated software requires some administrative work by staff. The use of optional premium services offered by software providers may further reduce these administrative costs, but increase software costs in the process. Administrative costs tend to be highest, as a proportion of taxes collected, for the smallest sellers. Some businesses told us that collecting sales tax in all jurisdictions where they have customers would increase staffing costs, even when collection is facilitated by software. Premium services commonly offered by software providers assist businesses with preparing and filing tax returns. While electing to use these services may save businesses labor costs, businesses incur additional fees to use them. We interviewed several businesses based in states that do not collect a sales tax. They told us that they are already researching software options should the need to collect sales tax on all remote sales arise. These businesses told us that they have little experience with collecting sales tax. As reported above, in the first year, start-up costs for the software are much higher than the ongoing licensing fees. Businesses that do not need to collect sales tax in their own state may be less likely to already have multistate tax collection software or in-house expertise.
Businesses May Incur Increased Audit and Assessment Costs as Exposure to Collecting Jurisdictions Grows

If states are allowed to require businesses to collect tax on all remote sales, businesses we spoke with expect audit and assessment related costs to rise because of increased exposure to more tax jurisdictions. Attorneys told us that state revenue departments also employ other low-cost enforcement tools that create compliance costs. Officials from three state revenue departments that we spoke with said that they primarily focus their audits on large businesses because audits are resource intensive. Officials from one agency acknowledged that other enforcement tools, such as a letter audit, require fewer resources to use. Some businesses told us that they already expend significant resources responding to audits on sales tax collection and remittance. These costs include making staff available, developing justification for tax claims, and complying with document or information requests. A representative from the tax department of one company with nexus in most states said that auditors return every few years to audit the company and that they are currently contending with 8 to 10 audits from different tax authorities. They expect audit-related costs to grow with exposure to more jurisdictions and expect that this growth will require hiring additional staff. Another business we spoke with said they had just dealt with an expensive audit that lasted 3 years. They reported that they do not have the resources to comply with similar audits from other jurisdictions. We interviewed 11 businesses, attorneys, or representatives from the business community who said that fear of increased audits, should states gain expanded authority to tax remote sales, is a legitimate concern for businesses. Attorneys we spoke with offered several reasons that small- and medium-sized businesses will be audited should states gain the authority to tax remote sales.
One explained that sales tax audits of small businesses often identify non-compliance and produce revenue. Another said that assessments prepared by revenue offices generally carry a presumption of accuracy. In practice, this places the burden of proof on the retailer to rebut claims made by revenue offices. However, some state revenue departments we spoke with said that they do not expect their audit resources to increase; those resources would therefore be spread more thinly if states are allowed to require businesses to collect tax on remote sales. Two state revenue offices explained that this change would mean they have a much larger universe of businesses from which to select. As such, it is unknown how frequently businesses might have to contend with concurrent audits in different states. Travel to, and securing counsel in, remote jurisdictions would create additional costs for audited businesses that would not occur in the current environment. A business representative explained that the CPAs and attorneys they employ, or have on retainer, may not be able to represent the business in an out-of-state venue. As such, businesses would need to retain counsel qualified to practice in the assessing jurisdiction. Two business representatives also told us that businesses may be less successful at challenging tax assessments in out-of-state courts. This may prompt them to settle claims in an out-of-state court that they might litigate in their home state. Further, the federal Tax Injunction Act restricts businesses' ability to seek relief in federal court for matters related to state taxes. In addition to audits, state revenue departments have many low-cost enforcement tools at their disposal. One example is the letter audit. An attorney we spoke with explained that in this process, a revenue office sends a letter to a business stating that the office suspects they owe sales taxes. The business incurs costs to prove the state wrong to avoid the assessment.
In some cases, states bypass the assessment process and sue the business, arguing that the business has nexus in the state and owes tax. In conducting interviews, we found that states also send information requests and questionnaires to businesses designed to uncover whether they have nexus obligations. One representative from a trade group we spoke with said that a business will normally be responsive in order to remain in compliance with the law, despite potential uncertainty about the state's authority to collect. Businesses we spoke with in states that do not collect a sales tax generally were not collecting sales taxes for other states, so they had little experience with a sales tax audit. Further, some businesses in these states were not tracking the legal requirements on businesses imposed by out-of-state jurisdictions. Businesses located in states without a sales tax also may incur costs to alter business practices after initial exposure to sales tax audits. This might happen because the procedures they currently use may not withstand the taxing states' scrutiny.

Businesses Incur Costs to Stay Current with Legal Requirements in Multiple Jurisdictions, but are Still Exposed to Risk

If states gain the authority to require businesses to collect tax on remote sales, businesses will have to incur costs to understand their new compliance obligations, which can differ by state or tax jurisdiction. The related liability cost increases along with an increase in exposure to more tax jurisdictions. These costs will likely increase the most for businesses that do not have established legal teams, software systems, or outside counsel to assist with compliance related questions. We identified three areas, based on interviews with businesses, where these costs are most likely to occur. First, businesses expressed concern that changes in legal precedent could expose businesses to liability for past sales.
Second, some businesses reported paying assessments based on contestable laws. Third, some businesses reported instances where businesses' actions created nexus that led to an unforeseen liability.

Retroactive Enforcement

The U.S. Supreme Court's 1992 decision in Quill Corp. v. North Dakota constrained states' ability to tax sales originating from outside the state. We identified four states that recently changed their laws in an attempt to re-litigate this decision. A representative from the business community told us that if the U.S. Supreme Court were to overturn the Quill decision, laws already on the books in many states could be enforced. For example, Alabama's Department of Revenue told us that they have asserted jurisdiction over remote sellers under a previously unenforced law to further litigation challenging the Quill decision. They acknowledged that this action has the potential to allow retroactive enforcement, should the challenge succeed. However, they said the state was most interested in prospective compliance. Some businesses worry that, if legal arguments like these prevail, states will not confine themselves to prospective enforcement efforts. They fear that states could decide that businesses owe taxes from years when enforcement of the law did not impose collection obligations on out-of-state businesses.

Risk of Overpayment Due to Compliance Culture

State revenue departments mail assessments, questionnaires, and other correspondence to out-of-state businesses. These may direct businesses to provide information, pay taxes, or register to collect sales taxes. In some cases, the Quill decision protects businesses from obligations to comply with these directives. Nevertheless, some businesses have complied. One representative from a trade organization representing remote businesses said that the natural tendency for a business is toward compliance.
This may lead them to pay or comply without thoroughly examining the strength of their legal position. He cited a state that mailed around two hundred demand notices to out-of-state businesses for unremitted sales tax. Even though he said that these businesses did not have nexus in the state, more than half of the businesses remitted payment. Another business told us that they registered to collect in a state that was attempting to challenge the Quill decision because they judged that the cost of challenging the state's new law was likely to exceed any increased compliance costs. This business said that collecting the tax, but waiting to remit it pending the results of a legal challenge, would expose the business to penalties and interest.

Risk of Unknown Nexus Obligations

Because state tax laws are complex and subject to change, businesses may not always be aware of their obligations under state law. Our research revealed cases where businesses incurred collection obligations unknowingly. One lawyer, whose practice represents several businesses in sales tax related issues, described a business that was contacted by a nearby state's revenue office and asked to provide information on its use of fulfillment services from a popular marketplace provider. The business downloaded a report from the marketplace provider and sent it to the revenue office. The business said that the marketplace provider had formatted the information in a way that made it uninterpretable without knowledge of the location codes it contained. The state revenue office was able to use the report to show that the marketplace's fulfillment services stored the business's property in the state. Stored property suffices to create a nexus obligation, and the business received an assessment for back taxes, interest, and penalties dating back to when the property was first stored in the state.
The lawyer we spoke with has seen six similar cases since that one and said that the addition of interest and penalties often doubles the amount of taxes owed. Active monitoring of sales tax laws across the country can help businesses ensure they are compliant with all of their legal obligations. Businesses we spoke with differed in the way they conducted this research. Some undertook the research in-house. Others used software that provides updates when laws change. Some said that they require outside legal counsel to resolve difficult questions. In all cases, this research imposed additional costs on businesses. Four businesses in states without sales taxes told us that they have incomplete research or a lack of familiarity with recent changes to state laws that impose obligations on out-of-state businesses. Businesses like these may encounter additional costs in the form of unforeseen liabilities or costs to conduct research.

Strategies Show Some Potential for Containing Risks

In the course of our research, we identified strategies with the potential to mitigate the concerns laid out above. However, much would depend on the specifics of any legal changes. These strategies include: simplification rules for collection and remittance in multiple states, small business exemptions for businesses under a certain size, transition periods for businesses to come into compliance, and limitations on lookback periods.

Simplification Rules May Help Businesses Understand Collection Obligations

Simplification rules for remote sellers could provide businesses with a single compliance requirement instead of varied requirements from the jurisdictions with the authority to assess sales tax. These rules could lower research and compliance costs, and leave businesses less exposed to hidden liabilities. One multistate effort has created a set of simplified rules for collection and remittance.
However, one attorney we spoke with said that the rationale for including and excluding certain items in the classification is unclear, and this leaves room for states to interpret taxability in different ways. Further, some of the simplification proposals we analyzed do not apply to state definitions of nexus. As such, it is possible that businesses might be aware of and compliant with the simplification rules, but unclear on how to structure their operations to avoid the less simple rules that come from acquiring nexus. These cases might require additional research costs and legal services to resolve and may expose a business to unforeseen liability.

Small Business Exemptions May Help Small Businesses Avoid Additional Costs

Small business exemptions would ensure that businesses with sales below a specified threshold would not be liable for taxes to remote jurisdictions. This could reduce research and liability costs for small businesses because these businesses would only have to verify that their sales were below the threshold that requires collection. However, some business representatives we spoke with said that the thresholds contained in many proposals were too low. The Small Business Administration defines a small business as one with $32.5 million in annual sales for electronic shopping retailers, and $38.5 million for mail-order houses. Federal legislative proposals allowing states to tax remote sales have included a variety of small business exemptions. For example, one proposal would initially exempt small businesses with annual sales below $10 million, but that exemption would decline and eventually expire after 3 years. Another proposal would set a permanent exemption of $1 million in annual sales. New state laws and administrative regulations also require out-of-state sellers to collect taxes. We identified small seller exemptions in some of these laws and regulations as low as $10,000 and as high as $500,000 in annual sales into the state.
However, one business owner said that $25 million in annual sales is still a small business. The owner explained that such businesses can quickly go bankrupt and have little capital to survive downturns in the business cycle. Business representatives said that business models which emphasize low margins and high sales volume are common in remote sales. These businesses may have limited resources for additional compliance obligations.

Transition Periods Can Help Businesses Prepare for Collection Obligations

Transition periods may give businesses time to examine their legal obligations and secure tools, such as software or legal counsel, to facilitate compliance, but can prompt increased demand for assistance and services. Our work has shown that tax system transition deadlines are likely to prompt a large volume of requests from taxpayers for compliance assistance from taxing authorities. Because businesses reported that additional software or legal services would be required to transition to new collection obligations, we expect demand for such services to increase before transition deadlines.

Limits to Lookback Periods May Protect Newly Registered Businesses

Limited lookback periods restrict how far back a state revenue agency can examine a business's records after that business registers to collect taxes. Attorneys that we interviewed said that registering to collect with a state can trigger an examination of that business's records with an eye to discovering whether the business owes taxes for sales prior to the registration. They explained that if businesses are not protected by limitations to lookback periods upon registration, this may inhibit them from registering to collect in new states. One business owner told us that the risks of additional scrutiny and unforeseen liability have prevented him from registering to collect in a nearby state where he would like to do more business.
Limitations to lookback periods would give businesses more confidence in registering to collect because they would be less likely to incur additional scrutiny or an unforeseen liability as a result of the registration.

States Generally Do Not Anticipate Major Administrative Costs or Challenges If Given the Authority to Require Businesses to Collect Tax on All Remote Sales

Actions by state and local governments to increase tax collections on remote sales could require additional government resources to administer sales taxes. State revenue agency officials, as well as representatives from the Federation of Tax Administrators and other state government organizations we interviewed, did not identify any major increases in administrative costs or significant administrative challenges if states were given the authority to require businesses to collect taxes on all remote sales. In the absence of congressional action to grant states expanded tax collection authority on all remote sales, state legislatures have recently considered, and in some cases enacted, new laws designed to increase tax collections on remote sales. As these proposals were being considered, we identified five revenue agencies or legislative budget offices that had estimated the costs to implement and administer these new programs. For example, one state's analysis concluded that current state revenue agency resources were sufficient to implement and administer the new program, and another state's analysis determined that the program would have only a moderate effect on the state revenue agency. Other state analyses that estimated additional annual costs varied widely, from a few hundred dollars to $4 million. While these estimates varied, we found that this information helped to illustrate potential challenges and costs state and local governments could face in trying to collect taxes from all remote sellers.
Interviews with three state revenue agency officials who had already implemented, or were beginning to implement, new programs also provided us further information on potential administrative costs and challenges.

Sales Tax Administration Activities

Registration of vendors. States need to process registration forms from new vendors, including out-of-state vendors. States also need information to help identify unregistered vendors.

Returns processing. States require resources to process sales tax returns, including returns from out-of-state vendors. States typically capture data in information systems, and identify and process over- or underpayments.

Enforcement efforts. Audit resources are needed to verify vendors' total taxable sales. When auditing out-of-state vendors, state revenue departments may face higher travel costs.

Collections. States send delinquency notices to vendors for late, miscalculated, or underpaid collections.

Taxpayer services. States provide education efforts and taxpayer assistance to improve voluntary compliance.

We previously reported that the following state functions are typically associated with administering sales taxes: identifying and registering vendors; returns processing; enforcement; collections; and taxpayer services (see sidebar titled "Sales Tax Administration Activities"). If remote sellers were required to collect state taxes regardless of nexus, states may need to process an influx of new registration forms from out-of-state vendors. State revenue agency officials as well as representatives from the Federation of Tax Administrators told us, however, that they did not anticipate that registering new out-of-state vendors and processing additional returns would pose major challenges to state agencies. They explained that state revenue agencies already process a large volume of registration changes annually as new businesses are created or existing businesses fail.
As a result, they expected that new registrations from out-of-state sellers would not represent a significant strain on current resources. Potential increases in new out-of-state vendor registrations could be lessened by states' small seller exemptions. Some state proposals for increasing tax collections on remote sales have exempted smaller out-of-state sellers with annual sales into a state of less than a certain dollar amount, or annual transactions of less than a certain number. Recent small seller exemptions have set annual sales exemption thresholds ranging from $10,000 in Washington State to $500,000 in Massachusetts. One revenue agency official from Alabama, which began enforcing a new remote-seller regulation in 2016 that has a $250,000 small seller exemption, told us that the approximately 100 newly registered out-of-state sellers represent an extremely small share of the state's total 40,000 registered sellers. States may need additional resources to process new tax returns from out-of-state vendors and to verify out-of-state vendors' total taxable remote sales into a state. However, as tax administrators noted above with regard to new vendor registrations, any increase in out-of-state returns processing may be minimal when compared to the volume of routine in-state returns. When processing new out-of-state returns, states may need to decide whether to capture the same amount of data from out-of-state filers as they currently do for in-state filers in order to limit errors and the resources required for follow-up. Depending on whether and how some states choose to centralize registration and reporting for out-of-state vendors, some administrative costs and burdens associated with these functions might be reduced or mitigated.
For example, a revenue agency official from Alabama told us that implementation of its new administrative rule (requiring out-of-state vendors to collect taxes on sales to Alabama customers) has been facilitated by having its state revenue department serve as a centralized collection point on behalf of local tax authorities. Thirty-seven states, like Alabama, have local sales taxes in addition to statewide sales taxes. Some of these local taxes are already centrally collected by a state revenue agency, but in some states, local authorities collect them. States that are members of the Streamlined Sales and Use Tax Agreement have agreed to allow centralized state registration and reporting for out-of-state vendors. Louisiana, another state with many local sales tax jurisdictions, recently enacted a new law creating a sales tax board for promoting “uniformity and efficiency” of local sales and use tax administration. The law also created an independent agency within the state’s Department of Revenue for administering and collecting state and local taxes related to remote sales. When allocating enforcement and collections resources, state administrators may need to weigh trade-offs between pursuing incidences of noncompliance (typically higher among small filers) against potential revenue effects (greatest among large filers). Representatives from the Federation of Tax Administrators did not anticipate significant increases in enforcement costs because they said most sales tax noncompliance is detected not through intensive audits but through less costly automated matching of electronic data such as credit card sales receipts with business-reported sales. They also said that most noncompliance issues are resolved via automatically-generated correspondence with taxpayers. That is, most taxpayers resolve additional amounts owed or other noncompliance matters after receiving notification letters from state revenue agencies. 
One state revenue agency official told us that his agency may experience higher travel costs associated with audits of out-of-state vendors. The same official believed, however, that this might merely require reallocating current travel expenses from in-state audits to out-of-state audits rather than requiring an increase in travel budgets. The Oklahoma legislature recently authorized the state revenue agency to create an out-of-state sales tax enforcement division. While the final bill provided the state agency with flexibility to staff this division using existing resources, the original proposal would have mandated opening a new office outside the state and staffing it with a minimum of five employees at an estimated annual cost of $450,000. Finally, state revenue agency officials and representatives from the Federation of Tax Administrators told us that they anticipated some additional resources may be needed for taxpayer assistance, such as providing increased telephone assistance or publishing guidance for new out-of-state vendors. Demand for taxpayer assistance is likely to be higher from smaller out-of-state vendors with less experience in collecting and remitting taxes to other states. The complexity of a state's sales tax laws, such as rules for when to exempt a certain type of product based on how it is used, is also likely to affect the level of taxpayer service requested by new out-of-state vendors.

States Implementing Notice and Reporting Requirements May Experience Difficulties Matching Sales Data to Taxpayer Information

We identified at least four states that have enacted new "notice and reporting" laws in attempts to increase tax collections from remote sales. Under these laws, if an out-of-state seller chooses not to collect taxes on sales into a state, then the seller is required to notify its customers of state use tax obligations, send customers annual summaries of their purchases, and share that information with state revenue agencies.
One state’s fiscal analysis of its new notice and reporting law estimated that out-of-state retailers will decide to collect the tax rather than comply with notice and reporting requirements. The handful of new notice and reporting laws that we identified have only recently become effective, so it is unclear to what extent this has or will occur. We found two recent estimates of costs to implement and administer these new notice and reporting laws. The Louisiana Legislative Fiscal Office estimated that the state revenue agency would incur costs of $90,000 annually to administer a new notice and reporting law. By contrast, the Washington Department of Revenue estimated that it would cost about $4 million annually to administer the state’s new notice and reporting law. Washington revenue officials told us that most of these costs come from hiring new staff. They explained that increased costs are common when they must enforce new provisions of the tax code because it is not easy to reassign tax staff. State revenue agencies implementing new notice and reporting laws may experience difficulty matching sales information from out-of-state retailers with taxpayer data. Revenue officials from Colorado told us that the annual sales reports remote sellers are required to send to their customers and share with state revenue agencies, will not contain unique taxpayer identification data like Social Security numbers. Without these data, these officials explained that revenue agencies will need to use customers’ names and addresses to match with taxpayer returns. If buyers with similar names make use of the same delivery address, this may complicate efforts to identify a taxpayer’s use tax obligation. Colorado and Washington officials also told us that once their revenue agencies begin sending letters to taxpayers with estimated use tax obligations, they anticipate significant increases in phone calls and other requests for taxpayer assistance. 
In order to manage expected increases in call volumes and control costs, Colorado officials said they plan to be selective about sending notices in the first years. Officials from Washington’s Department of Revenue told us that one part of their new notice and reporting law applied to e-marketplaces rather than sellers. Officials told us that it is easier for states to enforce compliance against one large entity (the e-marketplace company) instead of the thousands of smaller sellers that sell through the e-marketplace’s platform. Washington’s notice and reporting law requires e-marketplace companies to comply with the notice and reporting requirements if the e-marketplaces choose not to collect and remit taxes on behalf of their individual sellers. In August 2017, the Multistate Tax Commission began offering a general sales tax amnesty program for e-marketplace sellers. During the amnesty period, the commission would accept applications from qualifying remote sellers. The sellers would affirm in their applications that their only connection with the participating state or states is through inventory housed in an e-marketplace’s warehouse or fulfillment center. In exchange, one group of participating states would agree to waive back tax liabilities for sales and use taxes, as well as for income and franchise taxes, including penalties and interest, without regard to any lookback period. At the time of our report, 24 states and the District of Columbia were participating. The program was set to end in November 2017. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate Senate and House committees. We will also send copies of the report to the Secretary of the Treasury and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.

Appendix I: Methodology for Revenue Gain Estimates

To address our objective on estimating how much revenue state and local governments could gain by being able to collect taxes on sales made by all remote sellers, we updated a model we used to prepare similar estimates in 2000. The sidebar titled "Steps Involved in Estimating Potential Revenue Gains" summarizes the steps in our methodology. Compared to when we did similar analysis in 2000, there are some areas where we have better data, but a single point estimate is still not possible because of uncertainty surrounding estimates of several key inputs to our model. In our 2000 report, there were few reliable data sources on which to base our calculations and adjustments. We noted then that projections of sales were particularly difficult to make given the rapidly changing e-commerce environment. Today, there are more data sources available on current and future e-commerce sales. In addition to the past uncertainty regarding the magnitude of remote sales, we reported in 2000 that there was considerable uncertainty about the amount of tax that state and local governments were already collecting from these remote sales. Today, data are more easily available on where e-commerce companies have a substantial presence (referred to as nexus) in states. Some research companies track this information, and more companies are explicitly stating on their websites in which states they collect sales tax. Still, we had to make several broad assumptions about the volume of e-marketplace sales, including the extent to which e-marketplace sellers were already collecting sales taxes.
As states continue to research tax losses associated with e-marketplace sales and pursue increased enforcement actions, we believe that more data could help improve the accuracy of our estimates. Additional data from e-marketplace companies about the extent to which their sellers are collecting sales taxes through the e-marketplaces' optional tax services would also help improve further analysis in this area.

The Volume and Composition of Internet and Other Remote Sales

To obtain sales estimates, we reviewed academic, government, and private-sector studies. We also contacted these authors and other specialists in this field to identify other potential sources of sales estimates. Some state revenue agencies and other researchers who have estimated tax revenue losses from remote sales have used data from the U.S. Census Bureau to derive their base estimates of total Internet and other remote sales. While we did use some Census data in our analyses, we primarily relied upon data from Forrester Research (a research company whose data we had used in our 2000 report) to arrive at low and high scenario estimates for total sales volumes for different types of remote sales, as shown in table 3. We chose not to provide a single point estimate, because the low and high scenarios illustrate how the numbers can vary—sometimes non-trivially—depending on reasonable assumptions about the underlying uncertainties. Forrester Research's estimates of business-to-consumer (B2C) e-commerce sales for the years 2016 to 2021 presented data on 31 different product categories to which we could then apply specific state sales tax rates and exemptions. By contrast, similar Census data were more limited in that: the data contained fewer categories (13 merchandise lines plus non-merchandise receipts); the most recent data were for the year 2015; and the data did not include e-marketplace sales.
Forrester Research’s total online retail forecast for 2016 was about $400 billion and nearly $450 billion for 2017. We reduced this total by about $20 billion by removing sales for two product categories (movie tickets and event tickets) that were more akin to services industry (rather than retail) activities. Business-to-Consumer E-Marketplace Sales Unlike Census data, Forrester Research included sales from e-marketplaces in its e-commerce forecasts. Sales tax losses associated with e-marketplace sales have become an increasing area of focus for state revenue agencies, and so it was important to include in our analysis. To separate e-marketplace sales from the sales of other Internet retailers, we analyzed data from the annual reports of three leading e-marketplace companies and data we obtained from Internet Retailer. We estimated the value of merchandise being sold on these three leading e-marketplaces to be about $110 billion in 2016. However, some of these are sales by other Internet retailers using the e-marketplaces to sell their goods. That is, some retailers operate stores and their own websites but also sell their goods through “storefronts” on the e-marketplaces. We adjusted our total e-marketplace sales estimate to avoid double-counting retailers’ Internet sales in our analysis. In the end, we estimated that e-marketplace sales (excluding the sales of Internet retailers using e-marketplaces) accounted for 20-25 percent of total 2017 online retail sales ($85 billion to $106 billion). Business-to-Consumer Other Remote Sales Data sources on other remote sales like mail-order catalogs or television shopping channels are more limited, compared to available data on e-commerce sales. A representative of catalog companies we interviewed told us that it is becoming increasingly difficult to attribute retail sales to particular sales channels. For example, many catalog retailers also have websites or sell their goods in retail stores or via e-marketplaces. 
We decided that the best available estimates could be derived by separating aggregate Census data on Electronic Shopping and Mail-Order Houses into separate e-commerce and mail-order components. We first estimated the mail-order portion of the top-line Census category to be about $150 billion in 2016, but then removed about $95 billion in estimated mail-order prescription drug sales because nearly all states exempt prescription drugs from sales taxes. Using data on historical growth rates for the mail-order catalog industry, we then estimated the range of other remote sales for 2017 to be from $58 billion to $61 billion.

Business-to-Business Internet Sales

Forrester Research's estimates of business-to-business (B2B) e-commerce wholesale trade for the years 2016 to 2021 presented data on 11 different product categories to which we could then apply specific state sales tax rates and exemptions. While similar Census data included 19 different product categories, the most recent Census data were only for year 2015. Forrester's estimates exclude sales via electronic data interchange networks, an exclusion that accounts for some of the difference from the Census Bureau's larger e-commerce estimate. Forrester Research's total B2B forecast was about $825 billion for 2016 and about $885 billion for 2017. We removed about $125 billion in petroleum and petroleum products sales because these sales would generally be subject to excise (not sales) taxes and, furthermore, these sales would be taxed on volume (not dollar value) and we lacked volume data, such as gallons sold. We also lowered the value of the motor vehicles and parts category by 40 to 60 percent under the assumption that most vehicles are taxed when registered with state motor vehicle agencies and that sales and use tax compliance for them is generally high.
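Under the stated assumptions, the arithmetic of these B2B base adjustments can be sketched as follows. The motor vehicles and parts category value is a hypothetical placeholder, since the text does not report that figure; the other values are the rounded 2017 numbers given above.

```python
# Hedged sketch of the B2B sales-base adjustments described above, using the
# rounded 2017 figures from the text. The motor vehicles and parts value is
# a hypothetical placeholder; the report does not state it.

b2b_forecast_2017 = 885e9     # Forrester B2B e-commerce forecast for 2017
petroleum_sales = 125e9       # removed: generally subject to excise, not sales, taxes
motor_vehicle_sales = 60e9    # hypothetical category value, for illustration only

# The motor vehicles and parts category is lowered by 40 to 60 percent,
# producing a low and a high adjusted taxable base.
low_base = b2b_forecast_2017 - petroleum_sales - 0.60 * motor_vehicle_sales
high_base = b2b_forecast_2017 - petroleum_sales - 0.40 * motor_vehicle_sales
```

With these placeholder inputs, the adjusted base would range from $724 billion to $736 billion; the actual model used category-level data rather than a single aggregate figure.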
The Taxability of Remote Sales

To estimate the amount of tax due on remote sales, we apportioned a share of total e-commerce and other remote sales to each state (and the District of Columbia) and then applied each state's tax exemptions and rates to those sales. We allocated sales across states by assuming that each state's share of sales to individual consumers is proportionate to the state's share of U.S. disposable personal income, and that each state's share of sales to businesses is proportionate to the state's share of U.S. gross domestic product. We made this allocation for each of the B2C and B2B product categories. We then determined which categories of products and services are taxed by state and local governments and at what rates. Our main sources for state and local rates and exemptions were CCH's State Tax Guides and Multistate Quick Answer Charts, Federation of Tax Administrators' summary tables, and the Tax Foundation's 2017 State Business Tax Climate Index. Eight states plus the District of Columbia do not have additional local sales tax rates levied by cities, counties, or other special taxing districts. For the other 37 states with both statewide and local tax rates, we used weighted average local rates as estimated by the Tax Foundation after first comparing and testing these rates with similar data published by the Washington State Department of Revenue. For B2B e-commerce wholesales, we made additional adjustments to reflect the fact that many B2B sales are exempt from tax based on the type of purchaser or the type of use. These purchaser and use exemptions are important for estimating the proportion of B2B sales that are exempted as raw materials or as inputs incorporated into a final product. Our sources of sales estimates did not disaggregate them by type of purchaser or types of use.
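The apportionment step described above can be sketched as a minimal calculation. All figures below (the national sales total, the income shares, and the combined rates) are hypothetical placeholders, not the model's actual inputs.

```python
# Minimal sketch of the allocation described above: apportion a national
# remote-sales estimate to states by disposable-personal-income share, then
# apply each state's combined (state plus weighted-average local) tax rate.
# All figures are hypothetical placeholders, not the model's actual inputs.

national_remote_sales = 430e9  # hypothetical national B2C remote sales, dollars

# Hypothetical income shares (must sum to 1.0 across all states).
income_share = {"State A": 0.12, "State B": 0.05, "State C": 0.83}

# Hypothetical combined state + weighted-average local sales tax rates.
combined_rate = {"State A": 0.085, "State B": 0.06, "State C": 0.07}

potential_tax = {}
for state, share in income_share.items():
    state_sales = national_remote_sales * share          # apportioned sales
    potential_tax[state] = state_sales * combined_rate[state]

total_potential_tax = sum(potential_tax.values())
```

In the actual model this calculation was repeated for each B2C and B2B product category, with B2B sales apportioned by gross domestic product share rather than income share, and with category-level exemptions applied before the rates.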
In order to estimate the percentage of business-to-business sales that would be exempt, we used input-output account tables prepared by the Department of Commerce's Bureau of Economic Analysis. These tables show the inter-industry transactions of the U.S. economy for 2015 and provide detailed information on the composition of inputs and the distribution of outputs of all major U.S. industries. On the basis of our analysis of the input-output data, we excluded a range of 50 to 60 percent of all B2B e-commerce wholesales from our model (see row titled "less exempt intermediate goods" in table 3).

The Extent to Which Remote Sellers Already Collect Taxes

Seller collection rates represent the share of taxes on remote sales that state and local governments can currently require remote sellers to collect due to remote sellers' substantial presence (referred to as nexus) in a state. To estimate seller collection rates for selected categories of e-commerce and other remote sales, we followed an approach similar to that in our 2000 study. We made separate estimates for Internet retailers, e-marketplaces, other remote retailers, and merchant wholesale e-commerce sales because a different population of firms dominates in each group. Again, we chose not to use a single point estimate, because the low and high alternatives illustrate how assumptions made about collection rates can vary our model output—sometimes non-trivially. The ranges of our estimates are shown in table 4. To make our estimate for Internet retailers, we analyzed data from Internet Retailer's 2017 list of the leading 1,000 U.S. companies to determine the states in which they collect sales taxes. We first used data from company financial reports to adjust Internet Retailer's 2016 global sales figures for the top 100 companies to reflect only U.S. Internet sales.
We also used company annual reports and a smaller list of leading Internet retailers from eMarketer to test the accuracy and reliability of Internet Retailer's data, which we found to be sufficiently reliable for our purposes. We then verified Internet Retailer's data on the states where each of the top 100 companies were collecting sales taxes by comparing it to sales tax collection policies published on companies' websites or lists of companies' physical locations (such as retail stores, warehouses, or company headquarters). We performed our research on companies' collection policies and nexus from May to June 2017. During this period, some companies' collection policies or nexus changed from the date when Internet Retailer published its Top 1000 list in April. For example, the largest Internet retailer completed agreements with the remaining few states where it was not previously collecting sales tax. As of September 2017, the company stated on its website that it collects taxes on sales of all its products sold to customers in the 45 states (plus the District of Columbia) with statewide sales taxes. For 27 of the top 100 companies, Internet Retailer did not report any data on states where the companies were collecting sales taxes, so we used the results of our own nexus research. For the remaining companies where we could make comparisons, we found Internet Retailer's data on companies' nexus to be sufficiently reliable for our purposes. On the basis of our nexus research, we found that about 40 percent of the top 100 companies were collecting in all 45 states (plus the District of Columbia) with statewide sales taxes, and three-quarters were collecting in at least half the states. Only 2 of the top 100 companies were collecting in, or had nexus in, only one state.
To estimate the percent of sales on which Internet retailers were currently collecting taxes, we first allocated each company's total sales to states based on each state's share of national disposable personal income. We then multiplied each state's share of sales by the combined state and local government weighted average tax rate to estimate the total tax dollars that could be collected on all sales regardless of nexus. We then used our nexus data for each company to estimate the tax dollars companies were already collecting. The ratio of these two estimates (total taxes collectible under current law, divided by total taxes that could be collected if states had expanded authority) is our estimated "seller collection rate." For the top 100 companies on Internet Retailer's list, we estimated this seller collection rate to be from 87 to 96 percent. We then extended our research of companies' nexus to the remaining 900 companies on Internet Retailer's top 1000 list. These remaining 900 companies accounted for about 20 percent of the total dollar sales volume for all 1,000 companies on Internet Retailer's list (after we had adjusted global sales to U.S.-only sales for the top 100). For about one-third of these 900 companies, Internet Retailer did not report any nexus data, so we did our own research. For the other two-thirds, we relied on Internet Retailer's nexus data because we found it sufficiently reliable based on our analysis of the first 100 companies listed. Compared to the top 100 companies, these remaining 900 companies were far less likely to have nexus (or say they were collecting) in all or most states. About half of the remaining 900 companies had nexus (or said they were collecting) in only one state. In terms of tax dollars, we estimated that these 900 Internet retailers were already collecting from 44 to 49 percent of the potential taxes that state and local governments could require to be collected if given expanded authority over all remote sales.
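The seller collection rate computation described above reduces to a simple ratio, which the sketch below illustrates for a single company. The per-state allocations, rates, and nexus flags are hypothetical placeholders, not the data used in our analysis.

```python
# Hedged sketch of the "seller collection rate" ratio described above: the
# taxes a company already collects under current nexus rules, divided by the
# taxes it could be required to collect if states had expanded authority.
# The per-state allocations below are hypothetical placeholders.

# Each tuple: (sales allocated to a state, combined tax rate, collects there?)
state_allocations = [
    (100e6, 0.07, True),   # state where the company has nexus and collects
    (50e6, 0.08, False),   # state where it does not collect today
    (25e6, 0.06, True),
]

collected_now = sum(s * r for s, r, collects in state_allocations if collects)
collectible_if_expanded = sum(s * r for s, r, _ in state_allocations)

seller_collection_rate = collected_now / collectible_if_expanded
```

With these placeholder numbers the rate works out to 68 percent; in the actual analysis this ratio was computed across all companies in a category and then aggregated into the ranges reported in table 4.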
For all 1,000 Internet retailers, we adjusted our estimates of dollars currently being collected by plus (+) and minus (-) 5 percent, which gave us a range of overall estimated collection rates from 78 to 86 percent for the category.

E-Marketplace Sellers Collection Rates

Our estimates of seller collection rates for e-marketplace sales span a wider range because less data were available on the extent to which these types of sellers already collect sales taxes. We could not find sufficiently reliable data on the physical locations of sellers who use e-marketplaces. The three major e-marketplaces (which we analyzed to estimate total e-marketplace sales) offer their sellers additional services that help sellers calculate and collect sales taxes, but not all sellers take advantage of these services. None of the e-marketplaces that we interviewed were able to provide us data on the extent to which their sellers currently collect sales tax. We found limited data on the extent to which e-marketplace sales include sales taxes. Two studies estimated that sales taxes were more likely to be collected by larger sellers, such as other retailers using e-marketplaces to sell some of their products. As we noted above when describing our methods for estimating total e-marketplace sales, we estimated that about 40 percent of Internet retailers sell their products not only via their own stores and websites, but also offer their products for sale on e-marketplace sites. In our calculations, we assumed that from 10 to 30 percent of e-marketplace sales were made by large sellers that collected taxes in most states (either due to nexus or collection agreements with states). After allocating those sales to states based on share of disposable personal income, we assumed that these large sellers collected taxes at the same rates we had estimated for the top 100 Internet retail companies.
We assumed that the remaining e-marketplace sales (from 70 to 90 percent) were made by smaller sellers with nexus in only one state, and that these small sellers were geographically distributed similarly to other Internet retailers with single-state nexus. After allocating those sales to states, we assumed that these small sellers collected taxes only in their home state. Our resulting seller collection rates for all e-marketplace sellers ranged from 14 to 33 percent. Due to a lack of sufficiently reliable data, we did not consider what percentage of e-marketplace sales are used items. According to one e-marketplace company, about 20 percent of items listed on its site are used. According to information from one tax software company, the taxability of used items for sale varies by state.

Other Remote Retailers Collection Rates

We could not find data listing the leading mail-order catalog companies and the states in which they have nexus and are collecting taxes. However, 116 of the companies in Internet Retailer's 2017 Top 1000 list were classified by Internet Retailer as "Catalog/Call Center" companies. These companies had from $5 million to $5 billion in 2016 Internet sales to U.S. customers and were distributed similarly to the full population of all 1,000 companies. Since we had already estimated their collection rates as part of our analysis on Internet retailers, we re-calculated an aggregate collection rate for these 116 companies. We adjusted our estimates of dollars currently being collected by plus (+) and minus (-) 5 percent, which gave us a range of overall estimated collection rates from 58 to 64 percent.

B2B E-Commerce Wholesalers Collection Rates

We followed a similar approach for estimating seller collection rates for business-to-business e-commerce wholesalers. We identified 106 companies on Internet Retailer's 2017 Top 1000 list with significant B2B sales.
Some of the companies appeared to sell exclusively to businesses, whereas others had both significant consumer and business sales. These companies had 2016 Internet sales to U.S. customers ranging from $5 million to $10 billion, and the subpopulation was distributed similarly to the overall Top 1000 population. The 106 companies were more likely to come from Internet Retailer's categories of automobile parts, computers/electronics, hardware/home improvement, and office supplies. Comparatively fewer were in Internet Retailer's categories of apparel/accessories, food/drug, health/beauty, or housewares/home furnishings. Because we had already estimated their collection rates as part of our analysis on Internet retailers, we re-calculated an aggregate collection rate for these 106 companies. We adjusted our estimates of dollars currently being collected by plus (+) and minus (-) 5 percent, which gave us a range of overall estimated collection rates from 85 to 94 percent.

The Extent to Which Purchasers Already Pay Tax

According to data we found, consumer and business use tax compliance rates have not changed significantly since we did similar analyses in 2000. As we reported then, consumer use tax compliance rates are estimated to be very low, whereas business use tax compliance rates are estimated to be very high. The most widely-cited study we found on consumer use tax compliance was prepared by the Minnesota legislature in 2015. The study reported that for those states that allowed taxpayers to report use taxes on their state income tax returns, the percentage of returns including use taxes ranged from a low of 0.2 percent in Rhode Island to a high of 10.2 percent in Maine. We used the various rates from the study in our calculations. For those states not listed in the Minnesota legislature study, we used a default median rate of 1.2 percent. We had more up-to-date data for California, Mississippi, and Vermont, which we used in our calculations.
We then adjusted the total dollar amount of use taxes paid by consumers from 0 to 10 percent to provide a range of inputs for our model. Making these adjustments had little to no effect on the final results. For business use tax compliance rates, we found data from five states that estimated business use tax compliance to be from 70 percent to 90 percent. In our model, we applied both these figures to give us a range of estimated use tax dollars paid by businesses.

Ranges of Potential Revenue Gains

Table 5 shows the potential revenue gains for 2017 that we calculated using various combinations of the low and high estimates for sales and seller collection rates described above. Here too, we chose not to provide a single point estimate because the low and high scenarios for potential revenue gains illustrate how the many underlying uncertainties affect the results. By adjusting various model inputs, we produced the lower estimates from the following assumptions and adjustments: (1) decreasing our estimated e-marketplace and other remote retailer sales; (2) increasing our estimated seller collection rates for all types of remote sellers; (3) increasing our estimated consumer and business use tax compliance rates; and (4) increasing our estimates of tax-exempt business inputs (intermediate goods). The higher estimate results from: (1) increasing our estimated e-marketplace and other remote retailer sales; (2) decreasing our estimated seller collection rates for all types of remote sellers; (3) decreasing our estimated consumer and business use tax compliance rates; and (4) decreasing our estimates of tax-exempt business purchases (intermediate goods).

Including Additional Factors in Our Model Would Likely Lower Our Overall Estimates of Potential Revenue Gains

We lacked sufficient data on four additional factors that, if included in our model, would likely reduce our estimates of state and local government revenue gains.
We lacked sufficient data on the extent to which requiring all remote sellers to collect sales taxes on all sales (regardless of a seller's nexus) would raise final prices to consumers and thus lower demand for goods sold remotely. Facing higher final prices, some online or other remote shoppers might shop instead at traditional brick and mortar retailers, or place orders with non-U.S. remote sellers. A representative from one major Internet retailer we interviewed believed that its customers placed a higher value on the convenience of shopping online and were less likely to change their shopping behavior if previously untaxed sales now included sales taxes. Some economists have concluded that consumers alter buying decisions when remote retailers begin to collect sales taxes. However, one of the tax policy specialists who reviewed our report noted a lack of consensus on this topic. We lacked sufficient data on what portion of e-commerce sales included in our model might be tax-exempt digital downloads of software, music, books, and games. Some states consider digital downloads to be a service (not a physical good) and therefore exempt from sales taxes. The variations in state laws governing the taxability of digital downloads were too numerous for us to reliably include in our model. Assuming that states do not change their laws to make these purchases taxable, it is likely that our estimates of potential revenue gains would be lower. We were unable to factor in the extent to which some small remote sellers might be exempt from sales tax collection requirements even if states had expanded authority over all remote sales. Recent state laws and regulations regarding taxes on remote sales have included small seller provisions that exempt sellers who make less than a specified dollar amount of sales or number of transactions annually into a state.
Proposed federal legislation granting states expanded taxing authority on all remote sales also includes different nationwide dollar amount exemptions for small sellers. We could not find sufficiently reliable data to estimate how many businesses or what dollar volume of sales might be exempt either at the state or federal level. As a result, our final estimates most likely overstate the total potential revenue gains for some, or all, states depending on what types of small seller exemptions might be enacted at either the state or federal level, or both. Sales to Tax-Exempt Entities We lacked sufficient data on what share of remote sales are made to tax-exempt entities. In our 2000 report, we were also unable to identify any estimates of sales by taxable versus tax-exempt purchaser. Officials from one state revenue agency we interviewed estimated the percentage of purchases made by tax-exempt entities or persons to be extremely low. Our final estimates of potential revenue gains would be lower for states had we included such an estimate in our model. Appendix II: State and Local Government Potential Revenue Gains Appendix III: GAO Contact and Staff Acknowledgments GAO Contact James R. McTigue, Jr. (202) 512-9110 or [email protected]. Staff Acknowledgments In addition to the contact named above, Tara Carter (Assistant Director), Mark Kehoe (Analyst in Charge), Brett Caloia, and Christine N. Dickason made key contributions to this report. Anne Stevens, A.J. Stephens, Cynthia Saunders, JoAnna Berry, Stewart W. Small, Donna Miller, Andrew Emmons, and Andrew Howard also provided key assistance.
Why GAO Did This Study Over the past two decades, e-commerce sales have grown rapidly, greatly expanding a category of sales known as remote sales. Under current law, states cannot require all businesses to collect taxes on remote sales. Congress has been considering proposals to change this. Little current, nationwide information exists to inform the debate. In this report, GAO (1) estimated revenue states and localities could gain by being able to require businesses to collect taxes on all remote sales, and (2) described what is known about the related compliance costs and challenges to businesses, and administrative costs and challenges to states. GAO estimated 2017 revenue gains to state and local governments based on actual and estimated sales data for remote sellers, excluding certain sales that were exempt from taxation or already collected by remote sellers with a substantial presence in a state. Ranges for GAO's estimates were based on a number of key assumptions that were varied based on available data. To describe related costs and challenges to businesses and states, GAO interviewed officials from state revenue agencies, subject matter specialists, and a wide variety of retailers with remote sales and the organizations that represent them. GAO provided a draft of this report to subject matter specialists who agreed with the general approach that GAO followed in making its estimates. What GAO Found Forty-five states and the District of Columbia levy taxes on the sale of goods and certain services, including those sold remotely, such as over the Internet. In 1992, the Supreme Court ruled in Quill v. North Dakota that a state can only require a business to collect and remit sales tax if the business has substantial presence, referred to as nexus, in that state. However, the decision stated that Congress could pass legislation to overrule this limitation. 
In general, under present law, if a seller does not have nexus in a state, and therefore does not collect tax, then a purchaser is required to pay a use tax in the same amount to his or her state government. GAO estimated that state and local governments can, under current law, require remote sellers to collect about 75 to 80 percent of the taxes that would be owed if all sellers were required to collect tax on all remote sales at current rates. GAO found that the extent to which state and local governments can require businesses to collect taxes varies with the type of remote seller and by state. GAO estimated that state and local governments could gain from about $8 billion to about $13 billion in 2017 if states were given authority to require sales tax collection from all remote sellers. This is about 2 to 4 percent of total 2016 state and local government general sales and gross receipts tax revenues. Some businesses would likely see increases in several types of costs if required to collect taxes on all remote sales. These costs would be higher for businesses not currently experienced in multistate tax collection. Officials from state revenue departments told us that they generally do not anticipate major administrative costs or challenges if given the authority to require businesses to collect tax on all remote sales. What GAO Recommends GAO is not making recommendations in this report.
Background The Military Selective Service Act established the Selective Service System whose mission is to be prepared to provide trained and untrained manpower to DOD in the event of a national emergency when directed by the President and the Congress. Additionally, the Selective Service System is to be prepared to implement an alternative service program within the civilian community for registrants classified as conscientious objectors during a draft. The Selective Service System is an independent agency, and it maintains a database that includes the names, birthdates, social security numbers, and mailing addresses of men ages 18 through 25 who could be drafted into the service of our nation, if needed, in the event of a national emergency. Further, the Selective Service System also is to conduct peacetime activities, such as public registration awareness and outreach; responding to public inquiries about registration requirements; and providing training and support to its workforce of career, non-career, full-time and part-time employees, uncompensated employees, and selected military personnel. The Military Selective Service Act does not currently authorize the use of a draft for the induction of persons into the armed forces. In order to meet a national emergency requiring a mass mobilization, Congress and the President would be required to enact a law authorizing a draft to supplement the existing force with additional military manpower. In the event of a draft, the regulation governing the Military Entrance Processing Stations would have the Under Secretary of Defense for Personnel and Readiness, with input from the military services, provide the Director of the Selective Service System with the number of personnel needed to be drafted. The Selective Service System would then conduct a lottery and send induction notices to selected draftees to supply the personnel requested by the Secretary of Defense. 
Each draftee would be required to report to one of DOD's 65 Military Entrance Processing Stations throughout the country at a specific time and date to undergo assessments of their aptitude, character, and medical qualifications in order to determine whether they are fit for military service based on standards set by each military service. Fully qualified draftees would receive induction orders and would be transported from one of the Military Entrance Processing Stations to the appropriate military service's entry-level training location. According to DOD, the Selective Service System must deliver the first inductees within 193 days from when the President and the Congress authorize a draft, and the military services then are to train, equip, and accommodate in other ways the new inductees. The military services are generally smaller today than they have been in many years. In fiscal year 2003, for example, DOD's total active military end strength was approximately 1.5 million, while in fiscal year 2017 the number was 1.38 million. Additionally, DOD's total workforce mix has also changed. For example, in late 2003 DOD directed the military services to convert certain military positions to federal civilian or contract positions based on evaluations that showed that many military personnel were being used to accomplish work that was not military essential and that civilians could often perform these tasks in a more efficient and cost-effective manner than military personnel. In May 2013, we reported that DOD officials stated that about 50,000 military positions were converted to DOD federal civilian positions or to contractors since fiscal year 2004 in order to devote more military positions to the support of ongoing military operations. Under current law, women may serve voluntarily in the armed forces but are not required to register with the Selective Service System. In the 1981 case of Rostker v.
Goldberg, the Supreme Court of the United States upheld the constitutionality of our nation's practice of registering only men. Recognizing that the purpose of registration was to prepare for a draft of combat troops, and because women were excluded from combat, the Supreme Court ruled that Congress could exclude women from registration. DOD gradually began to eliminate prohibitions on the assignment of women to direct ground combat positions, and on January 24, 2013, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff rescinded a 1994 rule preventing women from serving in direct ground-combat positions and directed the military services to open all closed positions and occupations to women by January 1, 2016. In December 2015, the Secretary of Defense announced that all military occupational specialties were open to women and removed all final restrictions on the service of women in combat. As part of the congressional notification process when DOD decided to open previously closed positions and occupations to women, the department was required to provide a detailed legal analysis of the implications of the proposed change with respect to the constitutionality of the application of the Military Selective Service Act to men only. DOD's July 2017 report on the purpose and utility of a registration system for military selective service stated that in December 2015, DOD advised Congress that the opening of all positions and occupations to women "further alters the factual backdrop" to the Supreme Court's ruling on a challenge to the exemption of women from selective service registration. However, the report stated that DOD took no further stance on the legal issues raised by the then-Secretary of Defense's decision to open all military positions to women. Further, DOD stated that it would consult with the Department of Justice as appropriate regarding these issues.
DOD Included Information on the Six Required Reporting Elements but Additional Information May Benefit the Commission’s Ongoing Review DOD Included Information on the Six Required Reporting Elements in Its Report DOD included information on each of the six required reporting elements in its July 2017 report to Congress and the Commission on the purpose and utility of a registration system for military selective service, as shown in table 1. In preparing the report, officials within the Office of the Assistant Secretary of Defense for Manpower and Reserve Affairs stated that they coordinated and consulted with subject matter experts at the Selective Service System and the Joint Staff as well as with officials from selected organizations within the Office of the Secretary of Defense, including the U.S. Military Entrance Processing Command. Further, the DOD report references internal DOD documents, a policy publication from the Congressional Research Service regarding Selective Service issues, statements from former DOD executives, and publications from contributing authors on web-based foreign policy and national security discussion sites for additional support. Additional Information May Be Useful for the Commission’s Ongoing Review of the Military Selective Service Process While DOD included information on the six required reporting elements in its report, we identified additional information that may be useful in supporting the ongoing review of the military selective service process by the Commission. Specifically, based on our review of DOD’s report and our prior work, the Commission could benefit from additional information on (1) DOD’s requirements and timelines for the induction of individuals into the military services who are selected through a draft, and (2) the perspectives of the military services on the military selective service processes. 
First, one of the six required reporting elements in the NDAA for FY 2017 required DOD to provide a detailed analysis of its personnel needs in the event of an emergency requiring a mass mobilization, along with a timeline for obtaining these inductees. In response, DOD provided the personnel requirements and a timeline that were developed in 1994 and that have not been updated since. These requirements state that, in the event of a draft, the first inductees are to report to a Military Entrance Processing Station within 193 days and the first 100,000 inductees are to report for service within 210 days. DOD's report states that the all-volunteer force is of adequate size and composition to meet DOD's personnel needs and that it has no operational plans that envision mobilization at a level that would require a draft. Officials stated that the personnel requirements and timeline developed in 1994 are still considered realistic. Thus, they did not conduct any additional analysis to update the plans, personnel requirements, or timelines for responding to an emergency requiring mass mobilization. Further, they said that they were limited in the amount of time that they were given to respond to the congressional mandate and that they believed it would be most helpful to produce a report that provided basic information that could serve as a starting point for the Commission to begin a more in-depth review of the military selective service process. As previously discussed, in 2012, we reported that changes in the national security environment require DOD and the services to reassess their force structure requirements, including how many and what types of units are necessary to carry out the national defense strategy. We reported that these changes represented junctures at which DOD could systematically reevaluate service personnel levels to determine whether they are consistent with strategic objectives.
As such, we recommended that DOD establish a process of periodically reevaluating DOD’s requirements for the Selective Service System in light of changing operating environments, threats, and strategic guidance. Since DOD did not perform additional analysis to reevaluate its requirements or timelines for obtaining inductees to respond to this mandate and the most recent requirements were determined based on assumptions developed in 1994, we continue to believe our 2012 recommendation is valid. An updated analysis would also benefit the Commission by informing their study and recommendations. Second, the military service officials that we met with told us that their perspectives on the selective service processes that would affect them had not been solicited in the preparation of DOD’s report. For example, while the military services are responsible for training inductees upon their mobilization and integrating them into the force, service officials expressed concerns to us regarding whether, for example, they would have the training facilities, uniforms or funding to receive, train, equip, and integrate a large influx of inductees in the event of a draft. Additionally, the services are expected to provide support to the Selective Service System during a national emergency. A 1997 memorandum of understanding between the Selective Service System and DOD indicates, among other things, that the Department of the Army will provide 1,500 enlisted Army retirees to augment the Selective Service System within 72 hours after a draft is initiated. According to officials within the Office of the Under Secretary of Defense for Personnel and Readiness-Military Personnel Policy, this memorandum of understanding was reviewed and revalidated in 2014. 
However, Army officials told us that they believed some of their service-specific procedures might require updates for identifying individuals to augment the Selective Service System's staff, especially the retired personnel who would need to be recalled to duty. They thought it would be beneficial for officials within the Office of the Secretary of Defense to conduct a thorough, top-down review and lead an update of service instructions related to supporting a draft to ensure the services are prepared to provide their share of personnel if needed. These Army officials said, however, that their higher Army headquarters saw no operational reason to review their policies and procedures related to mass mobilization given that DOD has no operational plans that envision mobilization at a level that would require a draft. As discussed previously in this report, DOD's workforce mix has been changing. For example, over the last decade, the use of unmanned aerial systems has emerged as an integral part of warfighting operations, and the demand for their use has outpaced the Air Force's ability to produce pilots to operate them. Additionally, each of the services has reported critical skill gaps in such areas as various military medical specialties. Further, challenges exist in identifying the cyber capabilities of all National Guard units, as required by law, which could be used to support a cyber-related emergency. Officials from the Office of the Under Secretary of Defense for Personnel and Readiness-Military Personnel Policy stated that critical skills identified as necessary today may not be the critical skills needed in future crises. Additionally, they said that creating and maintaining tools, such as databases of individuals with these needed critical skills, is costly and may become outdated quickly.
We agree that the requirements for critical skills will evolve over time; however, any discussion of a draft using the selective service process—as presented in DOD's July 2017 report—that focuses on specific military occupational specialties would benefit from the perspectives and input of officials from the military services. Specifically, these officials would be helpful in identifying the critical skill sets needed for their emerging mission demands and the impact a draft may have on meeting those demands. DOD officials within the Office of the Assistant Secretary of Defense for Manpower and Reserve Affairs stated that they are currently collecting the perspectives of the military services on the selective service process and plan to provide this information to the Commission. DOD officials explained that they did not incorporate information from the military services into their report because DOD's involvement in any potential decision to initiate and implement a draft is mostly centralized within the Office of the Secretary of Defense, not within the individual military services. They further stated that information regarding the level of additional personnel that would be needed using a draft in the event of a national emergency comes from the war plans that are developed and maintained by the Joint Staff. Additionally, they said that they primarily produced a report that characterized the overall processes and was a factual account of how DOD interacts with various aspects of the Selective Service System. Another provision within the NDAA for FY 2017 required the Secretary of Defense and other Cabinet-level government officials, along with any experts designated by the President, to submit to the Commission and Congress recommendations for the reform of the military selective service process not later than 7 months after the Commission's establishment date.
To accomplish this, officials from the Office of the Assistant Secretary of Defense for Manpower and Reserve Affairs said that they initially developed a questionnaire on which the Commission provided feedback. These officials stated that they sent it to 18 organizations, including the Cabinet positions listed in the act and to additional organizations that were recommended by the National Security Council or that had some role or responsibility in the event of a draft. In order to produce the Secretary of Defense’s submission, these officials further stated that they requested each of the military services and the Joint Staff to complete the questionnaire by November 2017. Further, these officials viewed the questionnaire as an opportunity for the respondents—the military services in the case of DOD—to provide their ideas regarding military selective service processes, both current and future. Agency Comments We provided a draft of this report to DOD for review and comment. In an email, the Director of Accession Policy within the Office of the Deputy Assistant Secretary of Defense for Military Personnel Policy stated that the military services concurred with the report and DOD had no additional comments. We are sending copies of this report to the appropriate congressional committees; the National Commission on Military, National, and Public Service; the Secretary of Defense; the Acting Assistant Secretary of Defense for Manpower and Reserve Affairs; the Commander, U.S. Military Entrance Processing Command; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Selective Service System. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kimberly Seay, Assistant Director; Rebecca Beale; Vincent Buquicchio; Mae Jones; Kevin Keith; Jordan Mettica; and Amber Sinclair made key contributions to this report.
Why GAO Did This Study The Military Selective Service Act established the Selective Service System whose mission, among other things, is to be prepared to provide trained and untrained manpower to DOD in the event of a national emergency when directed by the President and the Congress. In the NDAA for FY 2017, Congress included a provision requiring that DOD submit a report on the current and future need for a centralized registration system under the Military Selective Service Act. In addition, the act established a Commission to review, among other things, the military selective service process and report on it. The act also included a provision for GAO to review DOD's procedures for evaluating selective service requirements. In this report, GAO compared the information DOD included in its report with the act's required elements and identified additional information that could benefit the Commission as it further reviews the military selective service process. GAO reviewed DOD's report and the statutory elements and interviewed officials involved in the military selective service process to identify additional information that could benefit the Commission's ongoing review. What GAO Found In its July 2017 report to Congress and the National Commission on Military, National, and Public Service (i.e., “the Commission”), the Department of Defense (DOD) provided information regarding each of the six required reporting elements contained in the National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2017. Specifically, DOD provided information on: 1. the direct and indirect benefits of the military selective service system; 2. the functions performed by the Selective Service System that would be assumed by DOD in the absence of a national registration system; 3. the systems, manpower, and facilities needed by DOD to physically mobilize inductees in the absence of the Selective Service System; 4. 
the feasibility and the utility of eliminating the focus on the mass mobilization of primarily combat troops in favor of a system that focuses on the mobilization of military occupational specialties, and the extent to which such a change would impact the need for both male and female inductees; 5. DOD's personnel needs in the event of an emergency requiring mass mobilization; an analysis of any additional critical skills that would be needed in the event of a national emergency; and a timeline for when DOD would require the first inductees to report for service; and 6. a list of the assumptions used by DOD to conduct its analysis. GAO identified additional information that may benefit the Commission's ongoing evaluation of the military selective service process. The fifth reporting element required DOD to analyze its personnel needs in the event of an emergency requiring mass mobilization and a timeline for obtaining these inductees. In response, DOD provided the personnel requirements and timeline that were developed in 1994 and that have not been updated since. DOD officials stated that they did not conduct additional analysis to update these requirements because the all-volunteer force is of adequate size and composition to meet DOD's personnel needs. In 2012, GAO recommended that DOD establish a process to periodically reevaluate DOD's requirements for the Selective Service System. Although DOD concurred with this recommendation, it has not yet implemented it. GAO believes this recommendation is still valid. Updated DOD Selective Service System requirements and timelines for a potential draft may be useful in supporting the ongoing evaluation of the military selective service process by the Commission. Further, military service officials told GAO that their perspectives on the selective service processes that could affect them had not been solicited in the preparation of DOD's report.
Since the military services are to receive, train, and integrate the inductees; provide support to the Selective Service System during a national emergency; and could help identify critical skill sets needed to meet emerging demands and the impact a draft could have on meeting those demands, the military service officials' perspectives could be useful to the Commission. DOD officials stated that they are currently collecting these perspectives and plan to provide this information to the Commission. What GAO Recommends GAO is not making any new recommendations. GAO believes its 2012 recommendation to DOD to periodically reevaluate its requirements for the Selective Service System, which DOD concurred with, is still valid. DOD had no additional comments on this report.
Background This section presents information on (1) water utilities and water operators, (2) federal and state roles in overseeing and assisting water utilities, and (3) federal and state roles in workforce development. Water Utilities and Water Operators Water utilities provide drinking water and wastewater services, including drinking water treatment and distribution and wastewater collection, treatment, and discharge. Figure 1 shows the processes for treating and distributing drinking water and for collecting, treating, and discharging wastewater, which are overseen by water operators. Fresh water is pumped from wells, rivers, streams, or reservoirs to water treatment plants, where it is treated and distributed to customers. Wastewater travels through sewer pipes to wastewater treatment plants where it is treated and returned to streams, rivers, and oceans. Water utilities are organized differently depending on the city or community they serve. For example, drinking water service may be provided by one utility, and wastewater service may be provided by a separate utility, or a single utility may provide both services. Regardless of the configuration, a utility can be owned and managed by a municipality, county, independent district or authority, private company, or not-for-profit water association, among others. Utilities may serve a city and neighboring area, a county, or multiple counties. As of January 2016, there were about 52,000 drinking water and 16,000 wastewater utilities in the United States. These water utilities vary widely in the number of people they serve, but the majority of water utilities in the United States serve fewer than 10,000 people. Water utilities employ a broad range of workers, including water operators; engineers; customer service representatives; accountants; legal support; and skilled technical occupations, such as electricians, machinists, and instrument technicians.
An estimate of the total workforce at water utilities is difficult to find, but BLS reported that as of December 2016 employment in industries related to water utilities—including local government utilities (both water and energy utilities); water, sewage, and other systems; and water and sewer system construction—totaled 478,700. A study commissioned by the American Water Works Association estimated that 55 percent of water utility employees are water operators, while 20 percent work in customer service and metering and 25 percent work in administration of various kinds. The number of water operators at individual water utilities depends partly on the size of the population the utility serves. Large utilities may have dozens of water operators supported by a staff of customer service representatives, electricians, instrument technicians, machinists, and plumbers. In contrast, utilities in rural communities may have a single water operator who is sometimes tasked with additional duties. Water operators at drinking water utilities run the equipment, control the processes, and monitor the plants that treat water to make it safe to drink. Water operators at wastewater utilities do similar work to remove pollutants from domestic and industrial wastewater before it is reused or released into a receiving body of water. Many duties of water operators are technical, and water operators need knowledge, skills, and abilities in science, technology, engineering, and mathematics (STEM). The list of academic competencies described in the DOL Water and Wastewater Competency Model for employment in the drinking water and wastewater industry includes calculating averages, ratios, proportions, and rates; translating practical problems into useful mathematical expressions; and understanding biology, chemistry, and physics.
Water operators need to be able to prepare chemicals and confirm chemical strength, adjust chemical feed rates and flows, and understand software and equipment used for industrial process control, such as supervisory control and data acquisition software and systems. (See fig. 2.) Industry representatives we interviewed told us that as drinking and wastewater treatment processes become more technologically advanced, water operators increasingly will need to have more advanced technical skills. Water operators must meet specialized certification requirements, which are overseen by state regulators. A number of 2-year and 4-year colleges across the country offer programs that provide training for individuals seeking certification as water operators. For drinking water operators, regulations under the Safe Drinking Water Act establish minimum standards for certification. Each state must implement a water operator certification program that meets the requirements of these standards or that is substantially equivalent to them. The Clean Water Act does not have similar minimum requirements for wastewater operators, and certification standards are established by the states. Accordingly, there is no single standard national certification. Even though there has been an industry effort to harmonize the certification requirements across states for both drinking water and wastewater operators, reciprocity of certification between different states remains limited.

Federal and State Roles in Overseeing and Assisting Utilities

EPA regulates water utilities under the Safe Drinking Water Act and the Clean Water Act. Under the Safe Drinking Water Act, EPA establishes and enforces standards for public water systems, including drinking water utilities, that generally limit the levels of specific contaminants in drinking water that can adversely affect public health; attaining and maintaining these levels typically requires water treatment.
Under the Clean Water Act, EPA regulates discharge of pollutants from point sources such as municipal and industrial wastewater treatment plants, and stormwater discharges from industrial facilities and municipal sewer systems. EPA’s Office of Enforcement and Compliance Assurance has established national enforcement goals and works with state and tribal governments and other federal agencies to enforce the nation’s environmental laws, including the Safe Drinking Water Act and Clean Water Act. EPA authorizes most states to have primary enforcement responsibility—“primacy”—for the Safe Drinking Water Act, if the state meets certain requirements. Similarly, EPA authorizes most states to operate their own clean water discharge permitting program (also called primacy) in lieu of the federal program if the state program meets certain requirements. EPA regulations require states to have inspection programs for drinking water utilities—called sanitary surveys—to maintain their primacy. EPA regulations also require states to conduct periodic compliance inspections of wastewater utilities. These inspections support EPA’s monitoring of compliance with the Safe Drinking Water Act and Clean Water Act. EPA provides states with guidance for evaluating the utilities. Inspections of drinking water utilities include eight areas of review: water sources, treatment plants, distribution systems, finished water storage, pumping facilities, monitoring plans and treatment records, management and operations, and water operator compliance with certification requirements. The inspections also function as an opportunity for state agencies to educate drinking water operators about proper monitoring and sampling procedures and to provide technical assistance. The goal of the inspections is to ensure that the utility can supply safe drinking water. For wastewater utilities, the inspections are more narrowly focused on monitoring the utilities’ compliance with their Clean Water Act obligations.
The goals of the wastewater utility inspections include identifying and documenting noncompliance and gathering evidence to support enforcement actions. States receive federal funding for infrastructure projects and technical assistance under the Clean Water Act and Safe Drinking Water Act. EPA provides annual funding to states through its Drinking Water and Clean Water State Revolving Fund programs. States use this funding to support water infrastructure projects and to provide assistance to communities. Specifically, portions of a state’s annual EPA funding may be used for implementation of, among other things, capacity development and water operator certification programs. Under the Safe Drinking Water Act, states are required to implement water operator certification programs, and EPA is required to withhold 20 percent of a state’s Drinking Water State Revolving Funds if the state fails to do so. Under the Clean Water Act, states may use their Clean Water State Revolving Funds to provide assistance to any qualified nonprofit entity, to provide technical assistance to owners and operators of small- and medium-sized publicly owned wastewater treatment utilities to, among other things, help them achieve compliance with the act. Water utilities in rural communities also receive funding and technical assistance provided by USDA’s Rural Utilities Service. The Rural Utilities Service provides funding for drinking water and wastewater infrastructure projects in rural communities. The Rural Utilities Service is one of three agencies under Rural Development—a USDA mission area focused on improving the economy and quality of life in rural America by providing financial programs to support essential public facilities and services such as drinking water and sewer systems, housing, health care, emergency service facilities, and electric and telephone service. 
The Rural Utilities Service’s Water and Environmental Programs provide loans, grants, and loan guarantees for drinking water, sanitary sewer, solid waste, and storm drainage facilities in rural areas. The Rural Utilities Service also provides funding for technical assistance to rural water utilities through a contract with the National Rural Water Association and grants to other nonprofit organizations.

Federal and State Roles in Workforce Development

Workforce development in the United States is driven by a variety of private and public investments in workforce education and development. Under the Workforce Innovation and Opportunity Act, the federal government has programs, administered primarily by DOL and Education, that provide a combination of education and training services to help job seekers obtain employment. Through these programs, DOL provides grants to states to fund employment and training programs. Although the public workforce system receives federal funds, states may choose to add their own funding, and most of the system’s services for businesses and job seekers are delivered at the state and local levels. In implementing the Workforce Innovation and Opportunity Act, enacted in 2014, each state is to have a state-level workforce development board that develops strategies for providing outreach to individuals and employers and identifies in-demand industries. To help ensure that the workforce system focuses on regional and local economies, each state is divided into one or more workforce areas, each led by a local workforce development board. The local boards are responsible for, among other things, analyzing the employment needs of employers and the workforce development activities (including education and training) in the region. According to DOL, workforce boards are also responsible for determining how many American Job Centers are needed in their area, where these centers will be located, and how they will be operated.
There are about 2,500 American Job Centers across the United States that offer many resources under one roof. The typical center serves individuals seeking employment. Centers also work with employers to assess hiring needs; find qualified candidates, including veterans; connect to training options for new and current employees; and provide other workforce-related assistance.

Data Suggest Need for Water Operators Resembles Workforce Needs across All Occupations, but Little Is Known about Effects of Any Unmet Needs on Compliance

Data available from BLS suggest that the workforce replacement needs for water operators are similar to workforce replacement needs nationwide across all occupations. However, little information is available about the current and future effects of any unmet workforce needs on utilities’ abilities to comply with the Safe Drinking Water Act and Clean Water Act.

BLS Projections Indicate That the Replacement Needs for Water Operators Resemble the National Average for All Occupations

BLS projections suggest that the workforce replacement needs for water operators are similar to workforce replacement needs nationwide across all occupations. BLS uses survey estimates and economic models to project future employment in specific occupations; the latest such projections are for the 10-year period from 2016 through 2026. BLS intends its projections to capture the long-run trend, direction, and growth of the labor force rather than to predict precise outcomes in specific years. As of October 2017, the most recent projections indicate that the replacement needs for water operators—resulting from retirement or other separations—are relatively similar to the projected national annual average of replacement needs across all occupations (8.2 percent versus 10.9 percent, respectively). BLS projects that there will be an annual average of 9,200 job openings for water operators between 2016 and 2026.
It also projects a slight decline in overall employment for water operators because of increasing automation at water utilities; this decline contrasts with total employment across all occupations, which is projected to increase by an annual average of 1,151,850 jobs. On average, during this 10-year period, BLS projects that about 8 percent of water operator jobs will be filled by workers replacing those who are separating from the occupation, and about 92 percent will be filled by workers staying in the water operator occupation. In comparison, over the same period for workers across all occupations, a projected annual average of about 1 percent of jobs will be filled because of growth, about 11 percent by workers replacing those separating from their occupation, and about 88 percent by workers staying in their occupation from the previous year. (See fig. 3.) BLS tracks growth and workforce replacement projections for the water operator occupation, but not for water utilities; however, the water operator position is concentrated at publicly and privately owned drinking water and wastewater utilities. BLS estimates from May 2016 (the latest data set with data by type of employer) show that about 77 percent of water operators were employed by local governments—this percentage represents those employed at water utilities owned by cities and municipalities. Water, sewage, and other systems—primarily privately owned drinking water and wastewater utilities—employed another estimated 12 percent of water operators. The remaining water operators (about 11 percent) were employed in state government or in various other private industries, such as waste treatment and disposal (e.g., solid waste, among other things). BLS data indicate that the median age of water operators in 2016 was slightly higher than the national median age of the workforce across all occupations.
BLS does not collect information on tenure, retirement age, or retirement eligibility of workers; however, the 2016 Current Population Survey shows that 24.7 percent of water operators were age 55 or older, compared with 22.7 percent of the total U.S. workforce. The data also show that in 2016, the median age for water operators was 46.4, compared with the median age across all occupations of 42.2. Industry reports from 2008 to 2010 included retirement eligibility estimates of as high as 30 to 50 percent of the water utility workforce. However, industry representatives we interviewed told us that many workers postponed retirement during the recession that began in December 2007, thus reducing the industry’s hiring needs. The representatives added that retirements may increase as the overall U.S. economy continues to expand. In addition, industry representatives said that workers in the water industry tend to have a long tenure in their jobs, often working several years past the earliest age at which they meet the requirements for full retirement. In addition to water operators, larger water utilities employ a broad range of workers, including skilled workers, such as electricians and machinists, as described above. While BLS does not provide employment projections specific to water utilities for these occupations, it does provide national employment projections for these occupations that can be illustrative. The future demand for such workers—as represented by projected job growth and occupational separations rates—is shown in table 1. BLS defines the growth rate as the estimated percentage change in the projected number of jobs added or lost in a U.S. occupation or industry over a given period. 
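The decomposition described above, in which each year's filled jobs are split among growth, replacement of separating workers, and workers staying in the occupation, can be illustrated with a small calculation. This is an illustrative sketch only: the function names and worker counts below are hypothetical and are not BLS figures.

```python
# Illustrative sketch of the BLS-style decomposition described above.
# All worker counts are hypothetical; they are not BLS data.

def growth_rate(jobs_start, jobs_end):
    """Percentage change in employment over a period."""
    return (jobs_end - jobs_start) / jobs_start

def openings_shares(employment, new_jobs, separations):
    """Shares of next year's filled jobs, by source: newly created jobs,
    replacements for separating workers, and workers who stay."""
    total = employment + new_jobs
    return {
        "growth": new_jobs / total,
        "replacement": separations / total,
        "staying": (employment - separations) / total,
    }

# Hypothetical occupation: 100,000 workers, 1,000 new jobs from growth,
# and 11,000 separations (labor force exits plus occupational transfers).
print(round(growth_rate(100_000, 101_000), 3))
shares = openings_shares(100_000, 1_000, 11_000)
print({k: round(v, 2) for k, v in shares.items()})
```

With these hypothetical inputs, the shares work out to roughly 1 percent from growth, 11 percent from replacement, and 88 percent from workers staying, mirroring the all-occupations annual averages described above.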
The occupational separations rate is the sum of the projected percentage of workers exiting the labor force because of retirements or other reasons (“labor force exit rate”) and the projected percentage of workers transferring to different occupations (“occupational transfer rate”). Higher-than-average growth rates for the electrical and plumbing occupations, as well as higher occupational separations rates than the water operator occupation, suggest that the water industry will need to compete with other employers in faster-growing sectors, such as construction, for workers in these high-demand occupations.

Limited Information Is Available on Unmet Workforce Needs and Their Effects on Compliance, and EPA Has Not Prompted States to Collect Information on Future Needs

Little is known about whether unmet workforce needs are affecting water utilities’ overall abilities to comply with the Safe Drinking Water Act and Clean Water Act. At a national level, neither the water utilities’ industry associations nor EPA has analyzed whether there is a relationship between unmet workforce needs and compliance problems. Some water utility industry associations have analyzed projected employee retirement eligibility and employee turnover, but these studies did not analyze the potential effect of these retirements on utilities’ operations. The 2010 Water Sector Workforce Sustainability Initiative study sponsored by the Water Research Foundation and the American Water Works Association provides the most recent, broad industry evaluation of workforce challenges at water utilities. That study outlined projected workforce challenges caused by impending retirements and shifting demographics in the U.S. labor market, but it did not address specific operational impacts related to those retirements.
Similarly, the American Water Works Association’s annual benchmarking surveys collect data on utilities’ water and wastewater regulatory compliance rates; however, the association does not analyze whether there is a relationship between retirement eligibility and regulatory compliance. Water utilities and industry associations have some planned and ongoing work to learn more about workforce needs at water utilities. For example, representatives from one of the selected large utilities that we interviewed told us that a group of 16 large water utilities are informally working together to address workforce challenges and have proposed a major applied research project with the objectives of (1) exploring in greater depth the specific occupations, skills, and career pathways that can bridge the water sector’s looming employment gap; (2) clarifying the range of water jobs available at a regional level; (3) identifying the potential pools of labor to fill these positions; and (4) exploring new development strategies to equip workers with the skills they need. Additionally, the Water Environment and Reuse Foundation is participating in an international Workforce Skills of the Future project to analyze future work scenarios and their impact on the water sector and to develop recommendations for how the sector can prepare for and accommodate new capabilities and future skills. The utilities we interviewed had experienced compliance problems with the Safe Drinking Water and Clean Water acts and some difficulties in hiring certified water operators and other skilled workers. In our interviews with representatives of selected water utilities, the representatives reported that they had experienced some difficulties in hiring operators but that those difficulties had not had an effect on their utilities’ compliance with the Safe Drinking Water Act or Clean Water Act to date.
However, the representatives from 6 of the 11 selected utilities reported that their difficulties in replacing workers had resulted in a greater use of overtime to meet workload demands. We reviewed EPA compliance violation data for the selected utilities and found that all of the utilities had at least one violation of either the Safe Drinking Water Act or Clean Water Act within the last 10 years; however, it was not possible to determine whether workforce challenges contributed to these violations. The violations represented a range of issues, including exceeding the maximum contaminant levels in drinking water; failing to conduct regular monitoring of drinking water quality or to submit monitoring results in a timely fashion to the state agency or EPA; violating public notification requirements, which require systems to alert consumers if there is a serious problem with their drinking water; and failing to issue annual Consumer Confidence Reports. According to EPA officials, utilities may have violations for a number of reasons, including equipment breakdowns or impaired quality of source water, which makes water treatment more difficult. Because the compliance data are not specific enough to indicate the source of the problem, it was not possible for us to independently verify whether the compliance violations were linked to utilities’ difficulties in replacing workers. EPA relies on states to inspect utilities and ensure compliance with requirements under the Safe Drinking Water and Clean Water acts. EPA’s inspection guidance—for both drinking water sanitary surveys and wastewater compliance inspections—advises states to examine the adequacy of water utilities’ workforces—that is, the quality and quantity of staff operating and maintaining drinking water and wastewater facilities. EPA requires states to report some inspection information, including whether there are management issues at a utility.
EPA officials told us that, on the basis of their conversations with state regulators, they believe states are collecting information about workforce adequacy during state inspections of drinking water utilities. For wastewater utilities, EPA officials stated that in the course of conducting an on-site inspection, inspectors will ask plant managers and staff questions about staffing and should note concerns in their inspection reports. EPA officials said that collecting workforce information at the state level is beneficial for the states and the drinking water utilities so that they can take steps to implement strategies to address the utilities’ workforce needs. The officials said state regulators can find patterns in utilities’ compliance reporting data that alert them to the likelihood that a utility is experiencing operational issues, such as losing a certified water operator. In those instances, an EPA official told us, state regulators work with the utility to help identify solutions, such as locating a nearby water operator who can contract with the utility on a part-time basis until it can hire a permanent water operator. EPA officials further stated that they believe state regulators are using the workforce information to help build capacity at drinking water utilities and prioritize training. However, the EPA inspection guidance that states currently use in conducting sanitary surveys for drinking water utilities and compliance inspections of wastewater utilities outlines criteria for evaluating existing workforce issues but does not address workforce issues that could affect utility operations, and potentially compliance, in the future. The guidance contains suggested assessment criteria that focus on whether there is an adequate number of qualified staff in the existing workforce to perform the work required. 
For example, the guidance for drinking water utilities states that the utility should have enough personnel to enable continuous operation of the treatment plant at all times and that staff should be able to perform operations and maintenance tasks regularly with little or no overtime hours. The inspection guidance does not contain similar questions that focus on whether there will be an adequate number of qualified staff in the future workforce to perform the work required. According to our December 2003 report, strategic workforce planning focuses on developing long-term strategies for acquiring, developing, and retaining an organization’s total workforce to meet the needs of the future. In that report, we stated that while agencies’ approaches to workforce planning will vary, there are five key principles that strategic workforce planning should address irrespective of the context in which the planning is done. These principles include: determining the critical skills and competencies that will be needed to achieve current and future programmatic results, and developing strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. According to our interviews with selected utilities, five of the six large utilities had conducted workforce planning, while none of the small utilities had conducted such planning. By amending its inspection guidance with questions on strategic workforce planning—such as any potential gaps in critical skills and strategies to address any gaps in the number of water operator positions to meet the needs of the future—EPA could ensure that such information is available for states to assess future water utility workforce needs. Information on future workforce needs could help the states and water utilities identify potential workforce issues and take action as needed.
EPA officials told us that they have not considered amending inspection questions but have heard that future workforce issues are a concern to the states and the industry; they said that making such changes could be helpful to develop workforce strategies that address the specific needs of a state or regional area.

Selected Utilities Managed Their Workforce Needs Using a Mix of Approaches but Reported Ongoing Challenges Hiring Water Operators and Other Skilled Workers

Representatives from selected utilities that we interviewed reported using a mix of approaches to meet their workforce needs. However, the selected large utilities reported ongoing hiring challenges with skilled technical workers such as machinists, electricians, and pipefitters.

Selected Utilities Used a Mix of Approaches to Manage their Workforce Needs

The representatives from the selected utilities reported that by using various approaches, they were generally able to hire water operators, but they faced some challenges in doing so. The number of water operator vacancies at each of the six selected large utilities in the spring of 2017, as reported by the utilities’ representatives, ranged from 2 to 60, representing a range of about 2 to 15 percent of the utilities’ water operator workforces. Only one of the five selected small utilities had a water operator vacancy in the spring of 2017. That utility had 1 vacancy among its workforce of 44 water operator positions. When we asked representatives of selected large utilities for their top three recruitment approaches for water operators, their responses included advertisements on their own websites, “word of mouth,” advertising with professional water organizations, partnering with a local technical college, and use of general-purpose websites (not owned by the utility or a professional water organization).
Similarly, responses from representatives of selected small utilities included “word of mouth,” local newspapers, advertisements through professional water organizations, advertisements on general-purpose employment websites, and outreach to the local veterans’ office. We also asked the representatives of the selected large and small utilities about various water operator recruitment approaches described to us by association representatives or noted in industry publications. These approaches included recruiting from other states, working with local workforce boards and American Job Centers, establishing formal apprenticeships, reaching out to recruit veterans, and partnering with local technical and postsecondary schools. The representatives of large utilities reported that they used some of the approaches to varying degrees, but none of these representatives reported using all of them. The two most commonly used approaches of the selected large utilities (4 of the 6) were a partnership with one or more local community colleges to offer water treatment education, followed by reaching out to recruit veterans (3 of the 6). Representatives of one of the large utilities said they also had a partnership with a high school. Representatives of another large utility indicated that although they had access to local trade schools, the schools did not provide good candidates for the utility’s jobs. Representatives of one large utility said that the utility recruits out of state to find water operators with at least a minimum set of qualifications and a license because it lacks a local pool of water operators from which it can recruit. However, representatives of another large utility indicated that many water operators do not like to move from one state to another, and therefore it is difficult to recruit in other states.
Representatives of the selected large utilities were divided about whether a national standard certification for water operators would help with worker availability or recruiting. For example, one utility’s representatives said that a national standard certification would not help in recruitment, while representatives of another indicated that a uniform, transferable skill set, as represented by a national certification, would be helpful. Representatives of the five selected small utilities reported that they generally had not used the various recruiting approaches about which we inquired. For example, according to their representatives, none of the small utilities recruited out-of-state water operators, in part because they preferred to recruit locally or they would not be able to attract such water operators with the relatively low compensation they could offer. In contrast to larger utilities, representatives of four of the selected small utilities told us they did not have a partnership with a trade school or a community college to offer water treatment education for various reasons, including filling key needs elsewhere and a lack of focus on water education at the technical college.

Selected Utilities Reported Ongoing Challenges Hiring Water Operators and Other Skilled Workers

Selected utilities reported ongoing challenges hiring water operators and other skilled workers. Representatives of all six selected large utilities told us that they had attempted to hire water operators during the past 5 years and, with one exception, they described hiring water operators as “somewhat difficult.” Reasons they described for this difficulty included a lack of candidates with a STEM background, a distaste for shift work among younger employees, the lack of a local pool of candidates, and low pay.
Representatives of three of the selected large utilities said hiring to replace departing water operators had been a problem in the past, but there was no consensus among the three on whether the problem was increasing, decreasing, or staying about the same. The utility that indicated the problem was decreasing cited two steps it had taken to address it: expanding its geographical search and improving its internal training program. Five of the selected large utilities reported that replacing retiring water operators was not currently a problem, but three of them indicated that it could become one over the next 5 years for reasons such as water operators having to perform rotating shift work and fewer qualified candidates than in the past. The number of water operators eligible to retire over the next 5 years, compared with the total number of water operator positions in the six large utilities, ranged from a low of 100 out of 507 (about 20 percent) to a high of 68 out of 136 (50 percent), the representatives told us. Representatives of selected small utilities generally reported challenges recruiting and hiring certified water operators. Representatives of four of the five selected small utilities noted that replacing retiring water operators could become a problem over the next 5 years; these representatives often cited an inability to compete with larger utilities on compensation for certified water operators, in particular. Some representatives told us that, although they would have preferred to hire certified water operators for some of their vacancies, they often decided to hire and train an entry-level person, for whom there was less competition regarding compensation. Small utilities were roughly split regarding whether retirements had increased or remained about the same.
Representatives of two small utilities told us that, over the past 5 years, the number of water operators retiring each year increased, but representatives of the other three reported that the number remained about the same. Representatives of two small utilities told us they have no water operators eligible for retirement during the next 5 years, and representatives of the other three small utilities reported that the number of water operators eligible to retire compared to the total number of water operator positions was, respectively, 2 of 6, 3 of 8, and 4 of 44. A representative of only one of the five small utilities reported difficulties recruiting skilled workers in positions other than water operators; those were administrative and bookkeeping positions.

Skilled Technical Occupations

Considerable attention has been given in recent years to the question of whether the U.S. economy has a shortage of workers in skilled technical occupations—occupations that require a high level of knowledge in a technical area but do not require a 4-year college degree. The National Academies of Sciences, Engineering, and Medicine convened a committee to examine the coverage, effectiveness, flexibility, and coordination of the policies and programs that prepare Americans for skilled technical jobs. The committee organized a national symposium, held in June 2015, bringing together researchers, industry representatives, policymakers, and other stakeholders involved in technical workforce education and training.
The committee’s report, issued in 2017, contained many findings including: (1) the United States is experiencing, and will continue to experience, imbalances in the supply of and demand for skilled technical workers in certain occupations, industry sectors, and locations; (2) the nature of the problem differs across sectors and locations; (3) these imbalances arise from multiple sources; (4) the evidence suggests that, as a nation, the United States is not adequately developing and sustaining a workforce with the skills needed to compete in the 21st century. Representatives of the selected large utilities reported that, outside of water operators, the positions most difficult to fill are for other skilled workers such as machinists; electricians; pipefitters (also called “steamfitters”); and heating, ventilating, and air conditioning mechanics. The representatives of those utilities said that, in their experience, the number of young adults interested in the skilled technical occupations is decreasing. A representative of one small utility noted that it is difficult for trade schools and community colleges to offer courses in occupations for which student interest is declining. Because of projected reductions in the supply of such workers as the “baby boom” generation continues to retire over the next decade, the drinking water and wastewater industry has been one of many that have cited the “skills gap” and the need for a “pipeline” of future workers as developing problems as they attempt to fill vacancies caused by retirements. Representatives of some of the large utilities and industry associations we interviewed said that there are difficulties in filling certain skilled worker positions, particularly when local economic factors—including competition from other sectors such as construction—make it difficult to hire skilled technical workers if the local economy is near or at full employment. 
Key Federal Programs Have Several Ways to Assist Utilities with Workforce Needs, and Selected Utilities Accessed Some of Those Programs The five federal agencies we reviewed—EPA, USDA, Education, DOL, and VA—have programs that can assist utilities with their workforce needs in several ways, including through guidance, funding, and training. The selected utilities that we interviewed accessed federal programs to help meet their workforce needs in some instances. Five Federal Agencies Have Key Programs That Can Provide Utilities with Guidance, Funding, and Training to Help Meet Workforce Needs Key programs in EPA, USDA, Education, DOL, and VA can assist utilities with workforce needs in the ways described below. EPA has several programs that can provide funding, through the states, for technical assistance to help water utilities meet their workforce needs. For example, EPA’s national Training and Technical Assistance for Small Systems competitive grant provides, on average, $12 million per year to give managerial and financial training to utilities, particularly small utilities. Additionally, officials stated that between 1997 and 2012, EPA provided $134 million to help utilities train their water operator workforce and enable their water operators to gain certification through the Operator Certification Expense Reimbursement Grants program; however, this program ended in 2012. EPA’s Public Water System Supervision Grant program provides grants to states for activities to implement drinking water regulations—activities that have included providing technical assistance to utilities, such as training to operators to ensure they are knowledgeable about the best operation and treatment practices. In addition, states may use up to 10 percent of the funding they receive for the Drinking Water State Revolving Fund allotment for specified program management activities, including the development and implementation of water operator certification programs. 
In addition to funding technical assistance, EPA has assisted in efforts to attract new employees to the drinking water and wastewater industry. For example, in 2010 EPA partnered with the American Water Works Association and the Water Environment Federation to highlight the need for qualified professionals to enter the drinking water and wastewater industry. As part of those efforts, EPA produced a set of videos called “Water You Waiting For?” to encourage high school and vocational technical school students to consider employment in the industry. EPA officials also told us that based on industry requests, EPA has taken the lead in coordinating with other federal agencies to help develop a pool of potential certified water operators. EPA has also collaborated with DOL, USDA, and VA to assist drinking water and wastewater utilities in meeting their workforce replacement needs. For example, in 2009, EPA worked with DOL and industry groups to develop a competency model for the water sector, which was updated in 2016. The model defines the necessary knowledge, skills, and abilities for prospective water professionals. Educational institutions and industry can use the model to encourage prospective job seekers to consider a career in the water and wastewater industry by helping them develop career pathways and associated training and career advancement strategies that meet industry skill needs. EPA has also entered into memorandums of understanding with USDA and VA, as discussed below. In 2011, USDA and EPA signed a memorandum of agreement to support a series of activities to help small water and wastewater systems face the challenges of aging infrastructure, increased regulatory requirements, workforce shortages, increasing costs, and declining rate bases. Part of that agreement focused on the water industry workforce.
Among other things, USDA and EPA agreed to develop strategies for overcoming challenges specific to recruitment and retention of small utility water operators and to promote the use of contract water operators to fill workforce gaps in rural communities. As part of this effort, USDA and EPA also agreed to focus on the sustainability of rural utilities by coordinating activities and financial assistance resources to increase the technical, managerial, and financial capacity of rural drinking water and wastewater systems nationwide. This resulted in the development of a training workshop, the Sustainable Rural and Small Utility Management Initiative—”Workshop in a Box”—that covers a variety of topics, including some related to evaluating workforce needs. USDA reported that in fiscal year 2016, the technical assistance providers conducted more than 100 workshops, with at least one in each of the 50 states and Puerto Rico. USDA’s Rural Utilities Service provides technical assistance to small rural utilities through two programs: Technical Assistance and Training grants and the Circuit Rider program. The Technical Assistance and Training grants provide funds to private nonprofit organizations to help communities with water or wastewater systems by providing free technical assistance and training for rural water operators, other water utility staff and managers, and water utility board members. In fiscal year 2016, 24 nonprofit organizations received funding totaling about $20 million to provide technical assistance to rural water utilities. In addition, under the Circuit Rider program, the Rural Utilities Service contracts with the National Rural Water Association to provide staff in each of the 50 states who offer technical assistance on day-to-day operational, managerial, and financial issues. 
Specifically, according to the National Rural Water Association, staff known as “circuit riders” work on site with rural water utility personnel to troubleshoot problems, evaluate alternative technological solutions, recommend operational improvements, assist with leak detection, respond to natural disasters and other emergencies, and provide hands-on training, among other things. In fiscal year 2016, USDA provided about $16 million for the Circuit Rider program. DOL provides funding to states to operate the public workforce system under the Workforce Innovation and Opportunity Act. Under this act, DOL funds American Job Centers, where potential employees can seek information on job openings. Employers, such as industries or utilities, can notify the centers of the need for applicants, and the centers can then refer potential applicants to the industry. In addition, if requested to do so by industry associations or companies, DOL can work with them to develop registered apprenticeship programs through DOL’s Office of Apprenticeship. As of September 2017, 24 water utilities across the country were training new employees through registered apprenticeships that combined structured learning and on-the-job training with an assigned mentor. (See app. II for a list of apprenticeships in the water industry that are registered with DOL’s Office of Apprenticeship.) In addition, the National Rural Water Association recently developed a registered apprenticeship program for rural utilities. According to DOL officials, the program began in Indiana on August 10, 2017, and as of September 7, 2017, two additional states—California and Colorado—were expected to join the apprenticeship program. In addition to funding under the Workforce Innovation and Opportunity Act, between 2011 and 2014, DOL awarded $1.9 billion in capacity-building grants to community colleges through the Trade Adjustment Assistance Community College and Career Training grant program.
Grantees identified in-demand industries and sectors in their proposals and were required to partner with workforce boards. At least seven grantee colleges proposed to develop or upgrade programs of study related to water and wastewater utilities. For example, Salina Area Technical College (Kansas) developed an environmental technology associate’s degree program focusing on water quality and wastewater treatment management. Education Through multiple grant programs, Education provides funding for states and community and technical colleges, including a number of community and technical colleges that offer programs to prepare individuals for careers in the drinking water and wastewater industry. Examples of such colleges include Kirkwood Community College (Iowa), Moraine Park Technical College (Wisconsin), and Bay College (Michigan). According to agency documentation, three funding mechanisms can be used to meet the training and employment needs of the water and wastewater industry: Funding under the Perkins Act is available for state agencies and eligible local educational agencies and postsecondary education providers. Funding under the Adult Education and Family Literacy Act is available to state agencies and eligible providers for, among other things, integrated education and training, which is a service approach that provides adult education and literacy activities concurrently and contextually with workforce preparation activities and workforce training for specific occupations. The Rehabilitation Act of 1973 provides funding for training and job placement services for individuals with disabilities through state vocational rehabilitation agencies. According to agency documentation, from fiscal years 2013 through 2016, 40 to 50 participants in this program per year nationwide obtained employment as operators in the drinking water and wastewater industry.
The Perkins Act provides funds to improve career and technical education for secondary and postsecondary education students who elect to enroll in career and technical education programs. The Adult Education and Family Literacy Act provides funds to states, which grant these funds to eligible providers to assist adults in, among other things, becoming literate or achieving proficiency in English, obtaining the knowledge and skills necessary for employment and self-sufficiency, and completing a secondary school education. The Rehabilitation Act of 1973 provides funding to states for vocational rehabilitation services, such as counseling, job training, and job search assistance to eligible individuals with disabilities, with emphasis on individuals with significant disabilities. These programs are delivered through American Job Centers. EPA and VA signed a memorandum of understanding to improve employment opportunities for veterans with disabilities. According to the memorandum of understanding, veterans represent a major recruiting opportunity for water utilities. Prior military experience gives veterans an understanding of teamwork, discipline, and personal accountability that can make them excellent employees in these fields. In addition, many veterans already have technical skills and training that are directly transferable to careers in the drinking water and wastewater industry. EPA also worked with VA to create Military Occupational Specialty equivalent job descriptions for water-related military jobs to show how they equate to civilian water utility jobs. Under the memorandum with EPA, VA receives referrals of open positions from the water and wastewater industry and disseminates the information to disabled veterans who are looking for jobs. According to a VA official, over the past 5 years, VA estimates that it shared nearly 5,500 water utility job leads with its 56 regional offices and the National Capital Region Benefits Office.
VA tracks the number of disabled veterans who have been rehabilitated to employment, but it does not track the number of disabled veterans who take jobs at water utilities. Selected Water Utilities Accessed Some Federal Programs to Help Meet Workforce Needs Strategies under the Workforce Innovation and Opportunity Act In implementing the Workforce Innovation and Opportunity Act, states are required to incorporate specified strategies in their state plans, including the following: Career pathways strategies help job seekers obtain education and job experience leading to a career. Career pathways strategies align and integrate education, job training, counseling, and support services to help individuals obtain postsecondary education credentials and employment in in-demand occupations. Sector partnership strategies engage related groups of stakeholders (including employers) in the workforce system. Such strategies organize multiple employers and key stakeholders, such as education and training programs, in a particular industry into a working group that focuses on the shared goals and human resources needs of that industry. Industry recommendations for meeting water utility workforce needs have included working with DOL to identify standard workforce competencies and working with workforce investment boards in each state to integrate and fund training initiatives for the water utility industry, and working with Education to develop training requirements for the water utility industry. Representatives of the American Water Works Association told us that they had not provided tools or outreach to utilities to help them act on some of these recommendations, such as working with local workforce investment boards. In our interviews with selected utilities, we found variation in whether the utilities had accessed federal programs to help meet their workforce needs.
Programs for Rural Utilities (USDA) Representatives from four of the selected small utilities we interviewed said they use training programs offered by the National Rural Water Association to train the water operators they hire. A representative from one small utility stated that his utility needed the National Rural Water Association to provide ongoing training for new operators. The representative also stated that the National Rural Water Association’s circuit riders helped the utility resolve problems that arose, which precluded the need for the utility to pay for expensive private services. Circuit riders can help small utilities resolve a range of problems, including assisting with leak detection and responding to natural disasters and other emergencies. American Job Centers (DOL) Representatives from two of the selected large utilities and two of the selected small utilities told us that they had used the American Job Centers to recruit potential workers. Representatives of those utilities described differing experiences in using their local job centers, with representatives from one large utility stating that the job center was a good resource for them, while representatives from another large utility stated that they were not able to find the type of candidates they wanted (such as those with a STEM background). Representatives of other selected utilities stated that they have not used the centers either because they were not familiar with the centers’ services or because they did not believe that using the job centers would be beneficial for them. Sector Partnership Strategy (DOL) Get Into Water! The Colorado Department of Labor and Employment and the Colorado Workforce Development Council jointly awarded funding to plan a sector partnership strategy for the drinking water and wastewater industry. The funding provided by Colorado was part of federal Workforce Investment Act funds provided to the state for sector partnership strategies.
The initiative, called “Get Into Water!” involved four counties in the Denver metro region. Although the drinking water and wastewater industry was not among the top three industries in those counties, a study of the region’s drinking water and wastewater utilities identified workforce challenges and opportunities in the region. The initiative, which was active between 2009 and 2011, developed entry-level training courses to introduce high school students and adults to career opportunities in the drinking water and wastewater industry. One of the programs that was developed—at Emily Griffith Technical College—remains active after the conclusion of the initiative. One of the selected large utilities that we interviewed was involved in the “Get Into Water!” sector partnership strategy funded by the Colorado Department of Labor and Employment and the Colorado Workforce Development Council. None of the other selected large or small utilities reported taking part in a federally funded sector partnership strategy. Registered Apprenticeships (DOL) One of the selected large utilities we interviewed used DOL’s registered apprenticeship program as a way to recruit and hire water operators. It also used the apprenticeship program for plumbers. None of the other selected large utilities had registered apprenticeship programs for water operators. Representatives from some of the selected large utilities stated that they did not use registered apprenticeships because of the expense of meeting the apprenticeship rules—particularly having to pay almost the market rate to an apprentice, who may not be fully productive for the first few years on the job. Representatives from some of the selected small utilities stated that they did not need an apprenticeship program because of their small size or lack of openings.
Initiatives to Employ Veterans (VA, DOL) The selected utilities used various methods to recruit veterans, including working with state and local veterans offices, job fairs, and coordinating with local military installations. Four of the selected large and small utilities we interviewed sought to hire veterans, but none of them sought employees through the VA’s disabled veterans program. DOL noted that American Job Centers offer additional ways to recruit and hire veterans, including the Jobs for Veterans State Grants program, which funds Disabled Veteran Outreach Program specialists and Local Veterans’ Employment Representatives. Representatives from one of the large utilities stated that although it did not have a program specifically for recruiting veterans, it periodically sent its employees to talk to groups of veterans about the nature of its work and how to navigate the civil service hiring process. Conclusions Having an adequate number of trained and qualified employees, particularly water operators, is key to the safe operation of the nation’s water utilities. Water utilities face an upcoming wave of retiring baby boomers, similar to other industries in the economy. Federal programs offer many resources that, if accessed, have the capability to support and supplement—but not replace—utilities’ individual and collective efforts to recruit for difficult-to-fill positions. EPA has coordinated efforts with DOL and other federal agencies that can help utilities and industry groups identify ways for utilities to access federal programs. EPA’s inspection guidance documents recognize the importance of utilities having an adequate number of capable and qualified staff, and state regulators appear to be capturing some information on utilities’ existing workforce capacity and using this information to target technical assistance to utilities in need. 
However, EPA’s inspection guidance to states does not address future workforce issues that may affect utility operations. By adding questions to its inspection guidance documents on strategic workforce planning—such as the number of positions needed in the future, skills needed in the future, and any potential gaps in water operator positions—EPA could help ensure this information is available for states to assess future workforce needs. Information on future workforce needs could help states and utilities identify potential workforce issues and take action as needed. Recommendation for Executive Action The Assistant Administrator for Water should direct EPA’s Office of Water to amend its Safe Drinking Water Act and Clean Water Act inspection guidance documents to add questions on strategic workforce planning topics—such as the number of positions needed in the future, skills needed in the future, and any potential gaps in water operator positions. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this product to EPA, USDA, Education, DOL, and VA for comment. Education, DOL, and VA provided technical comments, which we incorporated as appropriate. In a written response, USDA indicated that it did not have comments and generally agreed with the report findings and content. EPA provided written comments, reproduced in appendix III, in which it generally agreed with our findings and provided comments regarding the conclusions and recommendation. While EPA generally agreed with our findings, the agency stated that the report does not highlight some factors that differentiate water and wastewater sector workforce needs from the workforce needs of all occupations. EPA stated that, for example, the location of the drinking water system or wastewater treatment plant can significantly impact the owner’s ability to recruit and retain certified operators.
We examined workforce needs in terms of projected growth and occupational separations rates as reported by BLS. We did not specifically assess the impact of geographic location. However, in our discussion of responses from selected small utilities, we outline some of the particular challenges facing small water utilities, which are typically located in more rural areas. We describe, for example, that representatives of small utilities often cited an inability to compete with larger utilities on compensation for certified water operators. With regard to our recommendation, EPA stated that it generally agrees with the recommendation with respect to sanitary surveys of public water systems. It further stated that EPA’s Office of Ground Water and Drinking Water is in the process of updating the sanitary survey guidance manual How to Conduct a Sanitary Survey of Drinking Water Systems – A Learner’s Guide. EPA noted that it will add questions related to workforce needs to the “Utility Management” section and anticipates finalizing the update by the summer of 2018. For compliance monitoring inspections under the Clean Water Act National Pollutant Discharge Elimination System (NPDES) program, EPA did not agree or disagree with the recommendation, but stated that inspectors may be limited in the information related to workforce planning they can assess and provide because there is no corollary to the Water System Management and Operation element of sanitary surveys in the NPDES compliance inspections. EPA stated that where the agency identifies studies or documents on adequate staffing of wastewater facilities, its Office of Enforcement and Compliance Assurance will incorporate that information into its existing guidance documents for inspectors.
While we recognize that the sanitary surveys and NPDES compliance inspections have different goals, as we noted in the report, inspectors currently ask plant managers and staff questions about staffing, and we believe that there is an opportunity to ask additional questions about future staffing needs. In addition, we note that EPA already highlights the need for adequate staff in its compliance inspection guidance. By amending the compliance inspection guidance to instruct inspectors to also ask about future workforce issues, EPA would be emphasizing the fact that ensuring a trained workforce and continuity of operations is important for complying with NPDES permits. We are sending copies of this report to the appropriate congressional committees, the Administrator of EPA, the Secretary of Agriculture, the Secretary of Education, the Secretary of Labor, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-3841, [email protected] or (202) 512-7215, [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) what is known about workforce needs at water utilities compared with workforce needs nationwide and any effects of potential unmet workforce needs on the utilities’ abilities to comply with the Safe Drinking Water Act and the Clean Water Act; (2) what approaches selected water utilities have used to manage their workforce needs and challenges they have faced in managing those needs; and (3) in what ways, if any, key federal programs can assist water utilities with their workforce needs. 
To examine what is known about workforce needs at water utilities compared with workforce needs nationwide, we assessed and summarized data on workforce replacement rates provided by the Department of Labor’s Bureau of Labor Statistics (BLS) and examined projected retirement rates provided by industry studies. We focused on projections of workforce turnover from 2016 to 2026 and estimates of employee retirement eligibility published from 2008 to 2016, the most recent data available to us. Because BLS estimates of workforce replacement needs do not distinguish between workers who retire and workers who permanently leave an occupation for other reasons, it was not possible to isolate retirements from other separations. We identified two relevant BLS survey programs—the Occupational Employment Statistics program (May 2016) and the Current Population Survey (2016)—and one BLS projection program, the Employment Projection Program (2016-2026). To assess the reliability of BLS survey data, we reviewed relevant documentation and information from BLS staff for the most recent data available for the two relevant BLS survey programs. Through the Occupational Employment Statistics program, BLS conducts a mail survey in May and November of each year to collect data on wage and salary workers in nonfarm establishments. It uses these data to produce employment and wage estimates for about 800 occupations. BLS publishes relative standard errors to account for sampling errors in Occupational Employment Statistics survey estimates. All Occupational Employment Statistics estimates in this report are presented along with their 95 percent confidence intervals. The Current Population Survey is a monthly survey of households conducted by the U.S. Census Bureau for BLS. It is a sample survey of 60,000 eligible households representing the civilian noninstitutional population ages 16 and older in the 50 states and the District of Columbia.
The basic monthly survey gathers demographic characteristics of people in each sampled household and information to determine whether they are employed, unemployed, or not in the labor force. The survey collects information on workers’ occupations and ages. The Current Population Survey estimates presented in this report are subject to sampling error. To account for this error, we present all Current Population Survey estimates in this report along with their 95 percent confidence intervals. Data that would allow us to calculate true sampling errors were not specifically provided by the Current Population Survey. Instead, we followed Current Population Survey guidance to estimate sampling errors. We used generalized variance functions, parameters, and factors published by the Current Population Survey to calculate approximate standard errors and confidence intervals. As a result, the confidence intervals presented in this report provide a general order of magnitude and are approximations of the true sampling errors. To assess the reliability of BLS projections, we reviewed relevant documentation and information from BLS staff for the most recent projections available and reviewed the BLS employment projections in the Occupational Outlook Handbook. The Handbook includes employment projections developed by BLS’ Employment Projections program; BLS develops its projections from statistical and econometric models, combined with subjective analysis, and designs these projections to provide a focused analysis of long-term trends based on a set of assumptions. The models and analyses BLS uses to develop the projections assume historical relationships and behavior will continue to hold over the projection period; however, there is inherent uncertainty about whether historical trends will continue into the future. 
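The generalized-variance-function calculation described above can be sketched in a few lines of code; the `a` and `b` parameters and the example estimate below are hypothetical placeholders for illustration, not actual CPS-published values, which must be taken from the CPS technical documentation for the characteristic being estimated:

```python
import math

def cps_approx_ci(estimate, a, b, z=1.96):
    """Approximate a 95 percent confidence interval for a CPS estimate.

    Uses a generalized variance function of the form
    Var(x) ~= a*x^2 + b*x, the functional form the CPS documentation
    publishes for approximating sampling variances. The parameters a
    and b are placeholders here and vary by characteristic estimated.
    """
    variance = a * estimate ** 2 + b * estimate
    standard_error = math.sqrt(max(variance, 0.0))
    return estimate - z * standard_error, estimate + z * standard_error

# Hypothetical example: an estimate of 100,000 workers with
# placeholder GVF parameters (not real CPS values).
low, high = cps_approx_ci(100_000, a=-0.0000035, b=3000)
```

Because the generalized variance function only approximates sampling variance, intervals computed this way indicate a general order of magnitude of the sampling error rather than its exact value, consistent with the caveat above.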
BLS employment projections rely on assumptions about demographics, fiscal policy (including tax policies and government spending), and macroeconomic conditions over the 10-year projection period. For example, the BLS projections assume that the economy will be at full employment in the last year of the period (e.g., 2026). BLS notes, however, that fluctuations in the business cycle are not foreseeable over a decade. Therefore, BLS employment projections should be considered as likely outcomes, but subject to the accuracy of the underlying assumptions. We determined that the BLS survey and projection data were sufficiently reliable for purposes of our objective. To determine what data and information were available on workforce needs from industry, we reviewed reports and interviewed officials from industry associations, including the American Water Works Association, the Water Environment Federation, the National Rural Water Association, the Rural Community Assistance Partnership, the National Association of Clean Water Agencies, and the National Association of Water Companies. We identified a number of relevant industry studies, including three surveys published by the American Water Works Association between 2015 and 2017: the 2016 State of the Water Industry Report, Benchmarking Performance Indicators Water and Wastewater: 2015 Survey Data and Analyses Report, and Benchmarking Performance Indicators Water and Wastewater: 2013 Survey Data and Analyses Report. To assess the reliability of the industry studies, we reviewed their scope and methodology. We determined that although the industry estimates were not generalizable, the studies were sufficiently reliable for illustrating industry perspectives on workforce planning. 
To review the effects of potential unmet workforce needs on water utilities’ abilities to comply with the Safe Drinking Water Act and the Clean Water Act, we selected a sample of 11 water utilities—6 large and 5 small—based on geography, size, and indications of hiring challenges in the past. We included both large and small utilities in our selection based on our initial interviews with industry representatives that suggested that large utilities and small utilities experienced different challenges. To select the large utilities, we compiled a list of cities that were mentioned in interviews and other communications with industry groups, and in EPA documents, as experiencing difficulty replacing retiring workers or having put in place programs to train and recruit new workers. We then divided the list of cities geographically among the four Census regions—West, Midwest, Northeast, and South—and tallied the number of times each city was mentioned. In the West and South regions, we selected the city with the greatest number of mentions. In the Midwest and Northeast regions, each of the cities had only one mention, so we selected the largest city within each region. For each of these four cities we then identified the drinking water and wastewater utilities for the city. One of the cities had separate drinking water and wastewater utilities, while the other three cities had one utility that provided both drinking water and wastewater services. We also included the water utility for a fifth city because early in our research we conducted a site visit to that city and conducted an interview with the local water utility. To select the small utilities, we reached out to the National Rural Water Association and the Rural Community Assistance Partnership for suggestions on utilities to interview. The National Rural Water Association provided us with a list of 10 small water and wastewater utilities from 6 states. We divided the list of cities among the four Census regions. 
In the West region, one utility was recommended. For the Midwest, Northeast, and South regions, we selected utilities from cities with populations of less than 10,000. In the South region, we selected a second city in order to bring the total number of small utilities up to five. One of the small utilities that we contacted was not able to participate in an interview with us but instead referred us to a nearby utility. That utility served a population of less than 30,000, which for the purposes of this report we included in the category of small water utilities. Table 2 shows the locations and sizes of the 11 utilities we interviewed. We asked officials of the selected utilities whether workforce challenges had affected their abilities to comply with the Safe Drinking Water Act and the Clean Water Act at their utilities or whether they anticipated such effects in the future. The information from those interviews is not generalizable to the national population of water utilities; it was intended to provide illustrative examples of any difficulties water utilities were experiencing in complying with the Safe Drinking Water Act and the Clean Water Act that they attributed to workforce challenges. We also obtained EPA data on compliance with the Safe Drinking Water Act and the Clean Water Act for the selected utilities. We have previously reviewed the quality of EPA compliance data for the Safe Drinking Water Act. Specifically, we have interviewed EPA officials and reviewed EPA data reliability assessments, a 2017 OIG report on the reliability of data in EPA's Safe Drinking Water Information System (SDWIS), data verification reports, and our past reports on the reliability of the data in SDWIS. According to these recent EPA assessments, the EPA OIG report, and our January 2006 and June 2011 reports, some of the data in SDWIS are not complete. We also interviewed an EPA official and reviewed documentation on compliance data for the Clean Water Act.
We determined that although the data are incomplete, they were useful for providing a rough indication of whether selected water utilities had any Safe Drinking Water Act or Clean Water Act compliance violations over the past 10 years (between 2007 and 2016). To describe the approaches that selected water utilities have used to manage their workforce needs and the challenges they have faced in managing those needs over the past 5 years (from 2012 through 2016), we spoke with utility officials, during the interviews described above, to learn about their hiring and retirement numbers, challenges in managing workforce needs, and approaches for hiring staff. The information from those interviews is not generalizable to the national population of water utilities; it was intended to provide illustrative examples of the approaches water utilities used to manage their workforce needs and the challenges they faced in doing so. To describe how key federal programs can assist water utilities with their workforce needs, we conducted background research and initial interviews with federal officials. We identified five federal agencies that conduct activities or provide funding related to the water utility workforce: EPA, USDA, Education, DOL, and VA. We interviewed officials with these agencies about current or past federal programs and policies related to water utilities' workforce needs. We did not attempt to identify all programs that can provide assistance to water utilities for workforce planning or recruitment, but we determined, based on interviews at the five federal agencies, that we had identified the programs for which these activities were a primary purpose or likely use.
Additionally, we interviewed representatives from the selected utilities we contacted to determine whether and how they had used various federal programs or assistance to augment other planning and recruitment strategies and what problems, if any, they had in using the programs. The information from those interviews is not generalizable to the national population of water utilities but provides illustrative examples of how, if at all, water utilities are using federal programs to help with workforce planning and recruitment. We conducted this performance audit from September 2016 to January 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: List of Federally Registered Apprenticeships at Drinking Water and Wastewater Utilities

Table 3 provides a list of apprenticeships in the water industry that are registered with the Department of Labor's (DOL) Office of Apprenticeship. As of September 7, 2017, DOL reported that 24 utilities across the country were training new employees through registered apprenticeships that combined structured learning with on-the-job training with an assigned mentor.

Appendix III: Comments from the Environmental Protection Agency

Appendix IV: GAO Contacts and Staff Acknowledgments

Staff Acknowledgments

In addition to the contacts named above, Susan Iott (Assistant Director), Betty Ward-Zukerman (Assistant Director), Darnita Akers, Mark Braza, Caitlin Cusati, Alex Galuten, Tom Gilbert, Gina Hoover, Rich Johnson, Cynthia Norris, Rhiannon Patterson, Sarah Sullivan, and Paul Wright made key contributions to this report.
Why GAO Did This Study

Safe operation of the nation's water utilities depends on access to a qualified workforce, particularly certified water operators. Industry reports have cited high rates of retirement eligibility and raised concerns about the water industry's ability to fill job openings. GAO was asked to review workforce needs within the drinking water and wastewater industry. This report describes (1) what is known about workforce needs at water utilities compared with workforce needs nationwide and effects of potential unmet workforce needs on the utilities' compliance with the Safe Drinking Water Act and Clean Water Act; (2) approaches used by selected utilities to manage their workforce needs and challenges they have faced in managing those needs; and (3) ways in which federal programs can assist water utilities with workforce needs. GAO reviewed workforce projections, relevant laws and regulations, agency documents, and industry studies and interviewed federal, local, and industry officials. GAO also conducted semi-structured interviews with a nongeneralizable sample of 11 water utilities, selected by size, location, and indications of workforce needs.

What GAO Found

Projections from the Department of Labor's Bureau of Labor Statistics (BLS) suggest that workforce replacement needs for water operators are roughly similar to workforce needs nationwide across all occupations; however, little is known about the effects of any unmet needs on compliance with the Safe Drinking Water Act and the Clean Water Act. BLS has projected that 8.2 percent of existing water operators will need to be replaced annually between 2016 and 2026. Although BLS projections are intended to capture long-run trends, rather than to forecast precise outcomes in specific years, this predicted replacement rate is roughly similar to the predicted rate of 10.9 percent for all workers across the U.S. economy.
Limited information is available to determine whether retirements, or other workforce needs, are affecting drinking water and wastewater utilities' ability to comply with the Safe Drinking Water and Clean Water acts. At a national level, neither the water utilities' industry associations nor the Environmental Protection Agency (EPA) has analyzed whether there is a relationship between unmet workforce needs and compliance problems. EPA relies on states to inspect utilities to ensure compliance with the acts. EPA's inspection guidance documents, for both drinking water and wastewater, advise states to examine the quality and quantity of staff operating and maintaining water utilities. However, the guidance does not advise states to examine future workforce needs. GAO has found that future workforce needs can be identified through strategic workforce planning, which involves developing long-term strategies for acquiring, developing, and retaining staff to achieve program goals. By adding questions to EPA's inspection guidance on strategic workforce planning, such as the number of positions needed in the future, EPA could help make this information available for states to assess future workforce needs. Information on future workforce needs could help states and utilities identify potential workforce issues and take action as needed. Representatives from 11 selected water utilities reported that by using various approaches, they were generally able to meet their current workforce needs but faced some challenges in doing so. Representatives from the selected utilities said that they recruit operators using word of mouth, websites, and newspapers, and by partnering with local technical schools. However, representatives from small utilities said that even with these approaches, they had difficulty hiring certified operators and instead hired and trained entry-level employees.
Additionally, representatives from large utilities said they face difficulties in recruiting skilled workers, such as electricians and mechanics, part of a larger national pattern. Five federal agencies that GAO reviewed—EPA and the Departments of Agriculture (USDA), Labor (DOL), Education, and Veterans Affairs (VA)—have programs or activities that can assist utilities with their workforce needs in several ways, including through guidance, funding, and training. EPA has worked with DOL and industry groups to develop a water-sector competency model to support industry training and with VA to help place disabled veterans in water industry jobs. In addition, USDA funds personnel who travel to rural utilities to provide hands-on assistance through its Circuit Rider program. Four of five small utilities GAO interviewed said they used this program and other USDA technical assistance for training operators.

What GAO Recommends

GAO recommends that EPA add strategic workforce planning questions, such as the positions and skills needed in the future, to its inspection guidance documents. EPA generally agreed with GAO's recommendation as it related to drinking water, but neither agreed nor disagreed regarding wastewater. GAO believes the entire recommendation should be implemented.
Background

DOD's 19 defense agencies and 8 DOD field activities are defense organizations separate from the military departments. They are intended to provide a common supply or service across more than one DOD organization. The services and supplies provided by the DAFAs are broad; they range from intelligence to human resources services, to providing secure networks and buildings, to developing cutting edge research and technological advancements, to missile defense, to providing groceries for military families. DOD estimates that the DAFAs employ more than 380,000 military and civilian personnel across the department, not including contractors. Each head of a DAFA reports to a principal staff assistant within the Office of the Secretary of Defense, who in turn reports directly to the Deputy Secretary of Defense or the Secretary of Defense. The principal staff assistants who provide oversight to the DAFAs include the CMO, the Chief Information Officer, the heads of DOD's Offices of General Counsel and Public Affairs, and all of the Under Secretaries within the department, depending on the mission of the DAFA. In addition to providing advice to the Secretary on assigned matters, each principal staff assistant plays an important role in the development and review of key aspects of the DAFA's submissions as part of DOD's annual budget process, called the Planning, Programming, Budgeting, and Execution process. A subset of the DAFAs consists of the combat support agencies, which have, in addition to their other functions, focused missions to support the combatant commands. These eight agencies are jointly overseen by their respective principal staff assistants and the Chairman of the Joint Chiefs of Staff. Figure 1 details the organizational structure and reporting relationships of the DAFAs, including the eight combat support agencies.
DOD’s CMO Reorganization and Related Reform Efforts Section 901 of the Fiscal Year 2017 National Defense Authorization Act established a CMO within DOD, effective on February 1, 2018, and the Secretary established the position, as directed, on that date. The Fiscal Year 2018 National Defense Authorization Act, Section 910, clarified the role and expanded the responsibilities of the DOD CMO. Further, it elevated the position to take precedence in the department after the Secretary of Defense and the Deputy Secretary of Defense. This section also gave the CMO authority to direct the secretaries of the military departments and the heads of other defense organizations with regard to business operations and department-wide shared services. The expanded authority of the CMO includes oversight, direction, and control over DAFAs providing shared business services for the department, to be determined by the Secretary of Defense or the Deputy Secretary of Defense. In January 2019 the CMO will assume some of the Chief Information Officer responsibilities, duties, and powers related to business systems or management, including the management of the enterprise business operations and shared services of the department, as required by law. Additionally, the CMO will serve as the DOD performance improvement officer. Fragmentation, Overlap, and Duplication Since 2011, we have issued annual reports on opportunities to reduce or better manage fragmentation, overlap, and duplication, as well as to achieve cost savings and enhance revenue for the federal government. The federal government faces a long-term, unsustainable fiscal path based on an imbalance between federal revenues and spending. Figure 2 defines fragmentation, overlap, and duplication. 
DOD Has Not Comprehensively or Routinely Assessed the Continuing Need for Its DAFAs

DOD's Past and Current Efforts to Assess the DAFAs Have Limitations

Although DOD has taken some steps to assess the continuing need for the DAFAs, we found that these steps have been neither comprehensive nor routine, especially since 2012. At the time of our review, section 192(c) of title 10 of the United States Code required the Secretary of Defense to review the services and supplies each DAFA provides to ensure that (1) there is a continuing need for each DAFA; and (2) the provision of services and supplies by each DAFA, rather than by the military departments, is a more effective, economical, or efficient manner of providing those services and supplies or of meeting the requirements for combat readiness. From 1987 to 2012, DOD issued biennial reports to Congress to record its response to this statute, but the methodology and quality of those reports varied. Regarding the methodology of the past reports, for the first five biennial reports, from 1987 to 1995, DOD relied on a research team to identify findings and provide recommendations on the structure and composition of the DAFAs. The four reports issued from 1997 to 2004 relied on a survey of the DAFAs' customers across DOD. From 2005 to 2010, DOD issued three reports that alternated between a senior management assessment of the DAFAs and the customer survey approach. In addition, the 2009-2010 report recorded activities relevant to the statutory review requirement, with a focus on a major DOD efficiency initiative that was ongoing at that time. Regarding quality, we found that the most recent report, dated 2012, generally did not reflect key elements of quality evaluations, which we identified in our prior work and compiled as part of this review. Table 1 below details these key elements. We found that some key elements were included in the most recent report, but other key elements were not reflected.
We reviewed that report against all elements and found that the report’s purpose was aligned with the relevant statutory requirements, which is a key element. Further, the report relied on data obtained from appropriate sources for the evaluation, to include survey information from the DAFA directors and military department officials. However, we found that the report did not assess the reliability of the data used, define key terms, clearly state criteria used for analysis, or make recommendations. For example, OCMO officials familiar with the report told us that some DAFAs and military departments surveyed for the report provided more detail and information in their responses than others, but there was no assessment of the reliability of this information. Overall, OCMO officials acknowledged that the report was more of a collection of information, rather than an in-depth assessment. At the time of our review, section 192(c) of title 10, United States Code, did not explicitly require that DOD develop and issue a written report as part of the required periodic review. According to DOD officials, they discontinued issuing biennial reports in 2012 because the reports were not a leadership priority, given the resources required to produce them. In addition, OCMO officials acknowledged that the department does not currently record fulfillment of the statutory requirement through a centralized process, such as the development of a report that responds to the requirement. However, a DOD directive tasks the former Director of Administration and Management, whose functions have now been integrated into the CMO office, to oversee the biennial review of the DAFAs and to record the fulfillment of that review. Further, Standards for Internal Control in the Federal Government states that documentation is a necessary part of an effective internal control system and is required for effective operations. 
OCMO officials told us that they are considering renewing the issuance of biennial reports, but that there are no firm plans to do so at this time, nor are there any associated time frames. In the absence of biennial reports, OCMO officials stated that since 2012 they have relied on existing departmental processes to address the statutory requirement to review the DAFAs. Senior level OCMO officials expressed some disagreement about which of these existing processes ensure that they have fulfilled the statutory requirement. When we assessed the processes, we determined that DOD did not provide sufficient evidence that it has met the statutory requirement. These processes include the following:

Annual budget process: Some OCMO officials stated that DOD's annual budget process is a means of addressing the statutory requirement to review the DAFAs, but one senior official from the OCMO disagreed. Although DOD reviews the budget proposals for each DAFA, DOD could not provide evidence that the annual budget process includes a specific review of the continuing need for each DAFA, or that the use of the DAFAs ensures the most efficient provision of services across DOD.

Day-to-day management of the DAFAs: One OCMO official stated that day-to-day management of the DAFAs provides a means of addressing the statutory requirement to review the DAFAs. However, we found that the documentation provided by OCMO officials does not demonstrate that a review and recording of DAFA services and supplies takes place through day-to-day management of the department. Moreover, some OCMO officials stated that the day-to-day management activities of a large organization can actually detract from leadership's ability to focus on needed reviews and reform.

Reform or efficiency initiatives: Some OCMO officials stated that prior reform efforts that were focused on the DAFAs exemplify the department's response to the statute.
However, although certain reform initiatives, such as the Business Process and Systems Reviews, affected the DAFAs, we found that the stated purposes of these reform initiatives, discussed in more detail later in this report, do not reference the continuing need for DAFAs or examine whether services should be performed instead by the military departments. Further, some OCMO officials acknowledged that prior reform efforts did not examine the continuing need for DAFAs.

DAFA reorganizations: OCMO officials cited certain reorganizations of the department as evidence that they review the DAFAs. However, the examples they cited were congressionally mandated reorganizations, such as the replacement of the Under Secretary of Defense for Acquisition, Technology, and Logistics with two new Under Secretary positions. As these were congressionally mandated reorganizations and therefore required, we found that they do not demonstrate that changes resulted from an internal comprehensive assessment of the continuing need for the DAFAs or their provision of services and supplies.

Management of services through executive agents: Finally, OCMO officials stated that the existence of executive agents throughout the department shows that DOD focuses on ensuring efficient delivery of services and supplies. Multiple heads of DAFAs serve as designated executive agents. However, OCMO officials did not provide documentation that these executive agents assess the continuing need for the DAFAs. Further, we have previously reported on weaknesses in the use of DOD executive agents in management arrangements. For example, we previously reported that DOD had not defined continued need, currency, effectiveness, or efficiency in satisfying requirements for executive agents.

DOD Has Established Guidance That Results in Quality Evaluations of Its Combat Support Agencies but Lacks Guidance for Its Review of All DAFAs

Under a separate statute, 10 U.S.C.
§ 193(a), DOD is required to periodically report on the responsiveness and readiness of the eight combat support agencies, a subset of the DAFAs. In contrast to DOD’s biennial reports on DAFAs for 10 U.S.C. § 192(c), we found that the DOD combat support agency reports for 10 U.S.C. § 193(a) we reviewed generally reflect key elements of quality evaluations that we identified. For example, the most recent combat support agency reports we reviewed generally have clear evaluation questions, use sufficient and appropriate data, and support conclusions with data and analysis. Last, all of the DOD combat support agency reports we reviewed contain actionable recommendations. Recommendations from the Joint Staff included in combat support agency reports resulted in reported efficiencies. For example, in response to the findings and recommendations of a combat support agency report, officials from the Defense Information Services Agency created a new office to serve as a single point of contact for its customers. These officials reported that the office has reduced paperwork and helped to build relationships with customers. Joint Staff officials reported a variety of other positive results from combat support agency report recommendations. These results include an increase in the speed of specific deliveries from the Defense Logistics Agency (DLA) to DOD customers outside the continental United States; improved navigational charts provided by the National Geospatial Intelligence Agency to the Combatant Commands to ensure safety; and the establishment of clear policy related to fuel additives, including the clarification of specific roles and responsibilities. OCMO officials stated that the statutory requirement for combat support agency reports is more specific and smaller in scope than the statutory requirement to review the DAFAs. 
As a result, the officials told us that they have not been able to conduct targeted and potentially more useful analysis for DAFAs, such as the evaluations they conduct of the combat support agencies. However, we found that while the statutes differed in some ways—for example, a report is specifically required for the combat support agencies, but was not for the DAFAs—both statutes prescribed broad requirements for the review processes. While each statute requires a periodic assessment, we found differences in the direction that DOD provides to guide the department's response to these statutes. Specifically, a Joint Staff Instruction describes requirements for the combat support agency reports and provides direction for the associated process. In many cases, the Joint Staff Instruction requirements reflect the key elements for evaluations that we identified. For example, the instruction provides general guidance on the criteria that reports should use, as well as specific examples. To ensure data reliability, the instruction requires validation of findings, issues, recommendations, and observations. Further, the instruction describes key terms included in the statute, such as responsiveness, readiness, and operating forces. In contrast, DOD has not issued internal guidance that details requirements for the required review of DAFAs. The Joint Staff has also developed a strategy for scoping and timing its combat support agency reviews to make the work manageable and the outcome of the reviews useful to the combatant command. Specifically, the Joint Staff focuses each report on one combat support agency at a time, rotating the focus so that each agency is reviewed every several years. Joint Staff officials stated that the focus areas of the reports also vary depending on the needs of the warfighter, senior leader direction, and actions taken as a result of the previous assessments.
Additionally, when conducting its reviews, the Joint Staff primarily assesses the combat support missions within each combat support agency, rather than all functions implemented by the agency. Conversely, DOD has not developed any internal guidance for a similar process that would allow for a more manageable approach to the requirement to review the DAFAs. As a result, previous biennial reviews examined all services and supplies of all DAFAs in each report, an approach that CMO officials acknowledged prohibited more detailed analysis. Through the development of internal guidance that provides clear direction for conducting and recording DOD’s response to the required review of the DAFAs, the department could more clearly define or target the scope of those reviews and any resulting reports to make effective use of the resources devoted to that process. For example, DOD could choose to follow a risk-based approach, focus on the department’s key priorities for reform, or rotate the focus of each report as the Joint Staff does with the combat support agency reports. Without clear internal guidance that defines the requirements for a high- quality review of its DAFAs and the associated recording of the results of those reviews, DOD and congressional decision makers may not have reasonable assurance that there is a continuing need for the DAFAs and that the provision of services and supplies is effective, economical, and efficient. Such information could assist decision makers when considering any future reorganizations of the DAFAs, or the realignment of functions among the DAFAs or other defense organizations, or when seeking greater efficiencies. 
Fragmentation and Overlap among the DAFAs That Provide Human Resources Services Have Negative Effects, and Related Reform Efforts Have Limitations

Fragmentation and Overlap Occur among the DAFAs That Provide Human Resources Services

DOD currently has a service delivery model in which there are numerous human resources providers offering varying levels of quality and transparency of costs. Section 191 of title 10, United States Code, states that the Secretary of Defense may provide for the performance of a supply or service activity that is common to more than one military department by a single agency of DOD when it would be more effective, economical, or efficient. Nevertheless, at least six organizations within DOD, including three DAFAs and the three military departments, provide human resources services to other defense agencies or organizations. Specifically, DLA, the Defense Finance and Accounting Service (DFAS), and the Washington Headquarters Service (WHS) perform human resources services for other organizations, such as other DAFAs; offices within the Office of the Secretary of Defense; or parts of the military departments. All perform the same types of human resources services, such as those related to civilian workforce hiring across DOD. Additionally, the Departments of the Army, Navy, and Air Force each has a human resources command or personnel center. Below is a count of the number of customers served by the DOD agencies providing human resources services as of May 2018, as reported by agency officials.

DLA provides human resources services for about 70,000 customers, including 25,000 of its own employees and 45,000 civilians from across DOD outside of DLA.

DFAS provides human resources services for about 26,000 DOD civilians, including 12,000 DFAS employees and about 14,000 customers from across DOD.
WHS performs nearly all types of human resources services for some DAFAs, such as the Defense POW/MIA Accounting Agency and the Defense Legal Services Agency, as well as all senior executives and presidential appointees across the department, totaling about 170,000 individuals. However, WHS performs only certain human resources services for its own employees, such as recruitment and training. WHS pays DLA to perform other types of human resources services, such as personnel action processing, pre-employment drug testing, and the processing of certain travel orders and allowances, among other functions, for more than 7,000 WHS employees. Through our assessment of documents detailing the human resources service customer bases of DFAS, DLA, and WHS, we found that there is overlap in the human resources services that they provide. For example, DOD officials reported that three DAFAs and the military departments provide human resources servicing to personnel employed by the Defense Security Cooperation Agency, depending on the location, rank, or other characteristics of the staff (see figure 3). Moreover, although each military department has its own human resources command or personnel center, we have identified some instances of DAFAs providing human resources services to military department civilian employees or servicemembers. For example, the Army pays DFAS to provide broad human resources support to the Army’s Financial Management Command, even though it could use its own human resources servicing organization. Additionally, WHS officials stated that the agency provides certain human resources services to all presidential appointee civilian positions across the military departments, rather than having the appointees’ military departments’ own human resources commands or personnel centers do so. Also, DLA provides human resources services to the military department civilians and servicemembers assigned to DLA. 
Inefficient Overlap and Fragmentation Have Resulted in Negative Effects to the Department

The fragmentation and overlap among the DAFAs that provide human resources services to other defense offices or organizations have resulted in negative effects, such as inconsistent performance information, inefficiencies resulting from fragmented information technology (IT) systems, and inefficiencies related to overhead costs.

Inconsistent Performance Information

In the current service delivery model with multiple human resources service providers, DOD agencies choose a human resources provider. DFAS, DLA, and WHS differ in how they measure and report their performance data, which results in inconsistent information and limits customers' ability to make informed choices about selecting a human resources service provider to meet their needs. DFAS, DLA, and WHS submit data in department-wide information systems, as required. This information is used to develop an overall DOD time-to-hire measure of the department's performance against the government-wide goal of 80 days to fill a job opening. However, the ways in which each DAFA develops this measure, and other measures to assess its own performance, differ. For instance, one DAFA measures 12 different phases of the entire process to fill a job opening, with a different measure for each of the 12 phases. Other DAFAs choose to begin or end their measurement process at different points within the hiring process. As such, the measures used by human resources providers to determine the timeliness and quality of the services provided to customers are not consistent across the providers. The inconsistent performance data do not allow DOD customers to make fully informed comparisons in selecting a service provider. Table 2 shows the differences among the respective reported time-to-hire averages of the three DAFAs that provide human resources services for civilians who are hired by the three military departments.
The averages range from 65 days to 120 days, which shows a considerable variance in performance. However, as described previously, these reported averages were not calculated in a consistent manner across the department's human resources providers. In addition, these time-to-hire averages do not reflect the quality of hiring or the fact that some types of positions are difficult to fill, which could affect results. For example, DOD reports that it takes an average of 118 days to fill a civilian intelligence and counterintelligence position department-wide. With more consistent information, DOD leadership could better assess what changes, if any, need to be made to improve hiring practices. As DOD officials told us, delays in hiring can result in failing to hire the best candidates and can negatively affect program success. Further, DOD organizations could better weigh decisions on obtaining human resources services.

Fragmented Information Technology (IT) Systems

Each human resources provider within DOD uses a common IT system, called the Defense Civilian Personnel Data System, to store and process civilian human resources data. However, each uses a separate connection to the system, resulting in some inefficiency. For example, when an employee in a defense agency serviced by multiple human resources providers transfers to a different part of the same agency or another part of DOD, the employee is treated as if he or she has been newly hired. The employee's personnel data must be re-entered through a different connection to the data system, and other administrative steps are re-performed, such as providing the employee a new Common Access Card, the department's identification badge used for facility and computer system access. Additionally, DOD officials stated that there are more than 800 learning management systems employed across the department, which are used to deliver training to personnel and to store training records.
DAFA and OCMO officials stated that these fragmented learning management IT systems are duplicative in nature and are costly to the department to maintain, although officials were not able to provide an estimate of those costs. In January 2018, DOD officials stated that all human resources providers were expected to move to a common connection to the IT system by October 2018, which was expected to eliminate redundant data entry and other duplicative administrative inefficiencies. However, as of June 2018, DOD officials stated that this effort is on hold, as the department is currently reexamining the best strategy to provide IT solutions for human resources. According to officials, that strategy might be to use a cloud-based solution, as opposed to changes to the legacy system of the Defense Civilian Personnel Data System.

Inefficiencies Resulting from Multiple Providers Charging Overhead

We found that defense agencies or other organizations that use more than one human resources service provider are paying overhead costs charged by each provider, which results in unnecessary expenses and inefficiencies. DOD officials agreed that the fragmented system of service delivery with multiple providers allows for possibly redundant overhead charges, and that a more consolidated service delivery model could reduce expenses associated with overhead. The DAFAs that charge human resources customers by using a fee-for-service structure apply a certain percentage of the total cost as a "general and administrative cost" or "non-labor costs" to each customer. Agency officials stated that these overhead costs pay for management salaries, other personnel-related costs, and administrative costs, such as IT support and facilities costs. These overhead costs are separate from the "direct labor" costs that represent the personnel and other expenses required to perform the service requested.
For example, DFAS officials stated that about 7 percent of the fees charged by DFAS to human resources service customers goes toward "general and administrative costs" that are separate from the direct labor expense required to perform services. Similarly, about 20 percent of the costs charged to DLA's human resources customers covers indirect costs. As such, organizations pay overhead and administrative expenses for several human resources providers, thereby consuming financial resources that could be redirected to higher-priority needs. According to DOD officials, using one provider would likely reduce inefficient expenses for human resources services paid by defense organizations. However, according to those officials, more comprehensive information and analysis are needed to determine the extent of inefficient overhead costs that occur. Comprehensive information about the extent of these and other possibly redundant or otherwise inefficient expenses would help identify a human resources service delivery model that is effective, economical, and efficient.

DOD's Efforts to Reform Human Resources Have Some Limitations

In January 2018, the Deputy Secretary of Defense established a Human Resources Management Reform Team to initiate key reform efforts within the department. This team is one of nine cross-functional teams established by the Deputy Secretary of Defense to drive reform throughout the department. The human resources management reform team is led by a senior DOD human resources official and composed of representatives from DFAS; DLA; WHS; the Departments of the Army, Air Force, and Navy; and the OCMO, among others. According to the team's charter, the team will work to modify human resources processes and move toward enterprise service delivery of human resources services, which is expected to reduce costs.
Team members told us that their initial focus is to carry out projects addressing high-priority challenges, such as pursuing the optimal IT systems for DOD human resources services department-wide and identifying legislative and regulatory changes needed to streamline processes and procedures. After progress is made in these areas, the team plans to review service delivery across the department and determine the most effective and efficient system. Senior leaders from the human resources directorates of DFAS, DLA, and WHS all stated that increased consolidation was possible, if properly reviewed and implemented, especially for tasks such as entering personnel data and other hiring-related tasks, which could be conducted through a shared service model. This work may lead to increased coordination among, or consolidation of one or more, organizations. DOD has not assessed or identified the most effective, economical, or efficient provision of this business function. DOD officials stated that assessing the provision of human resources in the department has not previously been a priority of senior leadership. A memorandum from the Deputy Secretary of Defense that established the human resources management reform team required that the team move the department toward a shared service delivery model. Specifically, this required a "time-phased way forward," with outcomes and time frames for converting the mission to an enterprisewide service delivery model. The new reform team reflects a commitment from senior leaders within the department to address longstanding problems in the human resources area. However, we identified limitations in how the human resources management reform team is planning and managing its work. First, one goal of the reform team is to reduce the time-to-hire averages across the department and determine a method to measure the quality of hiring.
DOD officials stated that performance measure improvements are an important focus of their efforts and that they will share best practices for time-to-hire and will require a standard measure of quality of hiring. However, team plans we reviewed do not include steps for ensuring that the DAFAs and military departments adopt standardized processes to develop a consistent time-to-hire measure. Standardized quality information would be valuable in determining which organizations may be best placed to provide department-wide human resources service delivery, and without this information DOD may not have assurance that its hiring practices are effective and efficient. Standards for Internal Control in the Federal Government emphasizes that managers should identify the information required and obtain it from relevant and reliable sources. Second, the team has not set clear time frames for some of its work. As we reported in July 2018, agency reform efforts should have implementation plans with key milestones and deliverables to track implementation progress, and clear outcome-oriented goals and performance measures for the proposed reforms. While one of the team's projects is to determine the best strategy for providing IT solutions for human resources, the team has not identified time frames for completing the assessments needed to inform a new strategy, or deliverables for finalizing and implementing the IT strategy. DOD officials stated that they will develop project plans for completing the needed assessments and will identify time frames with the reform team focused on broader department-wide IT. The human resources management reform team has also not set clear time frames or deliverables for developing and moving toward an optimal service delivery model for the department, which may be a long-term effort that goes beyond the expected 2-year duration of the reform team.
Draft team documents we reviewed discussed obtaining relevant data in 2018, reviewing the effects of policy changes in 2019, and pursuing undefined pilot projects in 2020. However, DOD officials told us that the team plans to begin focusing on assessing optimal service delivery models possibly in 2019. No specific time frames for completion of this effort have been identified, and team members stated that completion of IT efforts and regulatory reforms takes precedence. Further, it is unclear how implementation of long-term efforts will be managed. Third, although one of the team's charges is to determine the optimal model for department-wide delivery of human resources services, team members are not considering key pieces of information that would be useful in doing so. For example, team members we contacted were not aware that some DOD organizations were making potentially redundant and inefficient payments to the DAFAs for human resources services as overhead charges collected by multiple providers. As discussed previously, Standards for Internal Control in the Federal Government emphasizes the importance of quality performance information. When we raised the issue of overhead charges with team members, they noted that if such redundant payments are occurring, they would occur only within the department's "Fourth Estate," and that they are initially focusing on issues that affect the department as a whole. However, considering the size and scope of the Fourth Estate, which DOD reported includes more than $100 billion in funding annually, comprehensive information regarding the extent of inefficient overhead costs would be important for the reform team to consider in addressing inefficiencies and pursuing enterprise-wide solutions to determine the most effective, economical, and efficient model of service delivery.
With consistent human resources performance information, clear time frames in place, and comprehensive information on overhead costs, the team would be better positioned to thoroughly assess the department's system for human resources service delivery, and to develop and implement long-term solutions for better coordination or consolidation of this function. Further, DOD decision-makers would have assurance that any changes they make, such as consolidation of certain organizations or functions, would be based on sound and complete analysis.

DOD Has Not Consistently Monitored and Evaluated the Results of Its Efficiency Initiatives That Affect the DAFAs

DOD Has Implemented Several Previous Efficiency Initiatives Related to the DAFAs

DOD has undertaken several efficiency initiatives since 2011 that are intended to improve the efficiency of headquarters organizations, including the DAFAs, and to identify related cost savings. These initiatives include the Secretary Gates Efficiencies, the More Disciplined Use of Resources, the Core Business Process Review, the Business Process and Systems Reviews, and a series of initiatives related to the savings required by the National Defense Authorization Act for Fiscal Year 2016. Table 3 describes each efficiency initiative we assessed as part of this review and includes an estimated cost savings that the department expected to achieve for each initiative.

DOD Has Taken Some Steps to Monitor its Efficiency Initiatives but Does Not Consistently Establish a Baseline and Evaluate Results

DOD has taken some steps to monitor and evaluate the results of its efficiency initiatives, but it has not consistently done so. For some of the efficiency initiatives, DOD ensured that there was ongoing monitoring and worked to evaluate results.
For example, as part of the former Secretary Gates Efficiencies initiative, the military departments and the Special Operations Command were required to prepare briefings on the status of initiatives, and the offices of the then Deputy CMO and Comptroller directed them to enter information regarding their efficiency initiatives into a database designed to capture performance management data. Officials stated that this information was designed to allow them to track the progress of the initiatives, including milestones, risk assessments, and the roles and responsibilities of those implementing the initiatives. While implementing its More Disciplined Use of Resources initiative, DOD took some ad hoc steps to evaluate the effect of some of the efforts, such as establishing performance measures to assess their effect on achieving desired outcomes. An official in the office of the Under Secretary of Defense (Comptroller) later issued a memorandum that established a requirement to report on the initiatives, including performance goals, measures, and accomplishments. This memorandum was issued based on a recommendation we made in a prior report that the military departments and the Special Operations Command develop approaches for evaluating the effect of their efficiency initiatives, such as establishing performance measures or other indicators, collecting related performance information, and using this information to measure progress in achieving intended outcomes associated with their initiatives until they were implemented. However, for other efficiency initiatives, DOD did not consistently ensure that the agency established a baseline from which to measure progress, used ongoing monitoring, or evaluated results. For example, in the case of DOD's Core Business Process Review initiative, DOD has not evaluated whether the effort achieved any of its intended savings or led to expected efficiencies.
According to OCMO officials, DOD ultimately concluded that potential savings opportunities identified as part of this review could not entirely be achieved through these means. As a result, it is unclear what savings, if any, the department achieved. DOD’s Business Process and Systems Reviews ended with a briefing to the Deputy Secretary of Defense and Vice Chairman of the Joint Chiefs of Staff that included a summary of how the organizations would measure progress toward outcomes. While the office of the then Deputy CMO and the principal staff assistants were responsible for monitoring the effort up to the briefing, officials from the Deputy CMO’s office stated that following the briefing any monitoring that occurred would be the responsibility of the principal staff assistants. However, not all principal staff assistants continued monitoring. For example, although the CMO is the principal staff assistant for two of the agencies reviewed—WHS and the Pentagon Force Protection Agency—OCMO officials were unable to provide a list of initiatives related to each agency and the status of those initiatives. DOD also did not consistently ensure that the agency monitored and evaluated efforts associated with the National Defense Authorization Act for Fiscal Year 2016 requirement to save at least $10 billion from headquarters, administrative, and support activities for fiscal years 2015 through 2019. One of the efforts that DOD took pursuant to this requirement was for DAFAs to review their service contracts and present recommendations for cuts to a Senior Review Panel. Under this initiative, called the Service Requirement Review Boards, the panel either approved the proposed cuts or directed alternative reductions, and DCMO then monitored the organizations to ensure that the cuts were taken. However, other efforts DOD took pursuant to the requirement were not well monitored. 
For example, as part of the required savings, DOD identified approximately $5.3 billion that it later determined to be "not auditable" because the baseline for the reductions had not been established. Congress required DOD to report on its efforts with its budget submissions for fiscal years 2017 through 2019. DOD submitted its first report on May 22, 2018, and it included the $5.3 billion in savings that it had deemed "not auditable." According to Standards for Internal Control in the Federal Government, agencies should monitor and evaluate the quality of performance over time. As part of this effort, agencies should establish a baseline from which to measure progress, use ongoing monitoring, and evaluate results. Further, the GPRA Modernization Act of 2010 requires agencies to regularly monitor their progress in achieving goals. Our previous work has noted that having a process with written guidance for monitoring achieved savings from efficiency initiatives can help organizations evaluate actual performance against planned results. We have also previously noted that without guidance that clearly outlines the information to be provided for evaluation, DOD cannot be assured that senior leaders are getting complete information needed to enhance their visibility over the status of efficiency initiatives. Although DOD has not consistently ensured that the agency established a baseline from which to measure progress, used ongoing monitoring, or evaluated results, OCMO officials stated that the department is working to do so. The officials stated that previous efforts to track reform had been more focused on assessing whether steps had been taken, rather than on measuring progress and evaluating the results. In its most recent budget request, DOD emphasized the importance of using goals and performance measures to assess the benefit and value of reforms, along with the importance of relevant, accurate, and timely data.
In addition, the chartering documents for DOD's reform teams highlight the importance of monitoring and evaluation, and senior DOD officials are echoing this point. We recently reported that outcome-oriented goals and performance measures and an implementation plan with key milestones and deliverables are important when considering agency reform. While the reform teams' focus on monitoring and evaluation is a positive step, officials stated that the teams are expected to exist for approximately 2 years, and monitoring and evaluating the results of some reform efforts may require more time than that to appropriately assess their effects. In addition, OCMO officials have not provided evidence of plans to fully monitor efforts that began before the reform teams were created and should still be in process. These efforts include savings related to the requirement to save at least $10 billion from headquarters, administrative, and support activities for fiscal years 2015 through 2019. Without ensuring that efficiency initiatives are fully monitored and evaluated against established baselines over time, DOD lacks a systematic basis for evaluating whether its various initiatives have improved the efficiency or effectiveness of its programs or activities.

Conclusions

While DOD has long been required to periodically review the DAFAs to ensure, among other things, that the provision of their services and supplies is economical, efficient, and effective, it has relied on existing processes to fulfill this requirement rather than conducting comprehensive and routine assessments. Without internal guidance that results in quality evaluations of the DAFAs, DOD decision makers remain limited in the information they have about what efficiencies the DAFAs could pursue and how they could cut costs.
With establishment of the new CMO position, the department has an opportunity to address long-standing weaknesses in its business operations, including those performed by the DAFAs. The department's effort to establish reform teams that can drive change, as well as a senior-level reform management group to direct and oversee these efforts, is a positive step forward. Having comprehensive and quality information would help the CMO and other senior leaders make important decisions regarding the direction of reform efforts and assess whether those efforts are achieving desired results. However, the human resources management reform team has not collected comprehensive information, such as performance information on hiring time frames and the overhead costs of providing human resources services, or set time frames for its efforts, which would enable the department to best address inefficiencies among the DAFAs that provide human resources services. Moreover, DOD has not consistently ensured that the agency established a baseline from which to measure progress, used ongoing monitoring, or evaluated results. While OCMO officials are focused on the reform teams, full monitoring is necessary for all efficiency initiatives. Without routinely and comprehensively monitoring and evaluating ongoing efficiency initiatives across all of its reform efforts, DOD cannot have assurance as to whether its efforts have achieved desired outcomes, are saving resources, and are improving effectiveness.

Recommendations for Executive Action

We are making five recommendations to DOD. The Secretary of Defense should ensure that the CMO develops internal guidance that defines the requirements and provides clear direction for conducting and recording reviews of the DAFAs in response to 10 U.S.C. § 192(c). This guidance, which could be similar to the guidance that exists for assessments of the combat support agencies, should reflect the key elements of quality evaluations.
(Recommendation 1)

The Secretary of Defense should ensure that the CMO, with input from the human resources management reform team, requires that all DOD human resources providers adopt consistent time-to-hire measures, as one process for assessing performance. (Recommendation 2)

The Secretary of Defense should ensure that the CMO, through the human resources management reform team, identifies time frames and deliverables for identifying and adopting optimal IT solutions for human resources and fully assessing, identifying, and implementing the most effective and efficient means of human resources service delivery. (Recommendation 3)

The Secretary of Defense should ensure that the CMO, through the human resources management reform team, collects information on the overhead costs charged by all DOD human resources service providers to assist in determining the most effective, economical, and efficient model of human resources service delivery within the department. (Recommendation 4)

The Secretary of Defense should ensure that the CMO routinely and comprehensively monitors and evaluates ongoing efficiency initiatives within the department, including those related to the reform teams. This monitoring should include establishing baselines from which to measure progress, periodically reviewing progress made, and evaluating results. (Recommendation 5)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. DOD concurred with our five recommendations and noted planned actions to address each recommendation. In its written comments, DOD stated that the National Defense Authorization Act for Fiscal Year 2019 gives the CMO additional specific authorities; substantially rewrites the requirements of section 192(c); and addresses the findings and recommendations in our report.
Further, DOD stated the department is on track to achieve substantial savings through its reform team efforts and CMO emphasis on strong management practices, integrated processes, and best value business investments. DOD's comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and DOD's Chief Management Officer. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report evaluates the extent to which (1) the Department of Defense (DOD) has assessed the continuing need for each defense agency and DOD field activity (DAFA); (2) any overlap or fragmentation among the DAFAs that provide human resources services has affected the delivery of those services; and (3) DOD has monitored and evaluated the results of its efficiency initiatives that affect the DAFAs. For our first objective, we reviewed DOD's biennial reports on the DAFAs from 1987, the first year after enactment of the requirement, through 2012, the most recent year of DOD's reporting. We also interviewed officials from the Chief Management Officer's (CMO) office regarding DOD's current processes for reviewing and recording its assessment of the DAFAs. Further, we reviewed the most recent DOD reports on combat support agencies, as there is a comparable statutory requirement for DOD to review this subset of the DAFAs, and the corresponding Joint Staff Instruction that guides those reports.
We also spoke to relevant Joint Staff officials regarding the processes used to develop those reports. We compared DOD's biennial reports and combat support agency reports against key elements of quality evaluations, which we identified in prior work and compiled as part of this review, as specified below. To analyze the quality of biennial reports and combat support agency reports, we identified and selected key elements of quality evaluations and compared reports against these key elements. We took four major steps to identify and select key elements. First, we identified criteria that assess the quality of agency evaluations and resulting reports based on a review of relevant GAO reports and discussions with a methodologist. Second, in collaboration with a methodologist, we assessed the appropriateness of identified criteria for this analysis, and we concluded that no single assessed criterion met the needs of this review. Third, we identified relevant areas of overlap across the criteria, and we excluded topics not relevant for our purposes, such as statistical modeling for technical evaluations. Fourth, we selected a set of elements encompassing relevant areas of overlap, and we discussed and revised these elements in collaboration with a methodologist. For the analysis of reports against key elements, we gathered and recorded evidence related to each key element from a variety of DOD sources including DOD reports, statements from DOD officials representing the research team, and relevant DOD guidance related to the reports. One analyst assessed the extent to which the reports reflected the key elements, and a second analyst reviewed their assessment. Where there was disagreement in the assessment, analysts discussed their analysis and reached a consensus.
Last, for the first objective, we assessed DOD's response to the statutory requirement that it periodically review the continuing need for its DAFAs, and whether the provision of services and supplies by the DAFAs, rather than by the military departments, is more effective, economical, and efficient. We interviewed Office of the Chief Management Officer (OCMO) officials about the existing departmental processes that they stated addressed the statute, and we reviewed associated documentation provided by the OCMO officials, such as budget materials. For our second objective, we reviewed the business functions of selected DAFAs to identify possible inefficient duplication, overlap, or fragmentation in the services provided by those selected DAFAs to other organizations within the department. For our selection from the 27 DAFAs within DOD, we excluded DAFAs that have been previously identified as focus areas from our body of work on duplication, overlap, and fragmentation. We selected 7 DAFAs that are larger in size and budget than others and that focus on the traditional business areas of DOD, such as logistics or financial management. For those 7 DAFAs, we reviewed the chartering directives for each agency and DOD's most recent biennial report on DAFAs to identify terms and phrases that appeared duplicative or repetitive in nature. Using that strategy, we selected human resources as the business line of effort for the focus of our review. We reviewed the provision of human resources services by DAFAs to identify any potential inefficient duplication, overlap, or fragmentation. For example, we reviewed the client bases serviced by each DAFA to identify inefficient duplication or overlap and reviewed the performance measures used by each DAFA to identify fragmentation in their approaches to performance measurement. Pursuant to 10 U.S.C.
§ 191, whenever the Secretary of Defense determines that it would be more effective, economical, or efficient to provide for the performance of a supply or service common to multiple military departments by a single agency, the Secretary can create a DAFA to provide that supply or service. Further, at the time of our review, section 192(c) of title 10, United States Code, required, among other things, that the Secretary of Defense periodically ensure that the provision of services and supplies by the DAFAs, rather than by the military departments, is more effective, economical, and efficient. As such, we assessed DOD's provision of human resources by DAFAs against GAO's Duplication Evaluation Guide. We interviewed officials from DOD's CMO office, the 3 DAFAs that provide human resources services for the department (DFAS, DLA, and WHS), and the lead and members of DOD's human resources management reform team, and we reviewed documents such as DOD's human capital operating plan and documents provided by the DAFAs that detailed their human resources business functions. For our third objective, we selected efficiency initiatives that affect DAFAs, and that we have previously reported on since 2011. We reviewed a selection of reform initiatives because DOD does not have a comprehensive listing of the reform initiatives it has undertaken. For the purposes of this review, we define "efficiency" as maintaining federal government services or outcomes using fewer resources (such as time and money) or improving or increasing the quality or quantity of services or outcomes while maintaining (or reducing) resources. We obtained documentation and spoke with officials from CMO and the DAFAs selected for the second objective of this report regarding DOD's monitoring, assessing, and tracking of the selected reform initiatives.
We obtained information and documentation from CMO officials regarding DOD's ongoing reform efforts, including plans for monitoring and assessing these efforts. We compared this information and documentation against Standards for Internal Control in the Federal Government, which states that management should establish a baseline from which to measure progress, use ongoing monitoring, and evaluate results. We conducted this performance audit from August 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Margaret Best (Assistant Director), Miranda Cohen, Alexandra Gonzalez, Amanda Manning, Richard Powelson, Suzanne Perkins, Andrew Stavisky, Amie Lesser, Sarah Veale, and Cheryl Weissman made key contributions to this report.
Why GAO Did This Study

DOD spends billions of dollars annually to maintain business functions that support the warfighter. Many of these functions are performed by the DAFAs—DOD's 19 defense agencies and 8 field activities intended to provide department-wide consolidated support functions. GAO has previously identified instances of fragmentation, overlap, and duplication among the DAFAs. Senate Report 115-125, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018, included a provision that GAO review the DAFAs. This report evaluates the extent to which (1) DOD has assessed the continuing need for each DAFA; (2) any overlap or fragmentation among the DAFAs that provide human resources services has affected service delivery; and (3) DOD has monitored and evaluated the results of its efficiency initiatives that affect the DAFAs. GAO reviewed legal requirements, assessed prior DOD reports, and analyzed DOD's human resources activities and documentation tracking past efficiency initiatives.

What GAO Found

The Department of Defense (DOD) does not comprehensively or routinely assess the continuing need for its defense agencies and DOD field activities (DAFAs). DOD was statutorily required to review the services and supplies each DAFA provides to ensure there is a continuing need for each and that the provision of services and supplies by each DAFA, rather than by the military departments, is more effective, economical, or efficient. A DOD directive requires the recording of the review. DOD previously issued biennial reports to Congress to record its review. Since 2012, DOD has relied on existing processes to fulfill the requirement, such as the annual budget process and the day-to-day management of the DAFAs. However, DOD did not provide sufficient evidence that these processes satisfy the statute.
For example, while DOD reviews the DAFAs during the budget process, it does not specifically review the provision of services by the DAFAs rather than the military departments. Further, DOD does not have internal guidance that provides clear direction for conducting and recording its response to the statutory requirement. Without such guidance, DOD is limited in its ability to clearly define or target the scope of its reviews and any resulting reports. As such, DOD and congressional decision makers may not have reasonable assurance of a continuing need for the DAFAs, or that the provision of services and supplies is effective, economical, and efficient. There is fragmentation and overlap within the DAFAs that provide human resources services to other defense agencies or organizations within DOD. At least six DOD organizations, including three DAFAs, perform human resources services for other parts of the department. One DAFA receives human resources services from all six organizations. This has resulted in negative effects, such as inconsistent performance information regarding hiring, fragmented information technology systems, and inefficiencies associated with overhead costs. For example, DOD officials stated that there are over 800 fragmented information technology systems used to store and record training records across the department, which are costly to maintain. DOD established a reform team to reduce inefficiencies within this business function. However, the team lacks comprehensive information on overhead costs that could guide reform and does not have time frames or deliverables for completing certain reform initiatives. 
With consistent human resource performance information, comprehensive information on overhead costs, and clear time frames in place, the team would be better positioned to thoroughly assess the department's system for human resources service delivery and develop and implement long-term solutions for better coordination or consolidation of this function. DOD has taken some steps to monitor and evaluate the results of key efficiency initiatives that affect the DAFAs. However, DOD has not always established baselines or performed ongoing monitoring of its initiatives. Further, DOD has focused on whether steps have been taken, rather than outcomes achieved. For example, DOD did not evaluate whether a prior efficiency initiative called the Core Business Process Review achieved any of its intended savings or led to expected efficiencies. Without ensuring that efficiency initiatives are fully monitored and evaluated against established baselines over time, DOD lacks a systematic basis for evaluating whether its various initiatives have improved the efficiency or effectiveness of its programs or activities.

What GAO Recommends

GAO is making five recommendations, including for DOD to develop internal guidance to conduct and record its reviews of DAFAs; collect consistent performance information and comprehensive overhead cost information; establish time frames and deliverables for key reform efforts; and ensure routine and comprehensive monitoring and evaluation of ongoing efficiency initiatives. DOD concurred with GAO's recommendations.
CMS Delegated Monitoring of Beneficiaries who Receive Opioid Prescriptions to Plan Sponsors, but Did Not Have Sufficient Information on Those Most at Risk for Harm

CMS Delegated Monitoring of Individual Beneficiaries' Opioid Prescriptions to Plan Sponsors

Our October 2017 report found that CMS provided guidance to Medicare Part D plan sponsors on how they should monitor opioid overutilization problems among Part D beneficiaries. The agency included this guidance in its annual letters to plan sponsors, known as call letters; it also provided a supplemental memo to plan sponsors in 2012. Among other things, these guidance documents instructed plan sponsors to implement a retrospective drug utilization review (DUR) system to monitor beneficiary utilization starting in 2013. As part of the DUR systems, CMS required plan sponsors to have methods to identify beneficiaries who were potentially overusing specific drugs or groups of drugs, including opioids. Also in 2013, CMS created the Overutilization Monitoring System (OMS), which outlined criteria to identify beneficiaries with high-risk use of opioids, and to oversee sponsors' compliance with CMS's opioid overutilization policy. Plan sponsors may use the OMS criteria for their DUR systems, but they had some flexibility to develop their own targeting criteria within CMS guidance. At the time of our review, the OMS considered beneficiaries to be at a high risk of opioid overuse when they met all three of the following criteria:

1. received a total daily MED greater than 120 mg for 90 consecutive days,
2. received opioid prescriptions from four or more health care providers in the previous 12 months, and
3. received opioids from four or more pharmacies in the previous 12 months.

The criteria excluded beneficiaries with a cancer diagnosis and those in hospice care, for whom higher doses of opioids may be appropriate.
We found that through the OMS, CMS generated quarterly reports that list beneficiaries who met all of the criteria and who were identified as high-risk, and then distributed the reports to the plan sponsors. Plan sponsors were expected to review the list of identified beneficiaries, determine appropriate action, and then respond to CMS with information on their actions within 30 days. According to CMS officials, the agency also expected plan sponsors to share any information with CMS on beneficiaries that they identified through their own DUR systems. We found that some actions plan sponsors may take included the following:

Case management. Case management may include an attempt to improve coordination issues, and often involves provider outreach, whereby the plan sponsor will contact the providers associated with the beneficiary to let them know that the beneficiary is receiving high levels of opioids and may be at risk of harm.

Beneficiary-specific point-of-sale (POS) edits. Beneficiary-specific POS edits are restrictions that limit these beneficiaries to certain opioids and amounts. Pharmacists receive a message when a beneficiary attempts to fill a prescription that exceeds the limit in place for that beneficiary.

Formulary-level POS edits. These edits alert providers who may not have been aware that their patients are receiving high levels of opioids from other doctors.

Referrals for investigation. According to the six plan sponsors we interviewed, the referrals can be made to CMS's National Benefit Integrity Medicare Drug Integrity Contractor (NBI MEDIC), which was responsible for identifying and investigating potential Part D fraud, waste, and abuse, or to the plan sponsor's own internal investigative unit, if they have one. After investigating a particular case, they may refer the case to the HHS-OIG or a law enforcement agency, according to CMS, NBI MEDIC, and one plan sponsor.
Based on CMS’s use of the OMS and the actions taken by plan sponsors, CMS reported a 61 percent decrease from calendar years 2011 through 2016 in the number of beneficiaries meeting the OMS criteria of high risk—from 29,404 to 11,594 beneficiaries—which agency officials considered an indication of success toward its goal of decreasing opioid use disorder. In addition, we found that CMS relied on separate patient safety measures developed and maintained by the Pharmacy Quality Alliance to assess how well Part D plan sponsors were monitoring beneficiaries and taking appropriate actions. In 2016, CMS started tracking plan sponsors' performance on three patient safety measures that were directly related to opioids. The three measures were similar to the OMS criteria in that they identified beneficiaries with high dosages of opioids (120 mg MED), beneficiaries that use opioids from multiple providers and pharmacies, and beneficiaries that do both. However, one difference between these approaches was that the patient safety measures separately identified beneficiaries who fulfill each criterion individually.

CMS Did Not Have Sufficient Information on Most Beneficiaries Potentially at Risk for Harm

Our October 2017 report also found that CMS tracked the total number of beneficiaries who met all three OMS criteria as part of its opioid overutilization oversight across the Part D program. However, the agency did not have comparable information on most beneficiaries who receive high doses of opioids—regardless of the number of providers and pharmacies used—and who therefore may be at risk for harm, according to CDC's 2016 guidelines. These guidelines noted that long-term use of high doses of opioids—those above a MED of 90 mg per day—is associated with significant risk of harm and should be avoided if possible.
Based on the CDC guidelines, outreach to Part D plan sponsors, and CMS analyses of Part D data, CMS has revised its current OMS criteria to include more at-risk beneficiaries beginning in 2018. The new OMS criteria define a high user as an individual having an average daily MED greater than 90 mg for any duration; receiving opioids from four or more providers and four or more pharmacies, or from six or more providers regardless of the number of pharmacies, for the prior 6 months. Based on 2015 data, CMS found that 33,223 beneficiaries would have met these revised criteria. While the revised criteria would help identify beneficiaries who CMS determined are at the highest risk of opioid misuse and therefore may need case management by plan sponsors, they did not provide information on the total number of Part D beneficiaries who may be at risk of harm. In developing the revised criteria, CMS conducted a one-time analysis that estimated there were 727,016 beneficiaries with an average MED of 90 mg or more, for any length of time during a 6 month measurement period in 2015, regardless of the number of providers or pharmacies used. According to the CDC guidelines, these beneficiaries may be at risk of harm from opioids, and therefore tracking the total number of these beneficiaries over time could help CMS to determine whether it is making progress toward meeting the goals specified in its Opioid Misuse Strategy to reduce the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. However, CMS officials told us that the agency did not keep track of the total number of these beneficiaries, and did not have plans to do so as part of OMS. (See fig. 1.) We also found that in 2016, CMS began to gather information from its patient safety measures on the number of beneficiaries who use more than 120 mg MED of opioids for 90 days or longer, regardless of the number of providers and pharmacies. 
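To make the revised 2018 screening rule described above concrete, it can be sketched as a simple predicate. This is an illustrative sketch only: the function name and parameters are our own, the carry-over of the cancer and hospice exclusions from the original criteria is our assumption, and this does not represent CMS's actual OMS implementation or data schema.

```python
def meets_revised_oms_criteria(avg_daily_med_mg,
                               num_prescribers,
                               num_pharmacies,
                               has_cancer_diagnosis=False,
                               in_hospice=False):
    """Illustrative check of the revised (2018) OMS high-risk criteria.

    Over the prior 6 months, a beneficiary is flagged when they have an
    average daily morphine equivalent dose (MED) greater than 90 mg for
    any duration AND either:
      - received opioids from 4 or more prescribers and 4 or more
        pharmacies, or
      - received opioids from 6 or more prescribers, regardless of the
        number of pharmacies.
    Cancer and hospice exclusions are assumed to carry over from the
    original criteria.
    """
    if has_cancer_diagnosis or in_hospice:
        return False
    if avg_daily_med_mg <= 90:
        return False
    return (num_prescribers >= 4 and num_pharmacies >= 4) or num_prescribers >= 6
```

For example, a beneficiary with an average daily MED of 120 mg from four prescribers and four pharmacies would be flagged, while one with the same dose from five prescribers but a single pharmacy would not, which is the kind of gap the one-time 727,016-beneficiary analysis highlights.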
The patient safety measures identified 285,119 such beneficiaries—counted as member-years—in 2016. However, this information did not include all at-risk beneficiaries, because the threshold was more lenient than indicated in CDC guidelines and CMS's new OMS criteria. Because neither the OMS criteria nor the patient safety measures included all beneficiaries potentially at risk of harm from high opioid doses, we recommended that CMS should gather information over time on the total number of beneficiaries who receive high opioid morphine equivalent doses regardless of the number of pharmacies or providers, as part of assessing progress over time in reaching the agency's goals related to reducing opioid use. HHS concurred with our recommendation.

CMS Oversees Providers through its Contractor and Plan Sponsors, but Efforts Did Not Specifically Monitor Opioid Prescriptions

Our October 2017 report found that CMS oversees providers who prescribe opioids to Medicare Part D beneficiaries through its contractor, NBI MEDIC, and the Part D plan sponsors.

NBI MEDIC's data analyses to identify outlier providers. CMS required NBI MEDIC to identify providers who prescribe high amounts of Schedule II drugs, which include but are not limited to opioids. Using prescription drug data, NBI MEDIC conducted a peer comparison of providers' prescribing practices to identify outlier providers—the highest prescribers of Schedule II drugs—and reported the results to CMS.

NBI MEDIC's other projects. NBI MEDIC gathered and analyzed data on Medicare Part C and Part D, including projects using the Predictive Learning Analytics Tracking Outcome (PLATO) system. According to NBI MEDIC officials, these PLATO projects sought to identify potential fraud by examining data on provider behaviors.

NBI MEDIC's investigations to identify fraud, waste, and abuse.
NBI MEDIC officials conducted investigations to assist CMS in identifying cases of potential fraud, waste, and abuse among providers for Medicare Part C and Part D. The investigations were prompted by complaints from plan sponsors; suspected fraud, waste, or abuse reported to NBI MEDIC's call center; NBI MEDIC's analysis of outlier providers; or from one of its other data analysis projects.

NBI MEDIC's referrals. After identifying providers engaged in potentially fraudulent overprescribing, NBI MEDIC officials said they may refer cases to law enforcement agencies or the HHS-OIG for further investigation and potential prosecution.

Plan sponsors' monitoring of providers. CMS required all plan sponsors to adopt and implement an effective compliance program, which must include measures to prevent, detect, and correct Part C or Part D program noncompliance, as well as fraud, waste, and abuse. CMS's guidance focused broadly on prescription drugs, and did not specifically address opioids.

Our report concluded that although these efforts provided valuable information, CMS lacked information necessary to adequately oversee opioid prescribing. CMS's oversight actions focused broadly on Schedule II drugs rather than specifically on opioids. For example, NBI MEDIC's analyses to identify outlier providers did not indicate the extent to which they may be overprescribing opioids specifically. According to CMS officials, they directed NBI MEDIC to focus on Schedule II drugs, because these drugs have a high potential for abuse, whether they are opioids or other drugs. However, without specifically identifying opioids in these analyses—or an alternate source of data—CMS lacked data on providers who prescribe high amounts of opioids, and therefore cannot assess progress toward meeting its goals related to reducing opioid use, which would be consistent with federal internal control standards.
Federal internal control standards require agencies to conduct monitoring activities and to use quality information to achieve objectives and address risks. As a result, we recommended that CMS require NBI MEDIC to gather separate data on providers who prescribe high amounts of opioids. This would allow CMS to better identify those providers who are inappropriately and potentially fraudulently overprescribing opioids. HHS agreed, and in April 2018 reported that it is working with NBI MEDIC to separately identify outlier prescribers of opioids. In addition, our 2017 report found that CMS also lacked key information necessary for oversight of opioid prescribing, because it did not require plan sponsors to report to NBI MEDIC or CMS cases of fraud, waste, and abuse; cases of overprescribing; or any actions taken against providers. Plan sponsors collected information on cases of fraud, waste, and abuse, and could choose to report this information to NBI MEDIC or CMS. While CMS receives information from plan sponsors who voluntarily reported their actions, it did not know the full extent to which plan sponsors had identified providers who prescribed high amounts of opioids, or the full extent to which sponsors had taken action to reduce overprescribing. We concluded that without this information, it was difficult for CMS to assess progress in this area, which would be consistent with federal internal control standards. In our report, we recommended that CMS require plan sponsors to report on investigations and other actions taken related to providers who prescribe high amounts of opioids. HHS did not concur with this recommendation. HHS noted that plan sponsors have the responsibility to detect and prevent fraud, waste, and abuse, and that CMS reviews cases when it conducts audits. HHS also stated that it seeks to balance requirements on plan sponsors when considering new regulatory requirements. 
However, without complete reporting—such as reporting from all plan sponsors on the actions they take to reduce overprescribing—we believe that CMS is missing key information that could help assess progress in this area. Due to the importance of this information for achieving the agency's goals, we continue to believe that CMS should require plan sponsors to report on the actions they take to reduce overprescribing.

Conclusions

In conclusion, a large number of Medicare Part D beneficiaries use potentially harmful levels of prescription opioids, and reducing the inappropriate prescribing of these drugs has been a key part of CMS's strategy to decrease the risk of opioid use disorder, overdoses, and deaths. Despite working to identify and decrease egregious opioid use behavior—such as doctor shopping—among Medicare Part D beneficiaries, CMS lacked the necessary information to effectively determine the full number of beneficiaries at risk of harm, as well as other information that could help CMS assess whether its efforts to reduce opioid overprescribing are effective. It is important that health care providers help patients to receive appropriate pain treatment, including opioids, based on the consideration of benefits and risks. Access to information on the risks that Medicare patients face from inappropriate or poorly monitored prescriptions, as well as information on providers who may be inappropriately prescribing opioids, could help CMS as it works to improve care. Chairman Toomey, Ranking Member Stabenow, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgements

If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-7114 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement.
Other individuals who made key contributions to this testimony include Will Simerl (Assistant Director) and Carolyn Feis Korman (Analyst-in-Charge). Also contributing were Amy Andresen, George Bogart, Andrew Furillo, Drew Long, and Vikki Porter. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Misuse of prescription opioids can lead to overdose and death. Medicare and Medicaid, two of the nation's largest health care programs, provide prescription drug coverage that can include opioids. GAO and others have reported on inappropriate activities and risks associated with these prescriptions. This statement is based on GAO's October 2017 report (GAO-18-15) and discusses (1) CMS oversight of Medicare beneficiaries who receive opioid prescriptions under Part D, and (2) CMS oversight of providers who prescribe opioids to Medicare Part D beneficiaries. For the October 2017 report, GAO reviewed CMS opioid utilization and prescriber data, CMS guidance for plan sponsors, and CMS's strategy to prevent opioid misuse. GAO also interviewed CMS officials, the six largest Part D plan sponsors, and 12 national associations selected to represent insurance plans, pharmacy benefit managers, physicians, patients, and regulatory and law enforcement authorities.

What GAO Found

In October 2017, GAO found that the Centers for Medicare & Medicaid Services (CMS) provided guidance on the monitoring of Medicare beneficiaries who received opioid prescriptions to plan sponsors—private organizations that implement the Medicare drug benefit, Part D—but it lacked information on most beneficiaries at risk of harm from opioid use. Specifically, GAO found that CMS provided guidance to plan sponsors on how they should monitor opioid overutilization among Medicare Part D beneficiaries, and required them to implement drug utilization review systems that use criteria similar to CMS's. Prior to 2018, the agency's criteria focused on beneficiaries who did all the following: (1) received prescriptions of high doses of opioids, (2) received prescriptions from four or more providers, and (3) filled prescriptions at four or more pharmacies. According to CMS, this approach focused actions on beneficiaries the agency determined to have the highest risk of harm.
For 2018, CMS revised the criteria to include more at-risk beneficiaries. CMS's criteria, including recent revisions, did not provide sufficient information about the larger population of potentially at-risk beneficiaries. CMS estimated that, in 2015, 727,016 beneficiaries would have received high doses of opioids regardless of the number of providers or pharmacies, but only 33,223 would have met its revised criteria. In 2016, CMS began to collect information on some of these beneficiaries using a higher dosage threshold for opioid use. However, based on Centers for Disease Control and Prevention guidelines, CMS's approach also missed some who could be at risk of harm. As a result, CMS had limited information to assess progress against the goals of the Medicare and Medicaid programs' Opioid Misuse Strategy, which includes activities to reduce risk of harm to beneficiaries. CMS provided oversight on prescribing of drugs at high risk of abuse through a variety of projects, but did not analyze data specifically on opioids. According to CMS officials, CMS and plan sponsors identified providers who prescribed large amounts of drugs with a high risk of abuse, and those suspected of fraud or abuse may be referred to law enforcement. However, GAO found that CMS did not identify providers who may be inappropriately prescribing large amounts of opioids separately from other drugs, and did not require plan sponsors to report actions they take when they identified such providers. As a result, CMS lacked information that it could use to assess how opioid prescribing patterns are changing over time, and whether its efforts to reduce harm are effective. 
What GAO Recommends

In the October 2017 report, GAO made three recommendations that CMS (1) gather information on the full number of at-risk beneficiaries receiving high doses of opioids, (2) identify providers who prescribe high amounts of opioids, and (3) require plan sponsors to report to CMS on actions related to providers who inappropriately prescribe opioids. HHS concurred with the first two recommendations, but not with the third. GAO continues to believe the recommendation is valid, as discussed in the report and in this statement.
Background

Medicare is one of four principal health-insurance programs administered by CMS; it provides health insurance for persons aged 65 and over, certain individuals with disabilities, and individuals with end-stage renal disease. See table 1 for information about Medicare's component programs. Medicare is the largest CMS program, at $702 billion in fiscal year 2017. As discussed earlier, according to CBO, Medicare outlays are projected to rise to $1.5 trillion in 2028 (see fig. 1).

Fraud Vulnerabilities and Improper Payments in Medicare

Fraud involves obtaining something of value through willful misrepresentation. There are no reliable estimates of the extent of fraud in the Medicare program, or in the health-care industry as a whole. By its very nature, fraud is difficult to detect, as those involved are engaged in intentional deception. Further, potential fraud cases must be identified, investigated, prosecuted, and adjudicated—resulting in a conviction—before fraud can be established. As I mentioned earlier, we designated Medicare as a high-risk program in 1990 because its size, scope, and complexity make it vulnerable to fraud, waste, and abuse. Similarly, the Office of Management and Budget (OMB) designated all parts of Medicare a "high priority" program because they each report $750 million or more in improper payments in a given year. We also highlighted challenges associated with duplicative payments in Medicare in our annual report on duplication and opportunities for cost savings in federal programs. Improper payments are a significant risk to the Medicare program and may include payments made as a result of fraud. However, I would note that improper payments are not a proxy for the amount of fraud or extent of fraud risk in a particular program as improper payment measurement does not specifically identify or estimate such payments due to fraud.
Improper payments are those that are either made in an incorrect amount (overpayments and underpayments) or those that should not have been made at all.

CMS's Fraud Risk Management Approach

Our December 2017 report found that CMS manages its fraud risks as part of a broader program-integrity approach working with a broad array of stakeholders. CMS's program-integrity approach includes efforts to address waste, abuse, and improper payments as well as fraud across its four principal programs. In Medicare, CMS collaborates with contractors, health-insurance plans, and law-enforcement and other agencies to carry out its program-integrity responsibilities. According to CMS officials, this broader program-integrity approach can help the agency develop control activities to address multiple sources of improper payments, including fraud.

Fraud Risk Management Standards and Guidance

According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level. In July 2015, GAO issued the Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt, as depicted in figure 2.
The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires OMB to establish guidelines for federal agencies to create controls to identify and assess fraud risks and design and implement antifraud control activities. The act further requires OMB to incorporate the leading practices from the Fraud Risk Framework in the guidelines. In July 2016, OMB published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. Further, the act requires federal agencies to submit to Congress a progress report each year for 3 consecutive years on the implementation of the controls established under OMB guidelines, among other things.

CMS's Efforts Managing Fraud Risks in Medicare Were Partially Aligned with the Fraud Risk Framework

CMS's antifraud efforts partially aligned with the Fraud Risk Framework. Consistent with the framework, CMS has demonstrated commitment to combating fraud by creating a dedicated entity to lead antifraud efforts. It has also taken steps to establish a culture conducive to fraud risk management, although it could expand its antifraud training to include all employees. CMS has taken some steps to identify fraud risks in Medicare; however, it has not conducted a fraud risk assessment or developed a risk-based antifraud strategy for Medicare as defined in the Fraud Risk Framework. CMS has established monitoring and evaluation mechanisms for its program-integrity control activities that, if aligned with a risk-based antifraud strategy, could enhance the effectiveness of fraud risk management in Medicare.
CMS's Organizational Structure Includes a Dedicated Entity for Program-Integrity and Antifraud Efforts

The commit component of the Fraud Risk Framework calls for an agency to commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management. This component includes establishing a dedicated entity to lead fraud risk management activities. Within CMS, the Center for Program Integrity (CPI) serves as the dedicated entity for fraud, waste, and abuse issues in Medicare, which is consistent with the Fraud Risk Framework. CPI was established in 2010 in response to a November 2009 Executive Order on reducing improper payments and eliminating waste in federal programs. This formalized role, according to CMS officials, elevated the status of program-integrity efforts, which previously were carried out by other parts of CMS. As an executive-level Center—on the same level as five other executive-level Centers at CMS, such as the Center for Medicare—CPI has a direct reporting line to executive-level management at CMS. The Fraud Risk Framework identifies a direct reporting line to senior-level managers within the agency as a leading practice. According to CMS officials, this elevated organizational status offers CPI heightened visibility across CMS, attention from CMS executive leadership, and involvement in executive-level conversations.

CMS Has Taken Steps to Create a Culture Conducive to Fraud Risk Management but Could Enhance Antifraud Training for Employees

The commit component of the Fraud Risk Framework also includes creating an organizational culture to combat fraud at all levels of the agency. Consistent with the Fraud Risk Framework, CMS has promoted an antifraud culture by, for example, coordinating with internal and external stakeholders. Consistent with leading practices in the Fraud Risk Framework to involve all levels of the agency in setting an antifraud tone, CPI has worked collaboratively with other CMS Centers.
In addition to engaging executive-level officials of other CMS Centers through the Program Integrity Board, CPI has worked collaboratively with other Centers within CMS to incorporate antifraud features into new program design or policy development and has established regular communication at the staff level. For example:

Center for Medicare and Medicaid Innovation (CMMI). When developing the Medicare Diabetes Prevention Program, CMMI officials told us they worked with CPI's Provider Enrollment and Oversight Group and Governance Management Group to develop risk-based screening procedures for entities that would enroll in Medicare to provide diabetes-prevention services, among other activities. The program was expanded nationally in 2016, and CMS determined that an entity may enroll in Medicare as a program supplier if it satisfies enrollment requirements, including that the supplier must pass existing high categorical risk-level screening requirements.

Center for Medicare (CM). In addition to building safeguards into programs and developing policies, CM officials told us that there are several standing meetings, on monthly, biweekly, and weekly bases, between groups within CM and CPI that discuss issues related to provider enrollment, FFS operations, and contractor management. A senior CM official also told us that there are ad hoc meetings taking place between CM and CPI: "We interact multiple times daily at different levels of the organization. Working closely is just a regular part of our business."

CMS has also demonstrated its commitment to addressing fraud, waste, and abuse to its stakeholders. Representatives of CMS's extensive stakeholder network whom we interviewed—contractors and officials from public and private entities—generally recognized the agency's commitment to combating fraud. In our interviews with stakeholders, officials observed CMS's increased commitment over time to address fraud, waste, and abuse and cited examples of specific CMS actions.
CMS contractors told us that CMS's commitment to combating fraud is incorporated into contractual requirements, such as requiring (1) data analysis for potential fraud leads and (2) fraud-awareness training for providers. Officials from entities that are members of the Healthcare Fraud Prevention Partnership (HFPP)—specifically, a health-insurance plan and the National Health Care Anti-Fraud Association—added that CMS's effort to establish the HFPP and its ongoing collaboration and information sharing reflect CMS's commitment to combat fraud in Medicare. The Fraud Risk Framework identifies training as one way of demonstrating an agency's commitment to combating fraud. Training and education intended to increase fraud awareness among stakeholders, managers, and employees serve as a preventive measure to help create a culture of integrity and compliance within the agency. The Fraud Risk Framework discusses requiring all employees to attend training upon hiring and on an ongoing basis thereafter. To increase awareness of fraud risks in Medicare, CMS offers and requires training for stakeholder groups such as providers, beneficiaries, and health-insurance plans. Specifically, through its National Training Program and Medicare Learning Network, CMS makes available training materials on combating Medicare fraud, waste, and abuse. These materials help to identify and report fraud, waste, and abuse in CMS programs and are geared toward providers and beneficiaries, as well as trainers and other stakeholders. Separately, CMS requires health-insurance plans working with CMS to provide annual fraud, waste, and abuse training to their employees. However, CMS does not offer or require similar fraud-awareness training for the majority of its workforce.
For a relatively small portion of its overall workforce—specifically, contracting officer representatives who are responsible for certain aspects of the acquisition function—CMS requires completion of fraud and abuse prevention training every 2 years. According to CMS, 638 of its contracting officer representatives (or about 10 percent of its overall workforce) completed such training in 2016 and 2017. Although CMS offers fraud-awareness training to others, the agency does not require fraud-awareness training for new hires or on a regular basis for all employees because the agency has focused on providing process-based internal controls training for its employees. While fraud-awareness training for contracting officer representatives is an important step in helping to promote fraud risk management, fraud-awareness training specific to CMS programs would be beneficial for all employees. Such training would not only be consistent with what CMS offers to or requires of its stakeholders and some of its employees, but would also help to keep the agency's entire workforce continuously aware of fraud risks and examples of known fraud schemes, such as those identified in successful HHS OIG investigations. Such training would also keep employees informed as they administer CMS programs or develop agency policies and procedures. Considering the vulnerability of the Medicare and Medicaid programs to fraud, waste, and abuse, without regular required training CMS cannot be assured that its workforce of over 6,000 employees is continuously aware of risks facing its programs. In our December 2017 report, we recommended that the Administrator of CMS provide fraud-awareness training relevant to risks facing CMS programs and require new hires to undergo such training and all employees to undergo training on a recurring basis.
In its March 2018 letter to GAO, HHS stated that CMS is in the process of developing Fraud, Waste, and Abuse Training for all new employees, to be presented at CMS New Employee Orientations. Additionally, CMS is developing training to be completed by current CMS employees on an annual basis. As of July 2018, this recommendation remains open.

CMS Has Taken Steps to Identify Fraud Risks but Has Not Conducted a Fraud Risk Assessment for Medicare

The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and to assess risks to determine a fraud risk profile. Identifying fraud risks is one of the steps included in the Fraud Risk Framework for assessing risks to determine a fraud risk profile. In our December 2017 report, we discussed several examples of steps CMS has taken to identify fraud risks, as well as control activities that target areas the agency has designated as higher risk within Medicare, including specific provider types and specific geographic locations. These examples include data analytics to assist investigations in Medicare FFS, including Medicare's Fraud Prevention System (FPS); prior authorization for Medicare FFS services or supplies; revised provider screening and enrollment processes for Medicare FFS; and temporary provider enrollment moratoriums for certain providers and geographic areas for Medicare FFS. CMS officials told us that CPI initially focused on developing control activities for Medicare FFS and consider these activities to be the most mature of all CPI efforts to address fraud risks.

CMS Has Not Conducted a Fraud Risk Assessment for Medicare

The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and assess risks to determine a fraud risk profile.
Furthermore, federal internal control standards call for agency management to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that, as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. The Fraud Risk Framework states that, in planning the fraud risk assessment, effective managers tailor the fraud risk assessment to the program by, among other things, identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. Fraud risk assessments that align with the Fraud Risk Framework involve (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and impact of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the results (see fig. 3). Although CMS had identified some fraud risks posed by providers in Medicare FFS, the agency had not conducted a fraud risk assessment for the Medicare program as a whole. Such a risk assessment would provide the detailed information and insights needed to create a fraud risk profile, which, in turn, is the basis for creating an antifraud strategy. According to CMS officials, CMS had not conducted a fraud risk assessment for Medicare because, within CPI’s broader approach of preventing and eliminating improper payments, its focus has been on addressing specific vulnerabilities among provider groups that have shown themselves particularly prone to fraud, waste, and abuse. With this approach, however, it is unlikely that CMS will be able to design and implement the most-appropriate control activities to respond to the full portfolio of fraud risks. A fraud risk assessment consists of discrete activities that build upon each other. 
Specifically:

Identifying inherent fraud risks affecting the program. As discussed earlier, CMS took steps to identify fraud risks. However, CMS has not used a process to identify inherent fraud risks from the universe of potential vulnerabilities facing Medicare, including threats from various sources. According to CPI officials, most of the agency's fraud control activities are focused on fraud risks posed by providers. The Fraud Risk Framework discusses fully considering inherent fraud risks from internal and external sources in light of fraud risk factors such as incentives, opportunities, and rationalization to commit fraud. For example, according to CMS officials, the inherent design of the Medicare Part C program may pose fraud risks that are challenging to detect. A fraud risk assessment would help CMS identify all sources of fraudulent behaviors beyond threats posed by providers, such as those posed by health-insurance plans, contractors, or employees.

Assessing the likelihood and impact of fraud risks and determining fraud risk tolerance. CMS has taken steps to prioritize fraud risks in some areas, but it had not assessed the likelihood or impact of fraud risks or determined fraud risk tolerance across all parts of Medicare. Assessing the likelihood and impact of inherent fraud risks would involve consideration of the impact of fraud risks on program finances, reputation, and compliance. Without assessing the likelihood and impact of risks in Medicare or internally determining which fraud risks may fall under the tolerance threshold, CMS cannot be certain that it is aware of the most-significant fraud risks facing this program and what risks it is willing to tolerate based on the program's size and complexity.

Examining the suitability of existing fraud controls and prioritizing residual fraud risks. CMS had not assessed existing control activities or prioritized residual fraud risks.
According to the Fraud Risk Framework, managers may consider the extent to which existing control activities—whether focused on prevention, detection, or response—mitigate the likelihood and impact of inherent risks and whether the remaining risks exceed managers' tolerance. This analysis would help CMS to prioritize residual risks and to determine mitigation approaches. For example, CMS had not established preventive fraud control activities in Medicare Part C. Using a fraud risk assessment for Medicare Part C and closely examining existing fraud control activities and residual risks, CMS could be better positioned to address fraud risks facing this growing program and develop preventive control activities. Furthermore, without assessing existing fraud control activities and prioritizing residual fraud risks, CMS cannot be assured that its current control activities are addressing the most-significant risks. Such analysis would also help CMS determine whether additional, preferably preventive, fraud controls are needed to mitigate residual risks, make adjustments to existing control activities, and potentially scale back or remove control activities that are addressing tolerable fraud risks.

Documenting the risk-assessment results in a fraud risk profile. CMS had not developed a fraud risk profile that documents key findings and conclusions of the fraud risk assessment. According to the Fraud Risk Framework, the risk profile can also help agencies decide how to allocate resources to respond to residual fraud risks. Given the large size and complexity of Medicare, a documented fraud risk profile could support CMS's resource-allocation decisions as well as facilitate the transfer of knowledge and continuity across CMS staff and changing administrations. Senior CPI officials told us that the agency plans to start a fraud risk assessment for Medicare after it completes a separate fraud risk assessment of the federally facilitated marketplace.
This fraud risk assessment for the federally facilitated marketplace eligibility and enrollment process is being conducted in response to a recommendation we made in February 2016. In April 2017, CPI officials told us that this fraud risk assessment was largely completed, although in September 2017 CPI officials told us that the assessment was undergoing agency review. CPI officials told us that they have informed CM officials that there will be future fraud risk assessments for Medicare; however, they could not provide estimated timelines or plans for conducting such assessments, such as the order or programmatic scope of the assessments. Once completed, CMS could use the federally facilitated marketplace fraud risk assessment and apply any lessons learned when planning for and designing fraud risk assessments for Medicare. According to the Fraud Risk Framework, factors such as size, resources, maturity of the agency or program, and experience in managing risks can influence how the entity plans the fraud risk assessment. Additionally, effective managers tailor the fraud risk assessment to the program when planning for it. The large scale and complexity of Medicare, as well as the time and resources involved in conducting a fraud risk assessment, underscore the importance of a well-planned and tailored approach to identifying the assessment's programmatic scope. Planning and tailoring may involve decisions to conduct a fraud risk assessment for Medicare as a whole or divided into several subassessments to reflect its various component parts (e.g., Medicare Part C). CMS's existing fraud risk identification efforts as well as communication channels with stakeholders could serve as a foundation for developing a fraud risk assessment for Medicare.
The leading practices identified in the Fraud Risk Framework discuss the importance of identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. CMS's fraud risk identification efforts discussed earlier could provide key information about fraud risks and their likelihood and impact. Furthermore, existing relationships and communication channels across CMS and its extensive network of stakeholders could support building a comprehensive understanding of known and potential fraud risks for the purposes of a fraud risk assessment. For example, the fraud vulnerabilities identified through data analysis and information sharing with health-insurance plans, law-enforcement organizations, and contractors could inform a fraud risk assessment. CPI's Command Center missions—facilitated collaboration sessions that bring together experts from various disciplines to improve the processes for fraud prevention in Medicare—could bring together experts to identify potential or emerging fraud vulnerabilities or to brainstorm approaches to mitigate residual fraud risks. As CMS makes plans to move forward with a fraud risk assessment for Medicare, it will be important to consider the frequency with which the fraud risk assessment would need to be updated. While, according to the Fraud Risk Framework, the time intervals between updates can vary based on the programmatic and operating environment, assessing fraud risks on an ongoing basis is important to ensure that control activities are continuously addressing fraud risks. The constantly evolving fraud schemes, the size of the programs in terms of beneficiaries and expenditures, as well as continual changes in Medicare—such as development of innovative payment models and increasing managed-care enrollment—call for constant vigilance and regular updates to the fraud risk assessment.
In our December 2017 report, we recommended that the Administrator of CMS conduct fraud risk assessments for Medicare and Medicaid, to include respective fraud risk profiles and plans for regularly updating the assessments and profiles. In its March 2018 letter to GAO, HHS stated that it is currently evaluating its options with regard to implementing this recommendation. As of July 2018, the recommendation remains open.

CMS Needs to Develop a Risk-Based Antifraud Strategy for Medicare, Which Would Include Plans for Monitoring and Evaluation

The design and implement component of the Fraud Risk Framework calls for federal managers to design and implement a strategy with specific control activities to mitigate assessed fraud risks and to collaborate to help ensure effective implementation. According to the Fraud Risk Framework, effective managers develop and document an antifraud strategy that describes the program's approach for addressing the prioritized fraud risks identified during the fraud risk assessment, also referred to as a risk-based antifraud strategy. A risk-based antifraud strategy describes existing fraud control activities as well as any new fraud control activities a program may adopt to address residual fraud risks. In developing a strategy and antifraud control activities, effective managers focus on fraud prevention over detection, develop a plan for responding to identified instances of fraud, establish collaborative relationships with stakeholders, and create incentives to help effectively implement the strategy. Additionally, as part of a documented strategy, management identifies the roles and responsibilities of those involved in fraud risk management activities; describes control activities as well as plans for monitoring and evaluation; creates timelines; and communicates the antifraud strategy to employees and stakeholders, among other things.
As discussed earlier, CMS had some control activities in place to identify fraud risk in Medicare, particularly in the FFS program. However, CMS had not developed and documented a risk-based antifraud strategy to guide its design and implementation of new antifraud activities and to better align and coordinate its existing activities to ensure it is targeting and mitigating the most-significant fraud risks.

Antifraud strategy. CMS officials told us that CPI does not have a documented risk-based antifraud strategy. Although CMS has developed several documents that describe efforts to address fraud, the agency had not developed a risk-based antifraud strategy for Medicare because, as discussed earlier, it had not conducted a fraud risk assessment that would serve as a foundation for such a strategy. In 2016, CPI identified five strategic objectives for program integrity, which include antifraud elements and an emphasis on prevention. However, according to CMS officials, these objectives were identified from discussions with CMS leadership and various stakeholders, not through a fraud risk assessment process to identify inherent fraud risks from the universe of potential vulnerabilities, as described earlier and called for in the leading practices. These strategic objectives were presented at an antifraud conference in 2016 but were not announced publicly until the release of the Annual Report to Congress on the Medicare and Medicaid Integrity Programs for Fiscal Year 2015 in June 2017.

Stakeholder relationships and communication. CMS has established relationships and communicated with stakeholders, but, without an antifraud strategy, the stakeholders we spoke with lacked a common understanding of CMS's strategic approach. Prior work on practices that can help federal agencies collaborate effectively calls for a strategy that is shared with stakeholders to promote trust and understanding.
Once an antifraud strategy is developed, the Fraud Risk Framework calls for managers to collaborate to ensure effective implementation. Although some CMS stakeholders were able to describe various CMS program-integrity priorities and activities, such as home health being a fraud risk priority, the stakeholders could not communicate, articulate, or cite a common CMS strategic approach to addressing fraud risks in its programs.

Incentives. The Fraud Risk Framework discusses creating incentives to help ensure effective implementation of the antifraud strategy once it is developed. Currently, some incentives within stakeholder relationships may complicate CMS's antifraud efforts. Among contractors, CMS encourages information sharing through conferences and workshops; however, competition for CMS business among contractors can be a disincentive to information sharing. CMS officials acknowledged this concern and said that they expect contractors to share information related to fraud schemes, outcomes of investigations, and tips for addressing fraud, but not proprietary information such as algorithms to risk-score providers.

Without developing and documenting an antifraud strategy based on a fraud risk assessment, as called for in the design and implement component of the Fraud Risk Framework, CMS cannot ensure that it has a coordinated approach to address the range of fraud risks and to appropriately target and allocate resources for the most-significant risks. Considering the fraud risks to which Medicare is most vulnerable, in light of the malicious intent of those who aim to exploit the programs, would help CMS to examine its current control activities and potentially design new ones with recognition of the fraudulent behavior it aims to prevent. This focus on fraud is distinct from a broader view of program integrity and improper payments because it considers the intentions and incentives of those who aim to deceive rather than well-intentioned providers who make mistakes.
Also, continued growth of the program, such as growth of Medicare Part C, calls for consideration of preventive fraud control activities across the entire network of entities involved. Furthermore, considering the large size and complexity of Medicare and the extensive stakeholder network involved in managing fraud in the program, a strategic approach to managing fraud risks is essential to ensure that the many existing control activities and numerous stakeholder relationships and incentives are aligned to produce desired results. Once developed, an antifraud strategy that is clearly articulated to various CMS stakeholders would help CMS to address fraud risks in a more coordinated and deliberate fashion. Thinking strategically about existing control activities, resources, tools, and information systems could help CMS to leverage resources while continuing to integrate Medicare program-integrity efforts along functional lines. A strategic approach grounded in a comprehensive assessment of fraud risks could also help CMS to identify future enhancements for existing control activities, such as new preventive capabilities for its Fraud Prevention System (FPS) or additional fraud factors in provider enrollment and revalidation, such as provider risk-scoring, to stay in step with evolving fraud risks.

CMS Has Established Monitoring and Evaluation Mechanisms That Could Inform a Risk-Based Antifraud Strategy for Medicare

The evaluate and adapt component of the Fraud Risk Framework calls for federal managers to evaluate outcomes using a risk-based approach and adapt activities to improve fraud risk management. Furthermore, according to federal internal control standards, managers should establish and operate monitoring activities to monitor the internal control system and evaluate the results, which may be compared against an established baseline.
Ongoing monitoring and periodic evaluations provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. CMS has established monitoring and evaluation mechanisms for its program-integrity activities that it could incorporate into an antifraud strategy. As described in the Fraud Risk Framework, agencies can gather information on the short-term or intermediate outcomes of some antifraud initiatives, which may be more readily measured. For example, CMS has developed some performance measures to provide a basis for monitoring its progress towards meeting the program-integrity goals set in the HHS Strategic Plan and Annual Performance Plan. Specifically, CMS measures whether it is meeting its goal of “increasing the percentage of Medicare FFS providers and suppliers identified as high risk that receive an administrative action.” CMS does not set specific antifraud goals for other parts of Medicare; other CMS performance measures relate to measuring or reducing improper payments in the various parts of Medicare. CMS uses return-on-investment and savings estimates to measure the effectiveness of its Medicare program-integrity activities and FPS. For example, CMS uses return-on-investment to measure the effectiveness of FPS and, in response to a recommendation we made in 2012, CMS developed outcome-based performance targets and milestones for FPS. CMS has also conducted individual evaluations of its program-integrity activities, such as an interim evaluation of the prior-authorization demonstration for power mobility devices that began in 2012 and is currently implemented in 19 states. Commensurate with greater maturity of control activities in Medicare FFS compared to other parts of Medicare and Medicaid, monitoring and evaluation activities for Medicare Parts C and D and Medicaid are more limited. 
For example, CMS calculates savings for its program-integrity activities in Medicare Parts C and D, but not a full return-on-investment. CMS officials told us that calculating costs for specific activities is challenging because of overlapping activities among contractors. CMS officials said they continue to refine methods and develop new savings estimates for additional program-integrity activities. According to the Fraud Risk Framework, effective managers develop a strategy and evaluate outcomes using a risk-based approach. In developing an effective strategy and antifraud activities, managers consider the benefits and costs of control activities. Ongoing monitoring and periodic evaluations provide reasonable assurance to managers that they are effectively preventing, detecting, and responding to potential fraud. Monitoring and evaluation activities can also support managers' decisions about allocating resources, and help them to demonstrate their continued commitment to effectively managing fraud risks. As CMS takes steps to develop an antifraud strategy, it could include plans for refining and building on existing methods such as return-on-investment or savings measures, and setting appropriate targets to evaluate the effectiveness of all of CMS's antifraud efforts. Such a strategy would help CMS to efficiently allocate program-integrity resources and to ensure that the agency is effectively preventing, detecting, and responding to potential fraud. For example, while doing so would involve challenges, CMS's strategy could detail plans to advance efforts to measure a potential fraud rate through baseline and periodic measures. Fraud-rate measurement efforts could also inform risk assessment activities, identify currently unknown fraud risks, align resources to priority risks, and develop effective outcome metrics for antifraud controls.
Such a strategy would also help CMS ensure that it has effective performance measures in place to assess its antifraud efforts beyond those related to providers in Medicare FFS and establish appropriate targets to measure the agency's progress in addressing fraud risks. In our December 2017 report, we recommended that the Administrator of CMS, using the results of the fraud risk assessments for Medicare, create, document, implement, and communicate an antifraud strategy that is aligned with and responsive to regularly assessed fraud risks. This strategy should include an approach for monitoring and evaluation. In its March 2018 letter to GAO, HHS stated that it is currently evaluating its options with regard to implementing this recommendation. As of July 2018, the recommendation remains open.

Chairman Jenkins and Ranking Member Lewis, this concludes my prepared statement. I look forward to the subcommittee's questions.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions concerning this testimony, please contact Seto J. Bagdoyan, who may be reached at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Tonita Gillich (Assistant Director), Irina Carnevale (Analyst-in-Charge), Colin Fallon, Scott Hiromoto, and Maria McMullen.

Related GAO Reports

Improper Payments: Actions and Guidance Could Help Address Issues and Inconsistencies in Estimation Processes. GAO-18-377. Washington, D.C.: May 31, 2018.

Medicare: CMS Should Take Actions to Continue Prior Authorization Efforts to Reduce Spending. GAO-18-341. Washington, D.C.: April 20, 2018.

Medicare and Medicaid: CMS Needs to Fully Align Its Antifraud Efforts with the Fraud Risk Framework. GAO-18-88. Washington, D.C.: December 5, 2017.
Medicare: CMS Fraud Prevention System Uses Claims Analysis to Address Fraud. GAO-17-710. Washington, D.C.: August 30, 2017.

Medicare Advantage Program Integrity: CMS's Efforts to Ensure Proper Payments and Identify and Recover Improper Payments. GAO-17-761T. Washington, D.C.: July 19, 2017.

Medicare Provider Education: Oversight of Efforts to Reduce Improper Billing Needs Improvement. GAO-17-290. Washington, D.C.: March 10, 2017.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.

Medicare Advantage: Limited Progress Made to Validate Encounter Data Used to Ensure Proper Payments. GAO-17-223. Washington, D.C.: January 17, 2017.

Medicare: Initial Results of Revised Process to Screen Providers and Suppliers, and Need for Objectives and Performance Measures. GAO-17-42. Washington, D.C.: November 15, 2016.

Medicare: Claim Review Programs Could Be Improved with Additional Prepayment Reviews and Better Data. GAO-16-394. Washington, D.C.: April 13, 2016.

Medicare Advantage: Fundamental Improvements Needed in CMS's Effort to Recover Substantial Amounts of Improper Payments. GAO-16-76. Washington, D.C.: April 8, 2016.

Health Care Fraud: Information on Most Common Schemes and the Likely Effect of Smart Cards. GAO-16-216. Washington, D.C.: January 22, 2016.

A Framework for Managing Fraud Risks in Federal Programs. GAO-15-593SP. Washington, D.C.: July 28, 2015.

Medicare Program Integrity: Increased Oversight and Guidance Could Improve Effectiveness and Efficiency of Postpayment Claims Reviews. GAO-14-474. Washington, D.C.: July 18, 2014.

Medicare Fraud Prevention: CMS Has Implemented a Predictive Analytics System, but Needs to Define Measures to Determine Its Effectiveness. GAO-13-104. Washington, D.C.: October 15, 2012.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Medicare covered over 58 million people in 2017 and has wide-ranging impact on the health-care sector and the overall U.S. economy. However, the billions of dollars in Medicare outlays as well as program complexity make it susceptible to improper payments, including fraud. Although there are no reliable estimates of fraud in Medicare, in fiscal year 2017 improper payments for Medicare were estimated at about $52 billion. Further, about $1.4 billion was returned to Medicare Trust Funds in fiscal year 2017 as a result of recoveries, fines, and asset forfeitures.

In December 2017, GAO issued a report examining how CMS managed its fraud risks overall and particularly the extent to which its efforts in the Medicare and Medicaid programs aligned with GAO's Framework. This testimony, based on that report, discusses the extent to which CMS's management of fraud risks in Medicare aligns with the Framework. For the report, GAO reviewed CMS policies and interviewed officials and external stakeholders.

What GAO Found

In its December 2017 report, GAO found that the Centers for Medicare & Medicaid Services' (CMS) antifraud efforts for Medicare partially align with GAO's 2015 A Framework for Managing Fraud Risks in Federal Programs (Framework). The Fraud Reduction and Data Analytics Act of 2015 required OMB to incorporate leading practices identified in this Framework in its guidance to agencies on addressing fraud risks. Consistent with the Framework, GAO determined that CMS had demonstrated commitment to combating fraud by creating a dedicated entity to lead antifraud efforts; the Center for Program Integrity (CPI) serves as this entity for fraud, waste, and abuse issues in Medicare. CMS also promoted an antifraud culture by, for example, coordinating with internal stakeholders to incorporate antifraud features into new program design.
To increase awareness of fraud risks in Medicare, CMS offered and required training for stakeholder groups such as providers of medical services, but it did not offer or require similar fraud-awareness training for most of its workforce. CMS took some steps to identify fraud risks in Medicare; however, it had not conducted a fraud risk assessment or designed and implemented a risk-based antifraud strategy for Medicare as defined in the Framework. CMS identified fraud risks through control activities that target areas the agency designated as higher risk within Medicare, including specific provider types, such as home health agencies. Building on earlier steps and conducting a fraud risk assessment, consistent with the Framework, would provide the detailed information and insights needed to create a fraud risk profile, which, in turn, is the basis for creating an antifraud strategy.

CMS established monitoring and evaluation mechanisms for its program-integrity control activities that, if aligned with an antifraud strategy, could enhance the effectiveness of fraud risk management in Medicare. For example, CMS used return-on-investment and savings estimates to measure the effectiveness of its Medicare program-integrity activities. In developing an antifraud strategy, consistent with the Framework, CMS could include plans for refining and building on existing methods, such as return-on-investment, to evaluate the effectiveness of all of its antifraud efforts.

What GAO Recommends

In its December 2017 report, GAO made three recommendations, namely that CMS (1) require and provide fraud-awareness training to its employees; (2) conduct fraud risk assessments; and (3) create an antifraud strategy for Medicare, including an approach for evaluation. The Department of Health and Human Services agreed with these recommendations and reportedly is evaluating options to implement them. Accordingly, the recommendations remain open.
Background

In November 2002, Congress passed and the President signed the Improper Payments Information Act of 2002 (IPIA), which was later amended by IPERA and the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA). The amended legislation requires executive branch agencies to (1) review all programs and activities and identify those that may be susceptible to significant improper payments (commonly referred to as a risk assessment), (2) publish improper payment estimates for those programs and activities that the agency identified as being susceptible to significant improper payments, (3) implement corrective actions to reduce improper payments and set reduction targets, and (4) report on the results of addressing the foregoing requirements.

In addition to the agencies' identifying programs and activities that are susceptible to significant improper payments, OMB designates as high priority the programs with the most egregious cases of improper payments. Specifically, under a provision added to IPIA by IPERIA, OMB is required to annually identify a list of high-priority federal programs in need of greater oversight and review. In general, for fiscal years 2014 through 2017, OMB implemented this requirement by designating high-priority programs based on a threshold of $750 million in estimated improper payments for a given fiscal year. OMB also plays a key role in implementing laws related to improper payment reporting. Specifically, OMB is directed by statute to provide guidance to federal agencies on estimating, reporting, reducing, and recovering improper payments.

IPERA also requires executive agencies' IGs to annually determine and report on whether their respective agencies complied with certain IPERA-related criteria. If an agency does not meet one or more of the six IPERA criteria for any of its programs or activities, the agency is considered noncompliant overall. The six criteria are as follows:

1. publish a report in the form and content required by OMB—typically an agency financial report (AFR) or a performance and accountability report (PAR)—for the most recent fiscal year, and post that report on the agency website;

2. conduct a program-specific risk assessment, if required, for each program or activity that conforms with IPIA as amended;

3. publish improper payment estimates for all programs and activities deemed susceptible to significant improper payments under the agency's risk assessments;

4. publish corrective action plans for those programs and activities assessed to be at risk for significant improper payments;

5. publish and meet annual reduction targets for all programs and activities assessed to be at risk for significant improper payments; and

6. report a gross improper payment rate of less than 10 percent for each program and activity for which an improper payment estimate was published.

Under IPERA, agencies reported by their IG as not in compliance with any of these criteria in a fiscal year are required to submit a plan to Congress describing the actions they will take to come into compliance; such plans shall include measurable milestones, the designation of senior accountable officials, and the establishment of accountability mechanisms to achieve compliance. OMB guidance states that agencies are required to submit these plans to Congress and OMB in the first year of reported noncompliance. When agency programs are reported as noncompliant for consecutive years, IPERA and OMB guidance require agencies and OMB to take additional actions. Specifically, an agency with a program reported as noncompliant for 3 or more consecutive years is required to submit to Congress, within 30 days of the IG's report, either (1) a reauthorization proposal for the program or (2) the proposed statutory changes necessary to bring the program or activity into compliance.
We previously recommended that when agencies determine that reauthorization or statutory changes are not necessary to bring the programs into compliance, the agencies should state so in their notifications to Congress. Effective starting with fiscal year 2018 reporting, OMB updated its guidance to instruct agencies with programs reported as noncompliant for 3 consecutive years to explain what the agency is doing to achieve compliance if a reauthorization proposal or proposed statutory change will not bring a program into compliance with IPERA. The updated guidance also instructs agencies with programs reported as noncompliant for 4 or more consecutive years to submit a report to Congress and OMB (within 30 days of the IG’s determination of noncompliance) detailing the activities taken and still being pursued to prevent and reduce improper payments. If agency programs are reported as noncompliant under IPERA for 2 consecutive years, and the Director of OMB determines that additional funding would help the agency come into compliance, the head of the agency must obligate additional funding in the amount determined by the Director to intensify compliance efforts. IPERA directs the agency to exercise any reprogramming or transfer authority that the agency may have to provide additional funding to meet the level determined by OMB and, if necessary, submit a request to Congress for additional reprogramming or transfer authority to meet the full level of funding determined by OMB. Table 1 summarizes agency and OMB requirements related to agency programs that are noncompliant under IPERA, as reported by their IGs. 
Over Half of the CFO Act Agencies Were Reported as Noncompliant under IPERA for Fiscal Years 2016 and 2017, and Consecutive Years of Noncompliance Continue for Certain Programs

Over Half of the Agencies Were Reported as Noncompliant for Fiscal Years 2016 and 2017

Seven years after the initial implementation of IPERA, over half of the 24 CFO Act agencies were reported as noncompliant by their IGs for fiscal years 2016 and 2017. Specifically, 13 agencies were reported as noncompliant with one or more IPERA criteria for fiscal year 2016, and 14 agencies were reported as noncompliant for fiscal year 2017 (see fig. 1). Nine of these agencies have been reported as noncompliant in one or more programs every year since IPERA was implemented in 2011 (see app. II for additional details on CFO Act agencies' compliance under IPERA for fiscal years 2011 through 2017, as reported by their IGs).

Although the number of agencies reported as noncompliant under IPERA has varied slightly since fiscal year 2011, the total instances of noncompliance for all six criteria substantially improved after fiscal year 2011, when IPERA was first implemented. As shown in figure 2, the total instances decreased from 38 instances (for 14 noncompliant agencies) for fiscal year 2011 to 26 instances (for 14 noncompliant agencies) for fiscal year 2017. Also, for fiscal year 2017, 7 of 14 agencies were reported as noncompliant for only one criterion per noncompliant program. Of these, 6 agencies—the Departments of Homeland Security (DHS), Education (Education), Commerce, and Transportation; the General Services Administration; and the Social Security Administration (SSA)—were only reported as noncompliant with the IPERA criterion that requires agencies to publish and meet reduction targets. In addition, the Department of the Treasury (Treasury) was only reported as noncompliant with the IPERA criterion that requires agencies to report improper payment rates below 10 percent.
Furthermore, the programs reported as noncompliant for fiscal year 2017 accounted for a significantly smaller portion of the total reported estimated improper payments as compared to the noncompliant programs for fiscal year 2015. Specifically, we previously reported that 52 noncompliant programs accounted for $132 billion (or about 96 percent) of the $137 billion total reported estimated improper payments for fiscal year 2015, whereas 58 noncompliant programs accounted for $80 billion (or about 57 percent) of the $141 billion total reported estimated improper payments for fiscal year 2017. Although improper payment estimates associated with noncompliant programs vary from year to year, this decrease (approximately $52 billion) was primarily due to two programs. Specifically, the Department of Health and Human Services' (HHS) Medicare Fee-for-Service (Parts A and B) and Medicare Part C programs were reported as noncompliant and accounted for approximately $43 billion and $14 billion, respectively, of estimated improper payments for fiscal year 2015. These programs were reported as compliant for fiscal year 2017 and accounted for approximately $36 billion and $14 billion, respectively, or about 36 percent of the $141 billion total reported improper payments for fiscal year 2017.

Certain Programs Continue to Be Reported as Noncompliant for Consecutive Years

Almost a third (18 programs) of the 58 programs that contributed to 14 CFO Act agencies' noncompliance under IPERA, as of fiscal year 2017, were reported as noncompliant for 3 or more consecutive years. The number of programs noncompliant for 3 or more consecutive years has continually increased since fiscal year 2015, as shown in figure 3.
Specifically, 12 programs (associated with 7 agencies) were reported as noncompliant for 3 or more consecutive years, as of fiscal year 2015, and the number increased to 14 programs (associated with 8 agencies) and 18 programs (associated with 9 agencies), as of fiscal years 2016 and 2017, respectively. These programs accounted for a substantial portion of the $141 billion total estimated improper payments for fiscal year 2017. As shown in table 2, 14 of the 18 programs that were reported as noncompliant for 3 or more consecutive years reported improper payment estimates that accounted for an estimated $74.4 billion (about 53 percent) of the $141 billion, while the other 4 programs did not report improper payment estimates for fiscal year 2017 and were reported by their respective IGs as noncompliant with the IPERA criterion to publish improper payment estimates. The $74.4 billion is primarily composed of estimates reported for 2 noncompliant programs—HHS’s Medicaid program ($36.7 billion) and Treasury’s Earned Income Tax Credit program ($16.2 billion)— totaling $52.9 billion (or approximately 71 percent of the $74.4 billion). Improper payments associated with these two noncompliant programs are also a central part of two areas included in our 2017 High-Risk List, which includes federal programs and operations that are especially vulnerable to waste, fraud, abuse, and mismanagement, or that need transformative change. Eight of the 18 noncompliant programs have been reported as noncompliant since the implementation of IPERA in fiscal year 2011, for a total of 7 consecutive years, as shown in table 2. Reported compliance for Treasury’s Earned Income Tax Credit improved from being reported as noncompliant with multiple IPERA criteria in fiscal year 2013 to noncompliance with only one criterion for the last 4 years (fiscal years 2014 through 2017). 
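The percentage shares cited above follow from simple arithmetic on the reported dollar figures; the quick sketch below (all amounts in billions, taken directly from the text, with no new data) reproduces the two shares:

```python
# Reported fiscal year 2017 improper payment figures (dollars in billions),
# as cited in the report text above.
total_fy2017 = 141.0        # total reported estimated improper payments
noncompliant_3yr = 74.4     # programs noncompliant for 3 or more consecutive years
medicaid = 36.7             # HHS Medicaid estimate
eitc = 16.2                 # Treasury Earned Income Tax Credit estimate

share_of_total = noncompliant_3yr / total_fy2017 * 100
share_of_subset = (medicaid + eitc) / noncompliant_3yr * 100

print(f"3+ year noncompliant share of the total: {share_of_total:.0f}%")   # about 53 percent
print(f"Medicaid + EITC share of $74.4 billion:  {share_of_subset:.0f}%")  # about 71 percent
```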
CFO Act Agencies Did Not Always Notify Congress, and They Provided Varying Information on Programs Reported as Noncompliant for 3 or More Consecutive Years

Eight CFO Act agencies' programs were reported as noncompliant under IPERA for 3 or more consecutive years, as of fiscal year 2016. Three of these agencies did not notify Congress of their programs' continued noncompliance as required. In addition to submitting the required notifications for their noncompliant programs, the other five agencies also included additional information in their notifications—such as measurable milestones, designation of senior officials, and accountability mechanisms—useful for assessing their efforts to achieve compliance. In June 2018, OMB updated its guidance to clarify agency reporting requirements for each consecutive year a program is reported as noncompliant. However, OMB's updated guidance did not direct agencies to include other types of quality information in their notifications for programs reported as noncompliant for 3 or more consecutive years that could help Congress to more effectively assess their efforts to address long-standing challenges and other issues affecting these programs and to achieve compliance.

CFO Act Agencies with Programs Reported as Noncompliant for 3 or More Consecutive Years Did Not Always Notify Congress

Of the eight agencies with programs reported as noncompliant under IPERA for 3 or more consecutive years as of fiscal year 2016, we found that five agencies notified Congress of their noncompliance as required. Specifically, the Department of Defense (DOD), Education, HHS, DHS, and SSA notified Congress of their programs' reported noncompliance for 3 or more consecutive years as of fiscal year 2016, as required by IPERA and OMB guidance. The remaining three agencies—the U.S. Department of Agriculture (USDA), the Department of Labor (DOL), and Treasury—did not notify Congress as required.
Additional information regarding the three agencies that did not submit their required notifications to Congress is summarized below.

USDA: In May 2017, the USDA IG reported that four USDA Food and Nutrition Service programs—Child and Adult Care Food Program; National School Lunch Program; School Breakfast Program; and Special Supplemental Nutrition Program for Women, Infants, and Children—had been noncompliant for 6 consecutive years, as of fiscal year 2016. However, USDA has not notified Congress of these programs' continued noncompliance with IPERA as of fiscal year 2016, despite prior recommendations that we, and the USDA IG, made to USDA to do so. USDA staff stated in May 2018 that USDA drafted, but had not submitted, a letter to Congress regarding these programs' noncompliance.

DOL: In June 2017, the DOL IG reported that the Unemployment Insurance Benefit program had been noncompliant for 6 consecutive years, as of fiscal year 2016. In October 2016, DOL included proposed legislation in its last notification to Congress regarding this program, approximately 8 months prior to the DOL IG's IPERA compliance report. However, because the requirement for agencies to notify Congress is triggered by IG reporting of programs that are noncompliant for 3 or more consecutive years, DOL should have also notified Congress regarding the program's continued noncompliance in fiscal year 2016 after the IG's report was issued in June 2017. DOL staff stated in August 2018 that the proposed legislation included in its October 2016 notification had not been enacted and that DOL is currently working to develop a new report to Congress and OMB detailing corrective actions taken to bring the program into compliance.

Treasury: In May 2017, the Treasury IG reported that the Earned Income Tax Credit (EITC) program had been noncompliant for 6 consecutive years, as of fiscal year 2016.
We previously reported that Treasury submitted proposed statutory changes to Congress for this program in August 2014 and in June 2015. As stated in the Treasury IG’s fiscal year 2016 IPERA compliance report, the proposed statutory changes would help prevent the improper issuance of billions of dollars in refunds as it would provide the Internal Revenue Service (IRS) with expanded authority to systematically correct erroneous claims that are identified when tax returns are processed and allow IRS to deny erroneous EITC refund claims before they are paid. Further, Treasury stated that IRS has repeatedly requested authority to correct such errors in subsequent fiscal year budgets, including its fiscal year 2019 budget submission. In June 2018, Treasury staff stated that the Consolidated Appropriations Act, 2016 provided IRS with additional tools for reducing EITC improper payments; however, the act did not expand IRS’s authority to systematically correct the erroneous claims that are identified when tax returns are processed. Treasury staff also stated that the department has continued to coordinate with OMB on required reporting for the EITC program because of the program’s complexity, and that OMB has not requested additional actions or documentation regarding the program’s noncompliance. Although continued coordination with OMB is important, Treasury did not notify Congress regarding the EITC program’s continued noncompliance as required. In summary, despite reporting requirements in IPERA and OMB guidance, one agency (USDA) has not notified Congress about four programs being reported as noncompliant for 6 consecutive years, as of fiscal year 2016. The remaining two agencies (DOL and Treasury) that did not notify Congress of their programs’ consecutive noncompliance, as of fiscal year 2016, submitted notifications to Congress prior to their respective IGs’ fiscal year 2016 compliance results. 
However, IPERA requires agencies to notify Congress when programs are reported as noncompliant for more than 3 consecutive years, and thus DOL and Treasury should have also notified Congress about their programs' being reported as noncompliant for 6 consecutive years, as of fiscal year 2016. It is important that agencies continue to notify Congress of their programs' consecutive noncompliance each year after the third consecutive year, as the information related to their proposals or regarding their IPERA compliance efforts included in prior years' notifications to Congress may significantly change over time. Unless agencies continue to notify Congress in subsequent years, Congress may lack the current and relevant information needed to effectively assess agencies' proposals or monitor their efforts to address problematic programs in a timely manner. OMB updated its guidance in June 2018 to provide more clarity regarding the notification requirements for each consecutive year a program is reported as noncompliant. Effective implementation of this guidance may help ensure that agencies consistently provide required information to Congress on these programs in future years.

Certain Agencies Provided Additional Quality Information on IPERA Compliance Efforts in Their Notifications to Congress

We found that the five agencies—DOD, DHS, Education, HHS, and SSA—that notified Congress regarding their programs' reported noncompliance for 3 or more consecutive years, as of fiscal year 2016, also included additional information about their efforts to achieve IPERA compliance. Although IPERA does not specifically require that agency proposals for reauthorization or other statutory change provide such information, including it could help Congress to better assess the agencies' proposals included in these notifications and to oversee agency efforts to address long-standing challenges and compliance issues associated with these programs.
In many instances, the types of additional information provided by these agencies are similar to information that agencies are required to provide to Congress or OMB in other required notifications or other reports, such as annual AFRs or PARs. For example, all improper payment estimates reported under IPIA, as amended, must be accompanied by information on what the agency is doing to reduce improper payments, including a description of root causes and the steps the agency has taken to ensure accountability. Further, IPERA and OMB guidance require agencies to provide corrective action plans to Congress for programs reported as noncompliant for 1 year. Such plans should include actions planned or taken to address the program’s noncompliance, measurable milestones, a senior official designated to oversee progress, and the accountability mechanisms in place to hold the senior official accountable. In addition, GAO’s Standards for Internal Control in the Federal Government emphasizes the importance of communicating quality information, such as significant matters related to risks, changes, or issues affecting agencies’ efforts to achieve compliance objectives, to external parties—such as legislators, oversight bodies, and the general public. Furthermore, in our fiscal year 2017 High-Risk Update, we also highlight the importance of these types of information when assessing agency efforts to address issues associated with programs included on our High-Risk List. 
Examples of such information include (1) action plans that are accessible and transparent with clear milestones and metrics, including established goals and performance measures to address identified root causes; (2) leadership commitment of top (or senior) officials to establish long-term priorities and goals and continued oversight and accountability; (3) monitoring progress against goals, assessing program performance, or reporting potential risks; and (4) demonstrated progress, through recommendations implemented, actions taken for improvement, and effectively addressing identified root causes and managing high-risk issues. Table 3 summarizes the types of additional information described above that the five agencies provided in their fiscal year 2016 notifications to Congress to address programs with 3 or more consecutive years of noncompliance. All five agencies informed Congress of (1) root causes that directly lead to improper payments or hindered the program’s ability to achieve compliance; (2) certain risks, significant changes, or issues affecting their efforts; and (3) their corrective actions or strategies to achieve compliance. Three of the five agencies—DOD, Education, and DHS—also included the other types of additional information described above in their notifications, including measurable milestones, designated senior officials to oversee progress, and accountability mechanisms established to help achieve compliance. For example, all three agencies designated their chief financial officers (CFO) to oversee progress toward achieving measurable milestones and expanded their official roles and responsibilities to hold them accountable. Education and DHS stated that these responsibilities were added to their respective CFOs’ individual performance plans. 
Although OMB updated its guidance in June 2018 to clarify agency reporting requirements related to programs reported as noncompliant for 3 or more consecutive years, the updated guidance did not direct agencies to include other types of quality information in their notifications, such as those described above. In addition, information related to measurable milestones, corrective actions, risks, issues, or other items affecting agencies' efforts may change significantly over time. With this additional information, Congress could have more complete information to effectively oversee agency efforts to address long-standing challenges and other issues that have contributed to programs being reported as noncompliant for 3 or more consecutive years.

OMB Updated Guidance for Determining Additional Funding Needs for Programs Reported as Noncompliant for 2 Consecutive Years

Fifteen programs in seven agencies and 12 programs in six agencies were reported as noncompliant for 2 consecutive years as of fiscal years 2016 and 2017, respectively. For agencies reported as noncompliant under IPERA for 2 consecutive years for the same program, IPERA gives the Director of OMB the authority to determine whether additional funding would help the agencies come into compliance. If the OMB Director determines that such funding would help, the agency is required to use any available reprogramming or transfer authority to meet the funding level that the OMB Director specified and, if such authorities are not sufficient, submit a request to Congress for additional reprogramming or transfer authority. According to OMB staff, OMB determined that no additional funding was needed for programs reported as noncompliant for 2 consecutive years as of fiscal year 2016.
As of September 2018, OMB was in the process of making funding determinations for 12 programs that were reported as noncompliant as of fiscal year 2017 and stated that any determinations made would be developed in the President's Budget for fiscal year 2020. The 12 programs reported as noncompliant for 2 consecutive years, as of fiscal year 2017, accounted for approximately $3 billion (2 percent) of the $141 billion total improper payment estimate for that year. Of these 12 programs, more than half (7 of the 12) were attributable to DOD; however, Education's Pell Grant program accounted for $2.2 billion (or 74 percent) of the $3 billion in improper payment estimates for programs reported as noncompliant for 2 consecutive years for fiscal year 2017. In addition, as shown in table 4, the 12 programs reported as noncompliant for 2 consecutive years, as of fiscal year 2017, were primarily noncompliant with the IPERA criteria that required agencies to publish information in their PAR or AFR or publish and meet reduction targets.

As noted previously, IPERA gives OMB authority to determine whether additional funding for intensified compliance efforts would help the agency come into compliance under IPERA. Therefore, an established process for making timely, well-informed funding determinations is an essential part of ensuring that agencies have sufficient resources and take steps to intensify their compliance efforts in a timely manner. In April 2018, OMB staff stated that when making funding determinations, they primarily rely on the IGs' recommendations in their annual IPERA compliance reports. OMB staff also stated that for its fiscal year 2016 determinations, OMB determined that additional funding was not needed because the IGs' recommendations did not specify that additional funding was needed to help resolve the programs' noncompliance.
The IGs’ annual reports provide information on agencies’ IPERA compliance and may be useful to OMB as a tool to help them make determinations for additional funding. However, IPERA does not require IGs to address funding levels in their annual compliance reports, and OMB’s guidance does not inform the IGs that their work might be relied upon in this manner. We reviewed the IGs’ fiscal years 2016 and 2017 IPERA compliance reports for the agencies with programs reported as noncompliant for 2 consecutive years and found that the IGs did not make any recommendations regarding additional funding needed to bring these programs into compliance. In addition, as specifically stated by the IGs for Education and USDA in their IPERA reports, OMB has the statutory responsibility to make these funding determinations. Education IG’s fiscal year 2017 IPERA compliance report stated that “If OMB recommends that the Department needs additional funding or should take any other actions to become compliant with IPERA, we recommend that the Department implement OMB’s recommendations.” Also, the USDA IG’s fiscal year 2016 IPERA compliance report stated, “For agencies that are not compliant for 2 consecutive years for the same program, the Director of OMB will determine if additional funding would help these programs come into compliance.” As a result, OMB’s reliance on IG recommendations as the source of information to support additional funding determinations may not provide sufficient information to effectively assess agencies’ funding needs to address noncompliance. OMB staff subsequently stated that they no longer need to conduct a detailed review of the IGs’ IPERA compliance reports to identify recommendations related to additional funding needs. Instead, OMB Memorandum M-18-20, issued in June 2018, updated OMB Circular No. 
A-123, Appendix C, and clarified that the funding determination process will unfold as part of the annual development of the President’s Budget, as described in OMB Circular No. A-11. This updated guidance also directs agencies to submit proposals to OMB regarding additional funding needs that may help them address IPERA noncompliance. To illustrate, under this new guidance, the IGs’ fiscal year 2018 IPERA compliance reports will be due in May 2019, and any funding needs to address noncompliance would be incorporated in the next annual budget preparation process, the results of which are due to be submitted to Congress in February 2020 for the President’s Budget for fiscal year 2021. Once OMB’s determinations have been made and communicated to agencies, agencies would respond by performing the required reprogramming and making transfers under existing authority, where available. Any requests for additional transfer authority may be incorporated into subsequent appropriations legislation.

Conclusions

Estimated improper payments reported government-wide total almost $1.4 trillion from fiscal year 2003 through fiscal year 2017. The number of programs reported as noncompliant under IPERA for 3 or more consecutive years has continued to increase, from 12 programs (associated with 7 agencies) to 18 programs (associated with 9 agencies) as of fiscal years 2015 and 2017, respectively. Including additional useful, up-to-date information—such as measurable milestones, risks, or other issues affecting agency efforts to achieve compliance—in notifications to Congress, which are required when programs are reported as noncompliant for 3 or more consecutive years, could help Congress better assess agency efforts to address long-standing challenges and other issues associated with them.
Although certain agencies included certain types of additional information in their notifications as of fiscal year 2016, OMB guidance does not require agencies to include such information in their notifications. As a result, Congress may lack sufficient information to effectively oversee agency efforts and take prompt action to help address long-standing challenges or other issues associated with these programs.

Recommendation for Executive Action

The Director of OMB should take steps to update OMB guidance to specify other types of quality information that agencies with programs noncompliant for 3 or more consecutive years should include in their notifications to Congress, such as significant matters related to risks, issues, root causes, measurable milestones, designated senior officials, accountability mechanisms, and corrective actions or strategies planned or taken by agencies to achieve compliance. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to OMB and requested comments, and OMB said that it had no comments. We also provided a draft of this report to the 24 CFO Act agencies and their IGs and requested comments. We received letters from the DHS Office of Inspector General (OIG), SSA, and the United States Agency for International Development. These letters are reproduced in appendixes V through VII. We also received technical comments from DOL, the Department of Veterans Affairs, the General Services Administration, HHS, the Department of Housing and Urban Development, and the Treasury OIG, which we incorporated in the report as appropriate. The remaining agencies and OIGs either did not provide comments or notified us via email that they had no comments. In its comments, SSA stated that it provided information to Congress on measurable milestones, designated senior officials, and accountability mechanisms in its AFR.
In the report, we acknowledge that these types of additional information are similar to information that agencies are required to provide to Congress or OMB in other reports, such as annual AFRs. However, our analysis was based on SSA’s fiscal year 2016 notifications to Congress for programs reported as noncompliant under IPERA, in which this specific information was not reported. As such, we continue to believe that OMB should take steps to update OMB guidance to help ensure that agencies report such significant information and include it in their notifications to Congress. We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to determine the following:
1. The extent to which the 24 agencies listed in the Chief Financial Officers Act of 1990, as amended (CFO Act), complied with the six criteria listed in the Improper Payments Elimination and Recovery Act of 2010 (IPERA), for fiscal years 2016 and 2017, and the trends evident since 2011, as reported by their inspectors general (IG).
2. The extent to which CFO Act agencies addressed requirements for programs and activities reported as noncompliant with IPERA criteria for 3 or more consecutive years, as of fiscal year 2016, and communicated their strategies to Congress for reducing improper payments and achieving compliance.
3.
The extent to which the Office of Management and Budget (OMB) made determinations regarding whether additional funding would help CFO Act programs and activities reported as noncompliant with IPERA criteria for 2 consecutive years, as of fiscal years 2016 and 2017, come into compliance. Although the responsibility for complying with provisions of improper payment-related statutes rests with the head of each executive agency, we focused on the 24 agencies listed in the CFO Act because estimates of their improper payments represent over 99 percent of the total reported estimated improper payments for fiscal years 2016 and 2017. Our work did not include validating or retesting the data or methodologies that the IGs used to determine and report compliance. We corroborated all of our findings with OMB and all 24 CFO Act agencies and IGs. To address our first objective, we identified the requirements that agencies must meet by reviewing the Improper Payments Information Act of 2002 (IPIA), IPERA, and OMB guidance. We reviewed the CFO Act agency IGs’ IPERA compliance reports for fiscal years 2016 and 2017, which were the most current reports available at the time of our review. We summarized the overall agency and program-specific compliance determinations with the six IPERA criteria, as reported by the IGs. For fiscal years 2011 through 2015, we relied on and reviewed prior year supporting documentation and analyses of CFO Act agencies’ IPERA compliance, as reported in our prior reports, in order to identify compliance trends since 2011, as reported by the IGs. Based on these reports, we summarized the programs and the number of consecutive years that they were reported as noncompliant. 
For each IG report that did not specifically state that the agency had programs noncompliant for consecutive years, we compared the list of programs reported as noncompliant for fiscal years 2016 and 2017 to the list of programs reported as noncompliant for fiscal years 2014 and 2015 in our prior reports. Lastly, we corroborated our findings with OMB and all 24 CFO Act agencies and IGs. To address our second objective, we determined if the agencies responsible for programs and activities reported as noncompliant for 3 or more consecutive years as of fiscal year 2016 had submitted the required proposals (reauthorizations or statutory changes) to Congress by requesting and reviewing documentation of the required submissions and relevant notifications to Congress obtained from each applicable agency. Further, we reviewed the content of each agency notification to evaluate agencies’ efforts to communicate quality information to Congress concerning their strategies for achieving compliance consistent with Standards for Internal Control in the Federal Government. Principle 15 of these standards emphasizes the need for an entity’s management to communicate necessary quality information, such as significant matters related to risks, changes, or issues affecting agencies’ efforts to achieve compliance objectives, to external parties, such as legislators, oversight bodies, and the general public. To identify other types of information useful for this purpose, we reviewed IPIA, as amended; IPERA; and OMB guidance for information agencies are required to provide to Congress or OMB in other notifications and reports, such as their corrective action plans or strategies, measurable milestones, designated senior officials, and accountability mechanisms for achieving compliance. We also reviewed information used to assess agency efforts to address issues associated with programs on our High-Risk List.
To determine the extent to which agencies’ notifications to Congress included these additional types of useful information for their applicable program(s), we used a data collection instrument to document our determinations regarding the additional types of quality information included in each notification. In addition, two GAO analysts independently reviewed each agency’s notification and documented their determinations regarding the types of information included in the notifications. Differences between the analysts’ determinations were identified and resolved to ensure that the types of additional information were consistently identified and categorized. We did not evaluate the sufficiency and completeness of the agency-provided information. Lastly, we corroborated our findings with the respective agencies and IGs. To address our third objective, we identified provisions in IPIA, IPERA, and OMB guidance that are applicable to OMB for programs reported as noncompliant for 2 consecutive years. To determine if OMB made additional funding determinations for agency programs and activities reported as noncompliant for 2 consecutive years as of fiscal years 2016 and 2017, we requested relevant information and communications from OMB and the applicable agencies and IGs. We also interviewed key OMB staff on their process for determining additional funding needs for noncompliant programs and activities as of fiscal years 2016 and 2017 and related results. In addition, we reviewed the applicable fiscal years 2016 and 2017 CFO Act agency IG IPERA compliance reports, which OMB staff stated they relied on for determining whether noncompliant programs and activities required additional funding. We also asked the agencies whether they coordinated with OMB regarding their need for additional funding for programs and activities reported as noncompliant for 2 consecutive years as of fiscal years 2016 and 2017. 
Lastly, we corroborated our findings with OMB and the respective agencies and IGs. We conducted this performance audit from November 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: CFO Act Agencies’ Overall IPERA Compliance for Fiscal Years 2011 through 2017, as Reported or Acknowledged by Their IGs

Figure 4 details the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies’ overall compliance under the Improper Payments Elimination and Recovery Act of 2010 (IPERA), as reported by their inspectors general, for fiscal years 2011 through 2017. We previously reported on CFO Act agencies’ overall reported compliance for fiscal years 2011 through 2015.

Appendix III: CFO Act Agencies and Programs Reported as Noncompliant with IPERA for Fiscal Years 2016 and 2017

Tables 5 and 6 detail the Chief Financial Officers Act of 1990 (CFO Act) agencies and programs reported by their inspectors general as noncompliant with the six criteria specified by the Improper Payments Elimination and Recovery Act of 2010 (IPERA), for fiscal years 2016 and 2017. We previously reported on CFO Act agencies’ reported compliance with the six IPERA criteria for fiscal year 2015.

Appendix IV: CFO Act Agencies with Programs Reported by Their IGs as Noncompliant for 2 or More Consecutive Years, as of Fiscal Years 2016 and 2017

Table 7 details the Chief Financial Officers Act of 1990 (CFO Act) agencies and programs reported by their inspectors general as noncompliant under the Improper Payments Elimination and Recovery Act of 2010 (IPERA) for 2 or more consecutive years, as of fiscal years 2016 and 2017.
We previously reported on CFO Act agencies’ reported compliance for fiscal year 2015.

Appendix V: Comments from the Department of Homeland Security Office of Inspector General

Appendix VI: Comments from the Social Security Administration

Appendix VII: Comments from the United States Agency for International Development

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Michelle Philpott (Assistant Director), Matthew Valenta (Assistant Director), Vivian Ly (Auditor in Charge), Juvy Chaney, John Craig, Caitlin Cusati, Francine DelVecchio, Patrick Frey, Maria Hasan, Maxine Hattery, Jason Kelly, Jim Kernen, Jason Kirwan, Sharon Kittrell, Lisa Motley, Heena Patel, Anne Rhodes-Kline, and Kailey Schoenholtz made key contributions to this report.
Why GAO Did This Study

Government-wide estimated improper payments totaled almost $1.4 trillion from fiscal year 2003 through fiscal year 2017. IPERA requires IGs to annually assess and report on whether executive branch agencies complied with the six criteria to (1) publish an agency financial report or performance accountability report, (2) conduct program-specific improper payment risk assessments, (3) publish improper payment estimates, (4) publish corrective action plans, (5) publish and meet annual improper payment reduction targets, and (6) report a gross improper payment rate of less than 10 percent. This report examines the extent to which
1. CFO Act agencies complied with IPERA criteria for fiscal years 2016 and 2017, and the trends evident since 2011, as reported by their IGs;
2. CFO Act agencies addressed requirements for programs reported as noncompliant with IPERA criteria for 3 or more consecutive years, as of fiscal year 2016, and communicated their strategies to Congress for reducing improper payments and achieving compliance; and
3. OMB made determinations regarding whether additional funding would help CFO Act agency programs reported as noncompliant with IPERA criteria for 2 consecutive years, as of fiscal years 2016 and 2017, come into compliance.
GAO analyzed the IGs' fiscal years 2016 and 2017 IPERA compliance reports; reviewed prior GAO reports on agencies' IPERA compliance; reviewed agency information submitted to Congress; made inquiries to OMB, applicable agencies, and IGs; and assessed such information based on relevant IPERA provisions and OMB and other guidance.

What GAO Found

Over half of the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies were reported by their inspectors general (IG) as noncompliant with one or more criteria under the Improper Payments Elimination and Recovery Act of 2010 (IPERA) for fiscal years 2016 and 2017.
Nine CFO Act agencies have been reported as noncompliant in one or more programs every year since the implementation of IPERA in fiscal year 2011, totaling 7 consecutive years of noncompliance. The IGs of the 14 noncompliant agencies reported that a total of 58 programs were responsible for the identified instances of noncompliance in fiscal year 2017. Further, 18 of the 58 programs at 9 agencies were reported as noncompliant for 3 or more consecutive years. Fourteen of these 18 programs accounted for an estimated $74.4 billion of the $141 billion total estimated improper payments for fiscal year 2017; the other 4 programs did not report improper payment estimates. This sum may include estimates that are of unknown reliability. The $74.4 billion is primarily composed of estimates reported for two noncompliant programs, the Department of Health and Human Services' Medicaid program and the Department of the Treasury's (Treasury) Earned Income Tax Credit program; estimated improper payments for these two programs are also a central part of certain high-risk areas in GAO's 2017 High-Risk List. Agencies with any program reported as noncompliant for 3 or more consecutive years are required to notify Congress of their program's consecutive noncompliance and submit a proposal for reauthorization or statutory change to bring that program into compliance. GAO found that three agencies with one or more programs reported as noncompliant for 3 or more consecutive years, as of fiscal year 2016, did not notify Congress or submit the required proposals. The Departments of Labor and the Treasury submitted proposed legislative changes in response to their programs being previously reported as noncompliant, but did not notify Congress of the programs' continued noncompliance as of fiscal year 2016. The U.S. Department of Agriculture (USDA) has not notified Congress despite prior GAO and USDA IG recommendations to do so. 
To address these issues, in June 2018 the Office of Management and Budget (OMB) updated its guidance to clarify the notification requirements for each consecutive year a program is reported as noncompliant. GAO found that five agencies did notify Congress as required, and included additional quality information that is not specifically required, but could be useful in updating Congress on their compliance efforts. For example, all five agencies provided information on the root causes, risks, changes, or issues affecting their efforts and corrective actions or strategies to address them; three agencies provided other quality information on accountability mechanisms, designated senior officials, and measurable milestones. In June 2018, OMB updated its guidance to clarify agency reporting requirements for programs reported as noncompliant for 3 or more consecutive years. However, the updated guidance does not direct agencies to include the types of quality information included in these five agencies' notifications for fiscal year 2016. GAO's Standards for Internal Control in the Federal Government emphasizes the importance of communicating quality information, such as significant matters affecting agencies' efforts to achieve compliance objectives. Such information could be useful in understanding the current challenges of these programs and is essential for assessing agency efforts to address high-risk and other issues. As a result, Congress could have more complete information to effectively oversee agency efforts to address program noncompliance for 3 or more consecutive years. When programs are reported as noncompliant for 2 consecutive years, IPERA gives OMB authority to determine whether additional funding is needed to help resolve the noncompliance. 
In April 2018, OMB staff stated that they determined that no additional funding was needed for the 15 programs that were reported as noncompliant for 2 consecutive years, as of fiscal year 2016, and that they primarily rely on the IGs' recommendations in their annual IPERA compliance reports when making funding determinations. OMB staff subsequently stated that they no longer need to conduct a detailed review of the IGs' IPERA compliance reports to identify recommendations related to additional funding needs. Instead, OMB updated its guidance in June 2018 to direct agencies to submit proposals to OMB regarding additional funding needs to help address IPERA noncompliance and clarified that the funding determination process will unfold as part of the annual development of the President's Budget. As of September 2018, OMB was in the process of making funding determinations for 12 programs that were reported as noncompliant as of fiscal year 2017 and stated that any determinations made would be developed in the President's Budget for fiscal year 2020.

What GAO Recommends

GAO recommends that OMB update its guidance to specify other types of quality information that agencies with programs noncompliant for 3 or more consecutive years should include in their notifications to Congress, such as significant matters related to risks, issues, root causes, measurable milestones, designated senior officials, accountability mechanisms, and corrective actions or strategies planned or taken by agencies to achieve compliance. GAO provided a draft of this report to OMB and requested comments, and OMB said that it had no comments. GAO also provided a draft of this report to the 24 CFO Act agencies and their IGs and requested comments. In its written comments, the Social Security Administration (SSA) stated that it provided information on measurable milestones, designated senior officials, and accountability mechanisms in its agency financial report.
However, SSA did not provide this information in its notifications to Congress for programs reported as noncompliant under IPERA as of fiscal year 2016. GAO believes that OMB should take steps to update OMB's guidance to help ensure that agencies report such significant information and include it in their notifications to Congress. In addition, several agencies and IGs provided technical comments, which were incorporated in the report as appropriate.
GAO-18-212T
Background

An amphibious force comprises (1) an amphibious task force and (2) a landing force, together with other forces that are trained, organized, and equipped for amphibious operations. The amphibious task force is a group of Navy amphibious ships, most frequently deployed as an Amphibious Ready Group (ARG). The landing force is a Marine Air-Ground Task Force—which includes certain elements, such as command, aviation, ground, and logistics—embarked aboard the Navy amphibious ships. A Marine Expeditionary Unit (MEU) is the most commonly deployed Marine Air-Ground Task Force. Together, this amphibious force is referred to as an ARG-MEU. An ARG consists of a minimum of three amphibious ships, typically an amphibious assault ship, an amphibious transport dock ship, and an amphibious dock landing ship. Navy ships train to a list of mission-essential tasks that are assigned based on the ship’s required operational capabilities and projected operational environments. Most surface combatants, including cruisers, destroyers, and all amphibious ships, have mission-essential tasks related to amphibious operations. Figure 1 shows the current number of amphibious ships by class and a description of their capabilities. An MEU consists of around 2,000 Marines, their aircraft, their landing craft, their combat equipment, and about 15 days’ worth of supplies. The MEU includes a standing command element; a ground element consisting of a battalion landing team; an aviation element consisting of a composite aviation squadron of multiple types of aircraft; and a logistics element consisting of a combat logistics battalion. Marine Corps units also train to accomplish a set of mission-essential tasks for the designed capabilities of the unit. Many Marine Corps units within the command, aviation, ground, and logistics elements have an amphibious-related mission-essential task.
To be certified in the mission-essential task of amphibious operations, Marine Corps units must train to a standard that may require the use of amphibious ships. The Marine Corps’ use of virtual training devices has increased over time, and advances in technology have resulted in the acquisition of simulators and simulations with additional capabilities designed to help individual Marines and units acquire and refine skills through more concentrated and repetitive training. For example, the Marine Corps utilizes a constructive simulation that provides commanders with training for amphibious operations, among other missions. The Marine Corps has introduced other virtual training devices to prepare Marines for operational conditions and for emerging threats, such as devices to replicate a variety of vehicles for driver training and egress trainers, among others. The Navy stated it does not utilize virtual training devices that simulate amphibious operations, including ship-to-shore movement.

Navy and Marine Corps Units Completed Training for Certain Amphibious Operations Priorities but Not Others Due to Several Factors

In our September 2017 report, we found that Navy and Marine Corps units deploying as part of ARG-MEUs completed required training for amphibious operations, but the Marine Corps has been unable to consistently accomplish training for other service amphibious operations priorities. Specifically, based on our review of deployment certification messages from 2014 through 2016, we found that each deploying Navy ARG completed training for the amphibious operations mission in accordance with training standards. Similarly, we found that each MEU completed all of its mission-essential tasks that are required during the predeployment training program. These mission-essential tasks cover areas such as amphibious raid, amphibious assault, and noncombatant evacuation operations, among other operations.
However, we also reported that based on our review of unit-level readiness data from fiscal year 2014 through 2016, Marine Corps units were unable to fully accomplish training for other amphibious operations priorities. These shortfalls include home-station unit training to support contingency requirements, service-level exercises, and experimentation and concept development for amphibious operations. For example, Marine Corps officials cited shortfalls in their ability to conduct service-level exercises that train individuals and units on amphibious operations-related skills, as well as provide opportunities to conduct experimentation and concept development for amphibious operations. In our September 2017 report, we identified several factors that created shortfalls in training for amphibious operations priorities. Based on our analysis of interviews with 23 Marine Corps units, we found that all 23 units cited the lack of available amphibious ships as the primary factor limiting training for home-station units. The Navy’s fleet of amphibious ships has declined by half in the last 25 years, from 62 in 1990 to 31 today, with current shipbuilding plans calling for four additional amphibious ships to be added by fiscal year 2024, increasing the total number of amphibious ships to 35 (see fig. 2). Marine Corps officials from the 23 units we interviewed also cited other factors that limit opportunities for amphibious operations training, including the following:

Access to range space. Seventeen of 23 Marine Corps units we interviewed identified access to range space as a factor that can limit their ability to conduct amphibious operations training. Unit officials told us that priority for training resources, including range access, is given to units that will be part of a MEU deployment, leaving little range time available for other units.

Maintenance delays, bad weather, and transit time.
Ten of 23 Marine Corps units told us that changes to an amphibious ship’s schedule resulting from maintenance overruns or bad weather have also reduced the time available for a ship to be used for training. The transit time a ship needs to reach Marine Corps units has further reduced the time available for training.

High pace of deployments. Five of 23 Marine Corps units told us that the high pace of deployments and need to prepare for upcoming deployments limited their opportunity to conduct training for amphibious operations.

The Navy and Marine Corps Have Taken Some Steps to Identify and Address Amphibious Training Shortfalls, but These Efforts Are Incomplete

Services’ Approach Does Not Incorporate Strategic Training and Leading Risk Management Practices

In our September 2017 report, we identified some steps that the Navy and Marine Corps have taken to mitigate the training shortfall for their amphibious operations priorities, such as by better defining the amount of amphibious operations capabilities and capacity needed to achieve the services’ wartime requirements. However, we found these efforts are incomplete because the services’ current approach for amphibious operations training does not incorporate strategic training and leading risk-management practices. Specifically, we found that:

The Marine Corps does not prioritize all available training resources. For Marine Corps units not scheduled for a MEU deployment, officials described an ad hoc process to allocate any remaining available amphibious ship training time among home-station units. Specifically, officials stated that the current process identifies units that are available for training when an amphibious ship becomes available rather than a process that aligns the next highest-priority units for training with available amphibious ships.

The Navy and Marine Corps do not systematically evaluate a full range of training resource alternatives to achieve amphibious operations priorities.
Given the limited availability of amphibious ships for training, the Navy and Marine Corps have not systematically incorporated selected training resource alternatives into home-station training plans. During our review, we identified a number of alternatives that could help mitigate the risk to the services’ amphibious capability due to limited training opportunities. These alternatives could include utilizing additional training opportunities during an amphibious ship’s basic phase of training; using alternative platforms for training, such as Marine Prepositioning Force ships; utilizing smaller Navy craft or pier-side ships to meet training requirements; and leveraging developmental and operational test events.

The Navy and Marine Corps have not developed a process or set of metrics to monitor progress toward achieving their amphibious operations training priorities and mitigating existing shortfalls. Current reporting systems do not allow officials to assess the services’ progress in achieving amphibious operations priorities or to monitor efforts to establish comprehensive amphibious operations training programs. For example, we found that the Marine Corps does not capture complete data on the full demand for training time with Navy amphibious ships that could be used for such assessments.

In our September 2017 report, we recommended that the Navy and Marine Corps develop an approach to prioritize available training resources, systematically evaluate training resource alternatives to achieve amphibious operations priorities, and monitor progress toward achieving them.
The Marine Corps Has Not Fully Integrated Virtual Training Devices into Operational Training

While the Marine Corps has stated that the use of virtual training could help mitigate some of the limitations of training in a live-only environment and taken some steps to integrate these devices into operational training, we identified gaps in its process to develop and use them. Specifically, based on our review of a selection of 6 virtual training devices, we found weaknesses in three key areas:

Front-end planning. The Marine Corps’ process for conducting front-end planning and analysis to support the acquisition of its virtual training devices does not include consideration of critical factors for integrating virtual training devices into operational training, such as the specific training tasks the device is intended to address, how the device would be used to meet proficiency goals, or available time for units to train with the device. As a result, the Marine Corps does not have a reasonable basis to ensure that it is acquiring the right number and type of virtual training devices to meet its operational training needs.

Expected and actual usage data. The Marine Corps does not consistently consider expected and actual usage data for virtual training devices to support its investment decisions. In the absence of these data, the Marine Corps risks sustained investment in virtual training devices that do not meet operational training needs.

Training effectiveness. The Marine Corps does not consistently evaluate the effectiveness of its virtual training devices to accomplish operational training. Without a well-defined process to consistently evaluate the effectiveness of virtual training devices for training, the Marine Corps risks investing in devices whose value to operational training is undetermined.

In our September 2017 report, we recommended that the Marine Corps develop guidance for the development and use of virtual training devices to address these gaps.
DOD concurred with the recommendation and stated it would work with the Commandant of the Marine Corps in its development and implementation actions associated with the use of virtual training devices. Incorporating Collaboration Practices would Further Naval Integration Efforts for Amphibious Operations The Navy and Marine Corps have taken some steps to improve coordination between the two services, to include issuing strategic documents that discuss the importance of improving naval integration and establishing mechanisms to coordinate their amphibious operations training capabilities. However, in our September 2017 report we found that the services have not fully incorporated leading collaboration practices that would help drive efforts to improve naval integration. Our prior work on interagency collaboration has found that certain practices can help enhance and sustain collaboration among federal agencies. I would like to highlight a few practices that would especially benefit the Navy and Marine Corps’ efforts to improve integration for amphibious operations. Common outcomes and joint strategy. The Navy and Marine Corps have issued strategic documents that discuss the importance of improving naval integration, but the services have not developed a joint strategy that defines and articulates common outcomes to achieve naval integration. This first critical step will enable them to fully incorporate other leading collaboration practices aimed at achieving a common purpose. Compatible policies, procedures, and systems. The Navy and Marine Corps have not fully established compatible policies and procedures, such as common training tasks and standards and agreed-upon roles and responsibilities, to ensure their efforts to achieve improved naval integration are consistent and sustained. 
We also found that some of the Navy and Marine Corps’ systems for managing and conducting integrated training are incompatible, leading to inefficiencies in the process to manage unit-level training events. Leverage resources to maximize training opportunities. The services are looking to better leverage available training resources for amphibious operations. However, we identified examples of potential training opportunities during surface warfare tactical training and community relations events where enhancing the services’ collaborative efforts could take greater advantage of available training time for amphibious operations. Mechanisms to monitor results and reinforce accountability. The Navy and Marine Corps have not developed mechanisms to monitor, evaluate, and report on results in improving naval integration and to align efforts to maximize training opportunities. Service-level strategy documents establish critical tasks to improve naval integration, but do not constitute a process or mechanism to jointly reinforce accountability for their naval integration efforts. In our September 2017 report, we recommended that the Navy and Marine Corps clarify the organizations responsible and set time frames to define and articulate common outcomes for naval integration, and use those outcomes to develop a joint strategy, more fully establish compatible policies, procedures, and systems, better leverage training resources, and establish mechanisms to monitor results. DOD concurred with the recommendation and stated it will develop mutual service naval integration terminology, and training resource application and organizational monitoring constructs to achieve common amphibious operations training outcomes. Chairman Wilson, Ranking Member Bordallo, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. 
GAO Contact and Staff Acknowledgments For questions about this statement, please contact Cary Russell at (202) 512-5431, or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are Matt Ullengren and Russell Bryan. Other staff who made contributions to the report cited in this testimony are identified in the source product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study This testimony summarizes the information contained in GAO's September 2017 report, entitled Navy and Marine Corps Training: Further Planning Needed for Amphibious Operations Training (GAO-17-789). What GAO Found Navy and Marine Corps units that are deploying as part of an Amphibious Ready Group and Marine Expeditionary Unit (ARG-MEU) completed their required training for amphibious operations, but other Marine Corps units have been limited in their ability to conduct training for other amphibious operations–related priorities. GAO found that several factors, including the decline in the fleet of the Navy's amphibious ships from 62 in 1990 to 31 today, limited the ability of Marine Corps units to conduct training for other priorities, such as recurring training for home-station units (see figure). As a result, training completion for amphibious operations was low for some but not all Marine Corps units from fiscal years 2014 through 2016. The services have taken steps to address amphibious training shortfalls, such as more comprehensively determining units that require training. However, these efforts are incomplete because the services do not have an approach to prioritize available training resources, evaluate training resource alternatives, and monitor progress toward achieving priorities. Thus, the services are not well positioned to mitigate any training shortfalls. The Navy and Marine Corps have taken some steps to improve coordination between the two services, but have not fully incorporated leading collaboration practices to improve integration of the two services—naval integration—for amphibious operations. For example, the Navy and Marine Corps have not defined and articulated common outcomes for naval integration that would help them align efforts to maximize training opportunities for amphibious operations. 
The Marine Corps has taken steps to better integrate virtual training devices into operational training, but gaps remain in its process to develop and use them. GAO found that for selected virtual training devices, the Marine Corps did not conduct front-end analysis that considered key factors, such as the specific training tasks that a device would accomplish; consider device usage data to support its investment decisions; or evaluate the effectiveness of existing virtual training devices because of weaknesses in the service's guidance. As a result, the Marine Corps risks investing in devices that are not cost-effective and whose value to operational training is undetermined.
Background Opioids, such as hydrocodone, oxycodone, morphine, and methadone, can be prescribed to treat both acute and chronic pain. Because many opioids have a high potential for abuse and may lead to severe psychological or physical dependence, many of them are classified as Schedule II drugs under the Controlled Substances Act. The abuse of opioids has been associated with serious consequences, including addiction, overdose, and death. Responsibilities of Medicare Part D Plan Sponsors, CMS, and NBI MEDIC Medicare Part D plan sponsors are private organizations, such as health insurance companies and pharmacy benefit managers, contracted by CMS to provide outpatient drug benefit plans to Medicare beneficiaries. CMS provides guidance to plan sponsors that are responsible for establishing reasonable and appropriate drug utilization review (DUR) programs that assist in preventing misuse of prescribed medications in general, including the unsafe use of opioid pain medications. In 2013, CMS implemented the Medicare Part D opioid overutilization policy intended to improve medication safety. Through the Overutilization Monitoring System (OMS), CMS seeks to ensure that plan sponsors establish reasonable and appropriate DUR programs to prevent overutilization of opioids. CMS uses criteria in the OMS to identify high-risk use of opioids. Plan sponsors may, but are not required to, use these guidelines as part of their DUR. CMS’s Center for Program Integrity (CPI) oversees Part D program integrity and coordinates with other parts of CMS that monitor plan sponsor compliance with the Part D program. CPI has primary responsibility for overseeing NBI MEDIC, which is responsible for identifying and investigating potential Part D fraud, waste, and abuse, in general. 
NBI MEDIC handles complaints from beneficiaries and others, as well as requests from law enforcement; investigates providers and refers them to law enforcement as appropriate; and analyzes Part D program prescription drug event records and other data to identify patterns that may indicate fraud, waste, or abuse. NBI MEDIC’s responsibilities are for all Part D drugs and are not opioid-specific. Drug Diversion One concern associated with prescribed opioids is their diversion—that is, the redirection of prescription drugs for an illegal purpose such as recreational use or resale. Diversion can include selling prescription drugs that were obtained legally, transferring a legitimately prescribed opioid to family or friends who may be trying to self-medicate, or pretending to be in pain to obtain a prescription opioid due to an addiction. It is often associated with “doctor shopping,” the attempt to obtain large amounts of opioids through multiple providers, or from multiple pharmacies. Doctor shopping can be used to help support an individual’s addiction or to obtain opioids for resale on the black market. Drug diversion can also include illicit prescribing, whereby providers—commonly known as “pill mills”—write unnecessary prescriptions or prescribe larger quantities than are medically necessary. Opioids are among the drugs with the highest potential for drug diversion. CDC Guidelines for Prescribing Opioids In 2016, CDC issued guidelines with recommendations for prescribing opioids in outpatient settings for chronic pain, based on consultation with experts and a review of scientific evidence. CDC noted in the guidelines that primary care physicians have reported concerns about opioid misuse and addiction, and find managing patients with chronic pain a challenge, possibly because of insufficient training in prescribing opioids. 
According to the guidelines, most experts agreed that long-term opioid dosage of 50 milligrams (mg) morphine equivalent dose (MED) per day or more generally increases overdose risk without necessarily adding benefits for pain control or function. Experts also noted that daily opioid dosages close to or greater than 100 mg MED per day are associated with significant risks. The guidelines therefore recommended that providers use caution when prescribing opioids at any dose, carefully reassess evidence of individual benefits and risks when increasing the dosage to 50 mg MED per day or more, and either avoid or carefully justify dosage at 90 mg MED or more. In making these recommendations, CDC noted that there is not a dosage threshold below which the risk of overdose is eliminated, but found that dosages less than 50 mg MED would reduce the risk for a large portion of patients. CDC also noted that providers should use additional caution in prescribing opioids to patients aged 65 and older, because the drugs can accumulate in the body to toxic levels. CMS Delegates Monitoring of Beneficiaries who Receive Opioid Prescriptions to Plan Sponsors, but Does Not Have Sufficient Information on Most Beneficiaries at Risk for Harm CMS Delegates Monitoring of Individual Beneficiaries’ Opioid Prescriptions to Plan Sponsors CMS provides guidance to plan sponsors on how they should monitor opioid overutilization problems among Part D beneficiaries. The agency includes this guidance in its annual letters to plan sponsors, known as call letters; it also provided a supplemental memo to plan sponsors in 2012. Among other things, these guidance documents instructed plan sponsors to implement a retrospective drug utilization review (DUR) system to monitor beneficiary utilization starting in 2013. As part of the DUR systems, CMS requires plan sponsors to have methods to identify beneficiaries who are potentially overusing specific drugs or groups of drugs, including opioids. 
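As a rough illustration only, the CDC dosage thresholds described above can be expressed as a simple classification rule. This sketch is not CDC or CMS code; the function name and tier labels are hypothetical.

```python
# Hypothetical sketch of the 2016 CDC dosage tiers described above.
# Not actual CDC or CMS code; the name and labels are illustrative.

def cdc_dose_tier(daily_med_mg: float) -> str:
    """Classify a daily morphine equivalent dose (MED, in mg)
    against the thresholds in the 2016 CDC guidelines."""
    if daily_med_mg >= 90:
        # CDC: avoid, or carefully justify, dosages of 90 mg MED or more
        return "avoid or carefully justify"
    if daily_med_mg >= 50:
        # CDC: carefully reassess benefits and risks at 50 mg MED or more
        return "carefully reassess benefits and risks"
    # CDC: use caution at any dose; no threshold eliminates overdose risk
    return "use caution"
```

Note that, as the guidelines state, there is no dosage below which overdose risk is eliminated; the tiers only grade the level of caution recommended.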
Also in 2013, CMS created the Overutilization Monitoring System (OMS), which outlines criteria to identify beneficiaries with high-risk use of opioids and to oversee sponsors’ compliance with CMS’s opioid overutilization policy. Plan sponsors may use the OMS criteria for their DUR systems, but they have some flexibility to develop their own targeting criteria, within CMS guidance. The OMS considers beneficiaries to be at a high risk of opioid overuse when they meet all three of the following criteria: (1) receive a total daily MED greater than 120 mg for 90 consecutive days, (2) receive opioid prescriptions from four or more providers in the previous 12 months, and (3) receive opioids from four or more pharmacies in the previous 12 months. The criteria exclude beneficiaries with a cancer diagnosis and those in hospice care, for whom higher doses of opioids may be appropriate. Officials from all six plan sponsors we interviewed confirmed they have a DUR system that specifically looks at opioids. In addition, to be consistent with CMS, all of the plan sponsors adopted criteria similar to the OMS, with some minor modifications—typically involving the number of months in which they measured beneficiaries’ opioid prescriptions. Through the OMS, CMS generates quarterly reports that list beneficiaries who meet all of the criteria and who are identified as high-risk and then distributes the reports to the plan sponsors. Plan sponsors are expected to review the list of identified beneficiaries, determine appropriate action, and then respond to CMS with information on their actions within 30 days. According to CMS officials, the agency also expects that plan sponsors will share any information with CMS on beneficiaries that they identify through their own DUR systems. Some actions plan sponsors may take include Case management. 
After plan sponsors identify beneficiaries with patterns of inappropriate opioid use and possible coordination of care issues through their DUR analysis, they may conduct case management. Case management may include an attempt to improve coordination issues, and often involves provider outreach, whereby the plan sponsor will contact the providers associated with the beneficiary to let them know that the beneficiary is receiving high levels of opioids and may be at risk of harm. In addition to outreach, officials from two of the six plan sponsors we interviewed told us they focus on provider education and one plan sponsor said they may direct the providers to the CDC guidelines or other information to help reduce overutilization. Officials from two plan sponsors reported that they also reach out to beneficiaries to let them know they are receiving high levels of opioids and may be at risk of harm. Beneficiary-specific point-of-sale (POS) edits. When plan sponsors determine that a beneficiary is at risk for opioid harm, they may choose to implement a beneficiary-specific POS edit to prevent overutilization. Beneficiary-specific POS edits are restrictions that limit these beneficiaries to certain opioids and amounts. Pharmacists receive a message when a beneficiary attempts to fill a prescription that exceeds the limit in place for that beneficiary. CMS expects plan sponsors to report on the POS edits they use through CMS’s Medicare Advantage and Prescription Drug System for information sharing and monitoring purposes. That way, if a beneficiary changes plans, the new plan sponsor will receive an alert about the beneficiary’s record of POS edits. From February 2014 through March 10, 2016, there were 2,693 POS edits reported in that system for 2,520 beneficiaries. Formulary-level POS edits. CMS expects plan sponsors to use formulary-level POS edits to prospectively prevent opioid overutilization. 
These edits alert providers who may not have been aware that their patients are receiving high levels of opioids from other doctors. CMS recommends that these formulary-level edits be used when a beneficiary has a cumulative opioid MED of at least 90 mg. Referrals for investigation. According to the six plan sponsors we interviewed, the referrals can be made to NBI MEDIC or to the plan sponsor’s own internal investigative unit, if they have one. After investigating a particular case, if a plan sponsor or NBI MEDIC determines that a beneficiary is suspected of diverting opioids, they may refer the case to the HHS-OIG, or a law enforcement agency, according to CMS, NBI MEDIC, and one plan sponsor. Pharmacy lock-ins. Beginning in 2019, Medicare Part D plan sponsors will be able to restrict certain beneficiaries identified as at-risk for prescription drug abuse to a single pharmacy for all their opioid prescriptions, known as a pharmacy “lock-in.” Some plan sponsors explained that they use pharmacy lock-ins for their commercial and Medicaid lines of business, and generally found them to be a useful tool for controlling opioid use. Based on CMS’s use of the OMS and the actions taken by plan sponsors, CMS reported a decrease in the number of beneficiaries meeting the OMS criteria of high-risk—which agency officials consider an indication of success toward its goal of decreasing opioid use disorder. From calendar years 2011 through 2016, there was a 61 percent decrease in the number of beneficiaries meeting the OMS criteria. (See table 1.) In addition to using the OMS as a monitoring tool to oversee plan sponsors’ compliance with their DUR system requirements, CMS relies on patient safety measures to assess how well Part D plan sponsors are monitoring beneficiaries and taking appropriate actions. 
Specifically, CMS tracks data on plan sponsors’ performance for 15 measures related to Part D patient safety that are developed and maintained by the Pharmacy Quality Alliance, and CMS communicates with plan sponsors about their performance. In 2016, CMS started tracking plan sponsors’ performance on three Pharmacy Quality Alliance-approved patient safety measures that are directly related to opioids, which were (1) the proportion of beneficiaries that use opioids at high dosages (more than 120 mg MED for 90 days or longer) in persons without cancer or not in hospice care; (2) the proportion of beneficiaries that use opioids from multiple providers (four or more providers and four or more pharmacies) in persons without cancer or not in hospice care; and (3) the proportion of beneficiaries that use opioids at high dosage and from multiple providers in persons without cancer or not in hospice care, and that meet both of the other measures. The three measures are similar to the OMS criteria in that they identify beneficiaries with high dosages of opioids (120 mg MED) from multiple providers and pharmacies (four or more of each). However, there are a number of differences between these measures and the OMS. For example, the OMS counts actual beneficiaries, while the patient safety measures report member-years, which are adjusted to account for beneficiaries who are enrolled in a plan for only part of a year. In addition, these measures separately identify beneficiaries who fulfill each of those criteria individually. For example, data gathered on the first measure indicate that 285,119 beneficiaries, counted as member-years across all Part D plans, received high doses (more than 120 mg MED) of opioids for 90 days or longer during calendar year 2016. CMS also uses these data in different ways from how it uses OMS data. 
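For illustration, the three-part OMS high-risk test and its exclusions, as described in this section, can be sketched as a single flagging rule. This is a hypothetical rendering, not CMS's actual implementation; the function and parameter names are invented.

```python
# Hypothetical sketch of the OMS high-risk criteria described above.
# Not CMS's actual implementation; names are invented for illustration.

def meets_oms_criteria(daily_med_mg: float, consecutive_days: int,
                       n_providers: int, n_pharmacies: int,
                       has_cancer: bool = False,
                       in_hospice: bool = False) -> bool:
    """Return True if a beneficiary meets all three OMS criteria."""
    # Exclusions: higher opioid doses may be appropriate for these groups
    if has_cancer or in_hospice:
        return False
    return (daily_med_mg > 120 and consecutive_days >= 90  # criterion 1
            and n_providers >= 4    # criterion 2 (prior 12 months)
            and n_pharmacies >= 4)  # criterion 3 (prior 12 months)
```

As the sketch makes explicit, a beneficiary must meet all three criteria to be flagged; meeting only one or two, as in the separate patient safety measures, does not trigger the OMS flag.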
The OMS criteria were developed and maintained by CMS to identify patients at risk for harm who may warrant case management and to examine opioid use trends across the Part D program, including progress toward its goal of decreasing opioid use disorder. In contrast, CMS officials told us that the agency uses the patient safety measures to assess plan sponsor performance. The patient safety measures also serve as a tool for Part D sponsors to compare their performance to overall averages, and to track progress in improving these measures over time. CMS also tracks sponsors’ progress in improving the measures, according to agency officials. Each quarter, CMS contacts plan sponsors who have the lowest performance on each measure and expects them to respond about actions they take to improve performance. Beginning in April 2017, the agency began distributing to plan sponsors the beneficiary-level files for the patient safety measures. CMS officials said that these files provide a complete list of beneficiaries included in each of the measures. CMS Does Not Have Sufficient Information on Most Beneficiaries Potentially at Risk for Harm While CMS tracks the total number of beneficiaries who meet all three OMS criteria as part of its opioid overutilization oversight across the Part D program, it does not have comparable information on most beneficiaries who may be at risk for harm. CMS has goals to reduce the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion in its Opioid Misuse Strategy, but OMS does not track the number of beneficiaries with prescriptions for high doses of opioids unless those beneficiaries are also receiving them both from four or more providers and from four or more pharmacies, and agency officials told us that CMS has no plans for OMS to begin doing so. 
According to CDC guidelines, long-term use of high opioid dosages—those above a MED of 90 mg per day—is associated with significant risk of harm and should be avoided if possible. Based on the CDC guidelines, outreach to Part D plan sponsors, and CMS analyses of Part D data, CMS has revised its current OMS criteria to include more at-risk beneficiaries beginning in 2018. The new OMS criteria define a high user as a beneficiary who has an average daily MED greater than 90 mg for any duration and who receives opioids from four or more providers and four or more pharmacies, or from six or more providers regardless of the number of pharmacies, over the prior 6 months. According to CMS officials, the revised OMS criteria, like the current criteria, are intended to identify the beneficiaries it determined are at the greatest risk of harm: those who may lack coordinated care as a result of using multiple pharmacies and providers. CMS officials also noted that the revised criteria are intended to limit the increase in the number of beneficiaries for whom plan sponsors are expected to take action, such as case management, to avoid overburdening plan sponsors with unreasonable workload levels. While the revised criteria will help identify beneficiaries who CMS determined are at the highest risk of opioid misuse and therefore may need case management by plan sponsors, they will not provide information on most Part D beneficiaries who may also be at risk of harm. In developing the revised criteria, CMS conducted a one-time analysis that estimated there were 727,016 beneficiaries with an average MED of 90 mg or more, for any length of time during a 6-month measurement period in 2015, regardless of the number of providers or pharmacies used. 
These beneficiaries may be at risk of harm from opioids, according to CDC guidelines, and therefore tracking the number of these beneficiaries over time could help CMS to determine whether it is making progress toward meeting the goals specified in its Opioid Misuse Strategy. However, CMS officials told us that the agency does not keep track of these beneficiaries, and does not have plans to do so as part of OMS. Instead, CMS uses the number of beneficiaries who meet the OMS criteria as an indicator of progress toward its goals. CMS estimated that 33,223 beneficiaries would have met its revised criteria based on 2015 data, which is a much smaller number than the estimated 727,016 beneficiaries at risk of harm from opioids. (See fig. 1.) In 2016, CMS began to gather information from its patient safety measures on the number of beneficiaries who use more than 120 mg MED of opioids for 90 days or longer, regardless of the number of providers and pharmacies. However, this information does not include all at-risk beneficiaries, because the threshold is more lenient than indicated in CDC guidelines and CMS’s new criteria for OMS. Specifically, CMS’s one-time analysis of 2015 data indicated that 727,016 beneficiaries received prescriptions with an average MED of 90 mg or more for any length of time during a 6-month measurement period. In contrast, the 2016 patient safety measures reports identified significantly fewer beneficiaries, 285,119, in the most comparable measure—member-years for opioid prescriptions at 120 mg MED for 90 consecutive days or longer. According to CMS officials, CMS shared feedback with the Pharmacy Quality Alliance to consider updating the threshold to 90 mg MED to align with CDC guidelines and the revised OMS criteria. CMS officials said the agency will consider adopting these updates once complete. In addition, while CMS monitors the patient safety measure data, these data are relatively new. 
CMS officials told us that, as a result, the agency does not yet have enough data to report changes over time toward its goals to reduce the risk of opioid use disorders, overdoses, and inappropriate prescribing. Neither the data gathered as part of OMS nor the patient safety measures gathered so far are adequate to provide CMS with the information necessary to track progress toward meeting its goal of reducing harm from opioids. While tracking a smaller number of beneficiaries in OMS is useful for targeting resource-intensive plan sponsor actions, keeping track of the larger number of beneficiaries at risk of harm from high doses of opioids—greater than 90 mg MED for any duration regardless of the number of providers and pharmacies—could provide CMS with information on progress toward its goals without additional monitoring by plan sponsors. Doing so would also be consistent with federal internal control standards, which require agencies to use quality information to achieve objectives and address risks. Without tracking the number of beneficiaries who receive potentially dangerous levels of opioids regardless of the number of providers or pharmacies, and then examining changes in that number over time, CMS lacks key information that would be useful to determine if it is making progress toward reducing the risk of opioid harm for Part D beneficiaries. CMS Oversees Providers through its Contractor and Plan Sponsors, but Efforts Do Not Specifically Monitor Opioid Prescriptions CMS’s Contractor and Plan Sponsors Seek to Identify Inappropriate Prescribing of Drugs with High Potential for Abuse, Including Opioids NBI MEDIC’s Data Analyses to Identify Outlier Providers CMS oversees providers who prescribe opioids to Medicare Part D beneficiaries through its contractor, NBI MEDIC, and the Part D plan sponsors. 
CMS requires NBI MEDIC to identify providers who prescribe high amounts of drugs classified as Schedule II under the Controlled Substances Act, which indicates a high potential for abuse and includes many opioids. Using prescription drug event data, NBI MEDIC conducts a peer comparison of providers’ prescribing practices to identify outlier providers—the highest prescribers of Schedule II drugs, which include, but are not limited to, opioids. NBI MEDIC’s initial analyses focus on providers associated with at least 100 prescription drug event records or at least $100,000 in total Part D payments for Schedule II drugs over the course of one year. These providers are then classified as outliers if they are listed as high in both the number of prescription drug records per prescriber and prescriptions per beneficiary by specialty within each state. NBI MEDIC reports to CMS on the providers with the highest number of prescriptions identified by the analysis. Beginning with the October 2016 report, CMS began sharing NBI MEDIC’s prescriber outlier report with the plan sponsors quarterly to supplement their own investigations of potential fraud, waste, and abuse. According to data from NBI MEDIC, the number of outlier providers identified has generally remained stable except for an increase in 2015. NBI MEDIC and CMS officials said this increase occurred when a commonly used opioid, hydrocodone, was added to the analysis after it was reclassified as a Schedule II drug. NBI MEDIC’s Other Projects NBI MEDIC gathers data on Medicare Part C and Part D and uses its Predictive Learning Analytics Tracking Outcome (PLATO) system to conduct a number of data analysis projects. According to NBI MEDIC officials, these PLATO projects seek to identify potential fraud by examining data on provider behaviors. In addition, according to officials, PLATO is capable of allowing NBI MEDIC to share information on providers with plan sponsors. 
NBI MEDIC officials stated there are two current PLATO projects that include a focus on some opioids. The TRIO data project identifies providers who prescribe beneficiaries a combination of an opioid, a benzodiazepine, and the muscle relaxant Carisoprodol. This well-known combination of drugs is used to increase the effects of opioids. The Pill Mill data project identifies providers with abnormal prescribing behavior in authorizing controlled substances, including opioids, absent medical necessity. To identify providers potentially operating a pill mill, 17 risk factors are considered, including the number of beneficiaries for whom a provider prescribed controlled substances, the quantity of these medications, the number of beneficiaries who travel long distances to receive medications, and the number of beneficiaries treated for drug abuse or misuse at emergency rooms. Another analysis that NBI MEDIC conducts, according to its officials, is the Transmucosal Immediate Release Fentanyl project, which identifies potential improper payments for medicines containing fentanyl, a prescription opioid pain reliever. NBI MEDIC looks for instances of this drug being prescribed to beneficiaries who do not have cancer combined with breakthrough pain, the only approved use for this drug. NBI MEDIC’s Investigations to Identify Fraud, Waste, and Abuse NBI MEDIC officials said they conduct investigations to assist CMS in identifying cases of potential fraud, waste, and abuse among providers for Medicare Part C and Part D. The investigations are prompted by complaints from plan sponsors, calls to NBI MEDIC’s call center, NBI MEDIC’s analysis of outlier providers, or from one of its other data analysis projects. As part of its investigations, NBI MEDIC officials said they may access data from Medicare Part B, which includes coverage for doctors’ services and outpatient care, to determine whether providers’ diagnoses coincide with their prescriptions. 
Officials added that they investigate inappropriate prescribing by reviewing Part D prescription records, medical records, or PLATO data; or by conducting background checks, interviewing beneficiaries, or conducting site visits, among other activities. NBI MEDIC data indicate that the total number of its investigations decreased from 2013 to 2016, which, according to NBI MEDIC officials, occurred because it increased activities related to data analysis and collaboration with plan sponsors. NBI MEDIC’s Referrals After identifying providers engaged in potential fraudulent overprescribing, NBI MEDIC officials said they may refer cases to agencies for further investigation and potential prosecution, such as the HHS-OIG, state and local law enforcement, the Federal Bureau of Investigation, or the Drug Enforcement Administration. In 2016, NBI MEDIC data showed that it referred a total of 119 cases to the HHS-OIG and 48 to agencies within the Department of Justice, including the Federal Bureau of Investigation and the Drug Enforcement Administration. CMS officials told us that they do not routinely track the results of individual cases referred by NBI MEDIC to other agencies. A 2016 Senate committee report indicated that the HHS-OIG declined and returned more than half of the cases referred to it from 2013 through 2015. According to NBI MEDIC officials, cases may be rejected for reasons such as not meeting prosecutorial thresholds for evidence, or HHS-OIG not having enough staff to take on the workload. NBI MEDIC officials told us that HHS-OIG does not always inform NBI MEDIC of its reasons for declining the referrals. Plan Sponsor Monitoring of Providers CMS requires all plan sponsors to adopt and implement an effective compliance program, which must include measures to prevent, detect, and correct Part C or Part D program noncompliance, as well as fraud, waste, and abuse. 
CMS communicates guidance for plan sponsors’ compliance programs through Chapter 9 of CMS’s Prescription Drug Benefit Manual and in annual letters. CMS’s guidance focuses broadly on prescription drugs, and does not specifically address opioids. To detect fraud, waste, and abuse among providers, plan sponsors told us they use their own data analysis and criteria, as well as NBI MEDIC’s list of outlier providers. For example, plan sponsors identify providers suspected of fraud, waste, or abuse by looking for certain characteristics, such as providers who have a large number of beneficiaries traveling from a different zip code to receive prescriptions, or providers who prescribe large quantities of commonly abused drugs with no associated medical claims to support the prescriptions. Once the suspected providers are identified, plan sponsors said that they conduct their own investigations to determine if there is sufficient evidence of inappropriate prescribing. Plan sponsors told us they may choose to take a number of actions based on these investigations, including choosing to refer the case to NBI MEDIC. Additionally, if appropriate, plan sponsors can educate providers about prescribing guidelines and best practices, or notify them that their patients may be doctor shopping, in order to improve coordination of care. They may also terminate a provider from their plan if they find evidence of fraud or abuse.

CMS Lacks Information Necessary for Oversight of Opioid Prescribing and Plan Sponsors’ Monitoring Activities

CMS lacks the information necessary to adequately determine the number of providers potentially overprescribing opioids, and therefore cannot determine the effectiveness of efforts to achieve the agency’s goals of reducing the risk of opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. CMS’s oversight actions focus broadly on Schedule II drugs rather than specifically on opioids.
For example, NBI MEDIC’s analyses to identify outlier providers do not indicate the extent to which they may be overprescribing opioids specifically. According to CMS officials, they direct NBI MEDIC to focus on Schedule II drugs, because they have a high potential for abuse, whether they are opioids or other drugs. However, without specifically identifying opioids in these analyses—or an alternate source of data—CMS lacks data on providers who prescribe high amounts of opioids, and therefore cannot assess progress toward meeting its goals related to opioid use. CMS also lacks key information necessary for oversight of opioid prescribing, because it does not require plan sponsors to report to NBI MEDIC or CMS cases of fraud, waste, and abuse; cases of overprescribing; or any actions taken against providers. Plan sponsors collect information on cases of fraud, waste, and abuse, and can choose to report this information to NBI MEDIC or CMS. PLATO, a voluntary reporting system, is one way that plan sponsors can report information to NBI MEDIC or CMS, and share information with other plan sponsors about the providers they investigate and the actions they take. While CMS receives some information from plan sponsors who voluntarily report their actions, it does not know the full extent to which plan sponsors have identified providers who have prescribed high amounts of opioids and taken action to reduce overprescribing. Without this information, CMS cannot determine the extent to which plan sponsors are taking action to reduce overprescribing, making it difficult to assess progress in this area. CMS officials told us that they receive reports on what information plan sponsors enter into PLATO. However, according to these officials, they do not have information on all actions taken by plan sponsors; therefore, CMS does not know how often plan sponsors use PLATO or what proportion of actions they report.
A 2015 HHS-OIG report recommended that CMS require plan sponsors to report all potential fraud and abuse to CMS and/or NBI MEDIC. CMS disagreed with this recommendation, and stated that plan sponsors currently have several options for referring incidents, that CMS has worked with plan sponsors to improve organizational performance, and that plan sponsors regularly share information on best practices for prevention and detection of fraud. The HHS-OIG continues to recommend that CMS require reporting due to the lack of a comprehensive set of data needed to monitor providers’ inappropriate prescribing. Without specifically monitoring providers’ overprescribing of opioids, CMS cannot determine if its efforts, or the efforts of NBI MEDIC and plan sponsors, are contributing to its goals related to opioid use. Federal internal control standards require agencies to conduct monitoring activities and to use quality information to achieve objectives and address risks. Without adequate information on providers’ opioid prescribing patterns in Part D, CMS is unable to determine whether its related oversight efforts—including such efforts by NBI MEDIC or Part D plan sponsors—are effective or should be adjusted.

Conclusions

A large number of Medicare Part D beneficiaries use prescription opioids, and reducing the inappropriate prescribing of these drugs is a key part of CMS’s strategy to decrease the risk of opioid use disorder, overdoses, and deaths. Despite working to identify and decrease egregious opioid use behavior—such as doctor shopping—among beneficiaries in Medicare Part D, CMS lacks the necessary information to effectively determine the full number of beneficiaries at risk of opioid harm. CMS recently expanded the number of beneficiaries for whom it expects plan sponsors to conduct intervention efforts, such as case management, and has begun to collect additional patient safety measure data on beneficiaries at risk of harm from opioids.
However, these efforts have not yet provided CMS with sufficient data to track how many beneficiaries are receiving large doses of opioids, and therefore are at risk of harm. Without expanding and enhancing its data collection efforts to include information on more at-risk beneficiaries, CMS cannot fully assess whether it is making sufficient progress toward its goals of reducing opioid use disorders, overdoses, inappropriate prescribing, and drug diversion. CMS’s efforts to oversee opioid prescribing specifically are also inadequate. CMS directs NBI MEDIC to focus its analyses on providers who prescribe any drugs with a high risk of abuse, but NBI MEDIC does not specifically track those providers who prescribe opioids. Absent opioid-specific monitoring, CMS cannot assess whether its efforts to reduce opioid overprescribing are effective, or if opioid prescribing patterns are changing over time. In addition, neither CMS nor NBI MEDIC can be sure they have complete information about providers potentially overprescribing opioids to Part D beneficiaries, because plan sponsors are not required to report to CMS or NBI MEDIC all potential fraud and abuse incidents or actions sponsors have taken against providers. As a result, CMS lacks information about plan sponsors’ monitoring of providers who overprescribe opioids, and is therefore unable to determine if the agency’s and plan sponsors’ efforts are successful in achieving CMS’s goals.

Recommendations

We are making the following three recommendations to CMS. The Administrator of CMS should gather information over time on the number of beneficiaries at risk of harm from opioids, including those who receive high opioid morphine equivalent doses regardless of the number of pharmacies or providers, as part of assessing progress over time in reaching the agency’s goals related to reducing opioid use.
(Recommendation 1) The Administrator of CMS should require its contractor, NBI MEDIC, to identify and conduct analyses on providers who prescribe high amounts of opioids separately from providers who prescribe high amounts of any Schedule II drug. (Recommendation 2) The Administrator of CMS should require plan sponsors to report to CMS on investigations and other actions taken related to providers who prescribe high amounts of opioids. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for comment. HHS provided written comments, which are reprinted in appendix I, and technical comments, which we incorporated as appropriate. In its written comments, HHS described its efforts to reduce opioid overutilization in Medicare Part D. HHS noted that these efforts include a medication safety approach to improve care coordination for high-risk beneficiaries using opioids, quality metrics for plan sponsors, and data analysis of prescribing patterns to target potential fraud, waste, and abuse. For example, HHS noted that CMS adopted a Medicare Part D opioid overutilization policy in 2013 that provided specific guidance to Part D plans on effective drug utilization review programs to reduce overutilization of opioids. As described in our report, CMS’s opioid overutilization policy requires sponsors to implement retrospective drug utilization review programs to identify beneficiaries who are potentially overusing opioids. Among other things, sponsors may choose to implement beneficiary-specific edits that limit high-risk beneficiaries to certain opioids and amounts, and CMS expects them to use formulary-level edits to alert providers when their patients are receiving high levels of opioids from other doctors. HHS also concurred with two of our three recommendations.
HHS concurred with our recommendation that CMS gather information over time on the number of beneficiaries at risk of harm from opioids, as part of assessing progress toward agency goals. HHS commented that CMS tracks beneficiaries who meet these criteria through the patient safety measures. However, while these patient safety measures are a potential source of this information, they currently do not include all at-risk beneficiaries, because the opioid use threshold they use (120 mg MED for 90 days or longer) is more lenient than indicated in CDC guidelines or in CMS’s revised OMS criteria. In addition, while CMS uses the patient safety measures to assess plan sponsor performance, the data are relatively new, and CMS has not yet used them to report progress over time toward its goals. HHS concurred with our recommendation that CMS require NBI MEDIC to gather separate data on providers who prescribe high amounts of opioids, and HHS noted that it intends to work with NBI MEDIC to identify trends in outlier prescribers of opioids. HHS did not concur with our recommendation that CMS require plan sponsors to report on investigations and other actions taken related to providers who prescribe high amounts of opioids. HHS noted that plan sponsors have the responsibility to detect and prevent fraud, waste, and abuse and that CMS reviews cases when it conducts audits. HHS also stated that it seeks to balance requirements on plan sponsors when considering new regulatory requirements. As noted in our report, plan sponsors conduct investigations and take actions against providers, and some plan sponsors report actions to CMS and NBI MEDIC. However, without complete reporting, such as reporting from all plan sponsors on the actions they take to reduce overprescribing, CMS is missing key information that could help assess progress in this area. 
Due to the importance of this information, we continue to believe that CMS should require plan sponsors to report on the actions they take. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS and the Administrator of CMS. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Health and Human Services

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Will Simerl (Assistant Director), Carolyn Feis Korman (Analyst-in-Charge), Amy Andresen, Samantha Pawlak, and Patricia Roy made key contributions to this report. Also contributing were Muriel Brown, Drew Long, and Emily Wilson.
Why GAO Did This Study

Misuse of prescription opioids can lead to overdose and death. In 2016, over 14 million Medicare Part D beneficiaries received opioid prescriptions, and spending for opioids was almost $4.1 billion. GAO and others have reported on inappropriate activities and risks associated with these prescriptions, such as receiving multiple opioid prescriptions from different providers. GAO was asked to describe what is known about CMS’s oversight of Medicare Part D opioid use and prescribing. This report examines (1) CMS oversight of beneficiaries who receive opioid prescriptions under Part D, and (2) CMS oversight of providers who prescribe opioids to Medicare Part D beneficiaries. GAO reviewed CMS opioid utilization and prescriber data, CMS guidance for plan sponsors, and CMS’s strategy to prevent opioid misuse. GAO also interviewed CMS officials, the six largest Part D plan sponsors, and 12 national associations selected to represent insurance plans, pharmacy benefit managers, physicians, patients, and regulatory and law enforcement authorities.

What GAO Found

The Centers for Medicare & Medicaid Services (CMS) provides guidance on the monitoring of Medicare beneficiaries who receive opioid prescriptions to plan sponsors—private organizations that implement the Medicare drug benefit, Part D—but lacks information on most beneficiaries at risk of harm. CMS provides plan sponsors guidance on how they should monitor opioid overutilization among Medicare Part D beneficiaries and requires them to implement drug utilization review systems that use criteria similar to CMS's. CMS's criteria focus on beneficiaries who (1) receive prescriptions of high doses of opioids, (2) receive prescriptions from four or more providers, and (3) fill the prescriptions at four or more pharmacies. According to CMS officials, this approach allows plan sponsors to focus their actions on those beneficiaries it determined to have the highest risk of harm from opioid use.
CMS’s criteria, including recent revisions, do not provide sufficient information about the larger population of potentially at-risk beneficiaries. CMS estimates that while 33,223 beneficiaries would have met the revised criteria in 2015, 727,016 would have received high doses of opioids regardless of the number of providers or pharmacies. In 2016, CMS began to collect information on some of these beneficiaries using a higher dosage threshold for opioid use. This approach misses some who could be at risk of harm, based on Centers for Disease Control and Prevention guidelines. As a result, CMS is limited in its ability to assess progress toward meeting the broader goals of its Opioid Misuse Strategy, which includes activities to reduce the risk of harm from opioid use.

[Figure: CMS Estimates of 2015 Part D Beneficiaries with High Opioid Doses and Those Who Would Have Met Revised Overutilization Monitoring Criteria]

CMS oversees the prescribing of drugs at high risk of abuse through a variety of projects, but does not analyze data specifically on opioids. According to CMS officials, CMS and plan sponsors identify providers who prescribe large amounts of drugs with a high risk of abuse, and those suspected of fraud or abuse may be referred to law enforcement. However, GAO found that CMS does not identify providers who may be inappropriately prescribing large amounts of opioids separately from other drugs, and does not require plan sponsors to report actions they take when they identify such providers. As a result, CMS is lacking information that it could use to assess how opioid prescribing patterns are changing over time, and whether its efforts to reduce harm are effective.
What GAO Recommends

GAO recommends that CMS (1) gather information on the full number of at-risk beneficiaries receiving high doses of opioids, (2) identify providers who prescribe high amounts of opioids, and (3) require plan sponsors to report to CMS on actions related to providers who inappropriately prescribe opioids. HHS concurred with the first two recommendations, but not with the third. GAO continues to believe the recommendation is valid, as discussed in the report.
Background

In the United States, both FRA and FTA regulate rail transportation safety. FRA oversees safety of railroads operating on what is known as the general system, a network of standard gage track over which goods may be transported and passengers may travel. This system includes freight railroads, which typically own their own tracks and locomotives, transporting products among states and regions. FRA also oversees safety of intercity passenger and commuter railroads that operate over tracks owned by freight railroads and other entities. FTA oversees safety of rail transit systems that typically serve individual metropolitan areas, using track not shared with freight and other passenger trains. Rail transit includes a variety of modes, such as heavy and light rail, streetcars, automated guideways, cable cars, and others.

Rail Transit

Rail transit is an important component of the nation’s transportation network, particularly in large metropolitan areas. Rail transit systems provided over 4.4 billion passenger trips in 2016. “Heavy rail” systems in large cities account for much of the total rail transit activity, including 88 percent of passenger trips in 2016. According to FTA, 61 rail transit systems within 28 states, the District of Columbia, and Puerto Rico are subject to safety oversight by one of the 31 agencies in FTA’s state safety oversight program (see fig. 1). The states have long played a central role in conducting safety oversight of rail transit systems. The Intermodal Surface Transportation Efficiency Act of 1991 required, among other things, that states with rail transit operators designate an agency to oversee the safety of those systems, known as a state safety oversight agency. In overseeing state safety agencies, FTA designed the program as one in which FTA, states, and rail transit operators collaborate to ensure the safety and security of rail transit systems.
However, limitations have been identified in the state safety oversight program. In 2006, we reported on some state safety agency challenges in overseeing rail transit safety. Specifically, we found many of the state safety agencies lacked enough qualified staff and adequate levels of training to meet their responsibilities. In a 2009 hearing before the Subcommittee on Highways and Transit of the House Committee on Transportation and Infrastructure, then-Secretary of Transportation Ray LaHood discussed some of the weaknesses under the current state safety oversight program and introduced a public transportation safety legislative proposal. In 2010, various bills were introduced in both houses of Congress that would have provided FTA with various enforcement mechanisms and with the authority to issue safety regulations. Additionally, the bills would have required the Secretary to establish a federal certification program for employees and contractors who carry out a state public transportation safety program. In the 112th Congress, the Senate amended a House bill to include a public transportation safety provision, which eventually became section 20021 of MAP-21, the federal public transportation safety program. MAP-21 enhanced FTA’s authority to oversee the safety of rail transit, potentially addressing some of the weaknesses identified by various stakeholders. Specifically, MAP-21 established a comprehensive Public Transportation Safety Program, which continues to rely on state safety agencies to monitor rail transit systems’ safety operations. MAP-21 required that, within 3 years of the effective date of a final state safety oversight program rule, each eligible state have in place a state safety oversight program certified by FTA.
An eligible state must, among other things, establish a state safety agency and determine, in consultation with FTA, an appropriate staffing level for this state agency that is commensurate with the number, size, and complexity of the rail transit systems within the state. Additionally, a state safety agency must be financially and legally independent from any rail transit system it oversees and have investigative and enforcement authority with respect to the safety of its rail transit systems, among other things. Each eligible state has until April 15, 2019, to receive FTA approval of its state safety oversight program, or else FTA will be prohibited from obligating certain federal financial assistance to any entity in the state that is otherwise eligible to receive that federal financial assistance. After that approval, state safety agencies will be evaluated for continued compliance with FTA regulations a minimum of once every 3 years through a triennial review process. According to FTA, these requirements represent a dramatic increase in federal expectations for state safety oversight and for the rail transit industry. MAP-21 also established a state safety oversight grant program, offering federal funding to states for their state safety activities. FTA’s Office of Transit Safety and Oversight administers the state safety oversight program.

Railroads

Freight and passenger railroads have played a transformational role in the development of America and continue to be an important part of the economy. The general railroad system consists of a vast network of operations (see fig. 2). The $60 billion freight rail industry is operated by seven Class I railroads and hundreds of smaller railroads. In addition, about 40 railroads move passengers, carrying more than 670 million passengers per year. The federal government has long provided regulatory oversight of the safety of passenger and freight railroads that operate on the general system.
The Interstate Commerce Commission, the first federal regulatory commission in U.S. history, was established in 1887 to regulate interstate commerce by rail. The Commission’s safety functions were transferred to FRA, which was created by the Department of Transportation Act in 1966. In its role as federal regulator and overseer of railroad safety, FRA prescribes and enforces railroad safety regulations and conducts research and development in support of improved railroad safety and rail transportation policy. FRA utilizes safety inspectors and specialists, primarily covering five safety disciplines, to review and enforce compliance with these regulations. FRA’s safety disciplines are track; signal and train control; motive power and equipment; operating practices; and hazardous materials. Following several fatal rail accidents between 2002 and 2008, the Rail Safety Improvement Act of 2008 was enacted, the first authorization of FRA’s safety programs since 1994. This act directed FRA to, among other things, issue new safety regulations for different aspects of railroad safety, such as hours of service requirements for passenger railroad workers, positive train control implementation, track inspection rules, and safety at highway-rail grade crossings. FRA’s Office of Railroad Safety administers the agency’s safety program.

Rail Accidents and Incidents

Rail transportation is a relatively safe way to transport people and products, though serious incidents continue to occur on railroads and rail transit. According to an analysis of DOT’s Bureau of Transportation Statistics data by the American Public Transportation Association, travel by rail transit is far safer than automobile travel. From 2000 through 2014, for instance, there were 6.53 fatalities per billion passenger-miles traveled in cars or light trucks, compared with 0.33 on rail transit. Within rail travel, the fatality rates on both railroads and rail transit operators have remained similar in recent years.
Further, the rate of accidents and incidents—including collisions and derailments—also does not appear to differ substantially between railroads and rail transit in recent years. Nevertheless, serious incidents continue to occur on railroads and rail transit, posing safety risks to passengers, railroad employees, and the public. For example, in June 2009, two WMATA trains collided, resulting in 52 injuries and 9 deaths. A smoke incident on WMATA’s Metrorail system in January 2015 also resulted in the death of 1 person and injured over 90. In a 10-month period from May 2013 to March 2014, the Metro-North commuter railroad, which serves New York and Connecticut, was involved in five accidents that resulted in 6 deaths and 126 injuries. In June 2016, two BNSF Railway freight trains collided near Panhandle, Texas, resulting in the death of three crew members. Incidents such as these have prompted investigations into both the causes and contributing factors of the specific accidents as well as broader rail safety oversight.

FRA Has a Centralized Safety Oversight Framework While FTA Is Implementing a State-Based Approach

FRA has a more centralized safety oversight program for railroads, while FTA is implementing changes to the rail transit oversight program, established in federal statute, which relies on states to monitor and enforce safety. Key characteristics of both programs include: (1) the establishment of safety regulations, (2) inspections and other oversight activities, such as audits and investigations, based on those regulations, and (3) enforcement mechanisms to ensure that safety deficiencies are addressed (see fig. 3).

Safety Regulations

FRA has developed extensive railroad safety regulations over decades.
FRA’s railroad safety regulations include requirements governing track design and inspection, grade crossings, signal and train control, mechanical equipment including locomotives, and railroad-operating practices including worker protection rules. For example, FRA’s regulations for track and equipment include detailed, prescriptive minimum requirements, such as formulas that determine the maximum allowable speeds on curved track. Many of FRA’s rail safety regulations establish minimum safety requirements, though railroads can apply for waivers. As FRA updates its safety regulations, it has proposed more performance-based regulations in recent years. Many of FRA’s current safety regulations specify the behavior or manner of compliance that railroads must adopt, such as inspecting each locomotive at least every 92 days. Performance-based regulations, however, specify a desired outcome rather than a behavior or manner of compliance. For example, FRA’s recent rulemaking to amend its passenger equipment safety regulations proposes performance-based crashworthiness and occupant protection requirements, rather than explicit targets or tolerances. According to FRA, establishing performance requirements in these areas would allow a more open rail market that incorporates recent technologies. FTA is currently assessing the need for rail transit safety regulations, having been provided the authority to issue safety regulations in 2012. Since MAP-21 was enacted, FTA has finalized regulations implementing the public transportation safety program authorized by statute. These include regulations that establish rules for FTA’s administration of a comprehensive safety program to improve rail transit safety as well as updated regulations governing state safety oversight of rail transit. In addition to its public transportation safety program regulations, FTA also has regulations governing its drug and alcohol testing program. 
MAP-21 also authorized FTA, for the first time, to issue rail transit safety regulations, which would establish minimum safety performance requirements for rail transit operators, as part of its requirement to develop a National Public Transportation Safety Plan. FTA initiated a regulatory development effort after the passage of MAP-21, which included a compilation and evaluation of existing transit safety standards, guidance, and best practices from the federal government, states, industry, and other sources. After the evaluation, FTA issued a report that concluded there was limited documentation or evidence of the effectiveness of these existing rail transit safety standards. The report included recommendations that are intended to enable FTA to undertake further data-driven, risk-based analysis of rail transit safety performance and the applicability and effectiveness of the identified safety standards. FTA is also currently analyzing specific focus areas to determine any areas that should be addressed by federal safety regulations. For example, FTA is studying the need for regulations related to rail transit vehicle crashworthiness. Since no federal rail transit safety regulations that establish minimum safety performance requirements for rail transit operators currently exist, rail transit operators are subject to different safety standards, depending largely on what voluntary standards they have chosen to adopt, according to American Public Transportation Association officials we spoke with. The American Public Transportation Association, for instance, has issued a variety of rail transit safety standards, addressing various aspects of the industry including operations, training, and inspections. In addition, states vary in the extent to which they have regulations for rail transit operators. 
For example, officials from the California Public Utilities Commission noted that the commission has issued a variety of safety regulations applicable to rail transit operators within the state of California to improve the safety of rail operations. Both FRA and FTA have mechanisms to gather the input of stakeholders—including rail operators, labor unions, industry associations, and others—when considering development of safety regulations. In developing most of its safety regulations, FRA seeks input from stakeholders through its Railroad Safety Advisory Committee. In 1996, FRA established this committee to develop new regulations through a collaborative process, with the rail community working together to create mutually satisfactory solutions to safety issues. FTA is collaborating with stakeholders as it assesses the need for rail transit safety regulations. More specifically, FTA’s research partner, the Center for Urban Transportation Research, established a working group to collaborate with industry stakeholders to inform the safety regulations development process. FTA also solicited comments from industry stakeholders on its compilation of existing rail transit safety standards. More broadly, FTA also has a Transit Advisory Committee for Safety, which provides information, advice, and recommendations to FTA on safety matters.

Oversight Activities

FRA fulfills its mission, in part, through safety compliance audits, inspections, and investigations. FRA ensures compliance with its safety regulations through inspections, using a staff of railroad safety experts, inspectors, and other professionals assigned to eight regional offices across the nation. For example, to determine a railroad’s compliance with FRA safety regulations, inspectors examine track, equipment, signal devices, employee actions, and procedures and review maintenance and accident records. Additionally, 31 states have rail safety programs that partner with FRA.
Under this approach, FRA enters into agreements with states to allow state inspectors to participate in investigative and surveillance activities concerning federal railroad safety laws. State inspectors who participate in this program submit inspection reports to FRA. More broadly, FRA’s inspections are guided by a risk-based model. Under this approach, FRA focuses its inspections on locations that, according to the data-driven model, are likely to have safety problems. Like other operating administrations within DOT, FRA has relatively few resources for overseeing railroads, compared with the size of the general system. The risk-based model is designed to help FRA target the greatest safety risks. FRA has begun utilizing automated inspections as well. In particular, according to FRA, new imaging technologies have the potential to better inspect track for cracks in the rail that could lead to breakage as well as measure the track’s geometry to ensure that rails are positioned to meet standards. To further promote safety in railroad operations, FRA conducts accident investigations. Separate from investigations conducted by NTSB, FRA investigates select railroad accidents to determine root causation, and any contributing factors, so that railroad properties can implement corrective actions to prevent similar incidents in the future. Resources for railroad safety oversight activities have increased in recent years. FRA was appropriated about $218 million in fiscal year 2017, an increase over the approximately $187 million it received in fiscal year 2015, for safety and operations, which funds FRA’s personnel, including inspectors, and safety programs. According to FRA, Congress provided FRA with increased funding in recent years for the purpose of increasing staffing related to specific safety issues, such as trespasser prevention and passenger rail safety. 
As part of this effort, FRA has hired additional inspectors, going from 347 inspectors in fiscal year 2013 to over 360 currently, out of the nearly 930 total full-time equivalent staff. FRA officials told us, as we have reported in the past, that it can be difficult to recruit, train, and certify qualified inspectors in a timely manner, especially in certain areas of expertise. Further, according to FRA, its inspectors have the ability to inspect less than 1 percent of the general system annually. Though FTA now has more robust inspection authorities, states will continue to conduct front-line rail transit safety oversight activities. MAP-21 provided FTA with new authorities to inspect, audit, and investigate practices at rail transit agencies, including safety practices, while also preserving the role of state safety agencies to monitor rail transit systems' safety operations. According to FTA officials, any federal inspections of rail transit operators are intended to supplement a state safety agency's oversight activities, except where FTA assumes temporary, direct oversight of a rail transit system from an inadequate state safety agency. FTA officials told us that establishing a nationwide safety inspection program at the federal level is inconsistent with the statutory framework of the state safety oversight program and with congressional intent, which contemplates preserving the primary role of state safety agencies in providing direct safety oversight of rail transit systems. The officials also noted that the state-based approach to rail transit safety oversight is valuable because states are generally closer to, and more familiar with, rail transit operators. To date, FTA has utilized its new inspection authorities only on WMATA's rail system. As part of oversight activities, some state safety agencies have conducted inspections of the rail transit systems they oversee, though they were not required to do so, according to FTA officials we spoke with.
To strengthen states' abilities to conduct oversight activities, FTA has recommended that state safety agencies develop risk-based inspection programs. Further, to ensure the independence of state safety agencies, these agencies cannot receive funding from the rail transit entities they oversee. Resources for FTA's rail transit safety oversight administrative expenses have remained relatively stable in recent years, though more are needed, according to FTA. Since fiscal year 2012, FTA's appropriations for administrative expenses, which fund FTA personnel and support activities, including the Office of Transit Safety and Oversight, have increased by $14 million, to about $113 million in fiscal year 2017, according to FTA. However, for several years, FTA has averaged about 508 total full-time equivalent staff agency-wide, and a little over 30 safety staff in the Office of Transit Safety and Oversight. According to FTA, the Office of Transit Safety and Oversight has been under-resourced since it was established in response to the new safety authority provided in MAP-21. For fiscal year 2018, FTA requested funding in its submission for the President's Budget proposal to hire up to 20 additional positions for various lines of safety work. FRA's and FTA's oversight activities also include regular audits of, and communication with, the rail operators under their oversight. Given finite resources and large rail networks, FRA and FTA audit rail operators' own inspections rather than conducting comprehensive federal inspections. More specifically, FRA inspectors, and state safety agencies in FTA's oversight program, regularly examine records of rail operators' internal inspections to identify safety deficiencies. Officials from FRA, FTA, and five rail stakeholders we spoke with told us that FRA and FTA rail safety oversight programs also rely on collaboration and communication between rail operators and regulators to ensure safety.
For example, regular meetings between FRA and railroad staff to discuss safety trends and industry developments are important to ensuring safety, according to officials we spoke with from FRA and the railroads. FRA specialists and inspectors participate, with representatives of railroad labor and management, in the implementation of voluntary safety programs. For example, FRA sponsors the Confidential Close Call Reporting System, a voluntary, confidential program allowing railroads and their employees to report accident and incident "close calls." According to FRA officials, voluntary programs such as this increase industry awareness of, and engagement with, railroad safety. FTA also collaborates with state safety agencies as rail transit safety issues arise, according to FTA officials, using federal oversight and enforcement authorities as a "back-stop" to the oversight of state safety agencies. Additionally, according to officials we spoke with from FTA and two rail transit operators, state safety agency staff meet with rail transit operators regularly, using knowledge of local operating conditions to help ensure safety.

Enforcement Mechanisms

FRA uses a variety of tools, including civil penalties, to resolve safety issues. While some safety issues are resolved informally through discussion and collaboration between FRA and railroads, as noted above, some defects identified during inspections are classified as violations and are subject to financial penalties. More specifically, when railroads do not resolve issues in a timely manner or identified defects are serious, FRA has the authority to cite violations and assess civil penalties against either railroads or individuals. Further, as authorized by law, FRA negotiates settlements with railroads and other entities subject to its safety jurisdiction to resolve claims for civil penalties. In fiscal year 2016, FRA assessed over $11.8 million in civil penalties against railroads.
According to FRA, fiscal year 2016 was the second year in a row that it took steps to increase penalty amounts paid by railroads, as part of a continued effort to increase consequences for violations that negatively affect safety. To ensure the safety of rail transit systems, states will continue to be the primary enforcers of safety requirements, according to FTA officials, though FTA now has more enforcement tools. MAP-21 preserved the role of state safety agencies as the primary enforcement body for rail transit. FTA has now required that state safety agencies have enforcement authorities sufficient to compel action from rail transit entities to address safety deficiencies. Though no specific authorities are required, FTA has suggested that a variety of mechanisms could be appropriate, such as the ability to remove deficient equipment from service or assess fines. According to FTA, this requirement is designed to overcome a long-standing vulnerability in state safety oversight, which allowed safety deficiencies to remain for long periods of time. MAP-21 and the FAST Act also provided FTA with more options for enforcement when rail transit operators are found to be out of compliance with safety requirements. In particular, FTA can withhold federal funding for rail transit operators or direct a rail transit operator to use federal funding for a specific purpose. Additionally, after FTA assumes temporary direct oversight of an inadequate state safety agency, FTA can withhold federal funds from the state until the state safety oversight program has been certified. To date, FTA has utilized this authority only with the states responsible for safety oversight of WMATA's rail system. In February 2017, FTA announced that it would withhold 5 percent of fiscal year 2017 urbanized area formula funds from Maryland, Virginia, and the District of Columbia until a new state safety oversight program is certified for WMATA's rail system.
This action built upon FTA's determination that WMATA's state safety agency was ineffective at "providing adequate oversight consistent with prevention of substantial risk of death or personal injury." FRA and FTA also have the authority to directly intervene in rail operations. In particular, both FRA and FTA can suspend the service of rail operators in response to certain safety concerns. Additionally, FTA can assume direct safety oversight of a rail transit operator if FTA determines the state safety oversight program is not adequate, among other things. In response to safety incidents on WMATA's rail system, FTA assumed temporary and direct safety oversight of WMATA in October 2015, as previously noted.

FRA's and FTA's Approaches to Rail Safety Oversight Have Strengths and Limitations, and FTA Can Improve Implementation of Its New Authorities

FRA's and FTA's approaches to their rail safety oversight missions each have strengths and limitations, including how the agencies develop safety regulations, conduct inspections, and carry out enforcement. Compared to FRA's long-standing role in providing safety oversight over railroads, FTA is in the process of implementing significant changes to its program for rail transit safety oversight after being granted new authorities in MAP-21 and the FAST Act. With respect to regulations, FRA's extensive and well-established safety regulations are a strength. FTA has made some progress toward developing appropriate safety regulations, such as identifying subjects for potential regulatory action. With respect to inspections, FRA's use of a risk-based approach to distributing inspection resources is a strength. FTA has sought to address previously identified deficiencies in state safety oversight by recommending that state safety agencies develop risk-based inspection programs. FTA, though, has not provided states guidance for these efforts. With respect to enforcement, FRA's use of its enforcement authorities is a strength.
FTA is also implementing new statutory requirements that state safety agencies have enforcement authorities but does not have a process or methodology to evaluate the effectiveness of these enforcement practices.

Regulations: FRA and FTA Are Working to Improve Rail Safety Oversight by Considering Performance-based Regulations

Extensive and well-established safety regulations are a strength of FRA's safety oversight program, based on studies we reviewed and discussions with rail operators and stakeholder organizations. According to NTSB, FRA's railroad safety regulations are an important and effective part of its oversight program. Our previous work reported that, according to stakeholders, the Railroad Safety Advisory Committee provides a collaborative environment where stakeholders in the rail community work with FRA to identify issues and proposals for safety standards and regulations, which improved the quality of railroads' safety initiatives and fostered a greater level of compliance with safety regulations. This is consistent with the views of stakeholders we spoke with, who characterized FRA's safety regulations as a strength. An industry association told us that FRA's regulations promote safety by helping to ensure that no operator falls below a minimum threshold for safe operations, while a rail operator told us that federal regulations help to standardize the operating environment and prevent a patchwork of various state regulations. Four stakeholders also characterized the Railroad Safety Advisory Committee, which plays a large role in crafting FRA's railroad safety regulations, as effective and inclusive. However, based on studies we reviewed and discussions with rail operators and stakeholder organizations, FRA faces limitations in its efforts to regulate safety across railroad systems that differ from one another and sometimes change more quickly than the federal regulatory process.
Five railroad operators and an industry association told us that some of FRA's safety regulations do not account for differences in railroads or innovation in safety practices, with three railroad operators stating that this approach requires the extensive use of waivers for particular regulations. Further, two railroad operators and a rail transit operator we spoke with stated that additional federal regulations are needed to provide minimum baseline requirements in specific areas of railroad safety, such as medical fitness for duty. In 2014, NTSB also found that FRA needs to do more to regulate particular safety issues, including medical fitness for duty and signal protection. FRA officials acknowledged that time and resources are two of the primary challenges that the agency faces when developing safety regulations but also noted additional ways in which the agency can require railroads to adopt safety practices. FRA officials described the process of creating or significantly amending a regulation as involving years of work, even before the agency commences with the process of drafting a rule. The officials also noted that the agency has additional tools to compel railroads to adopt safety practices. For example, FRA officials discussed the use of compliance agreements, in which railroads can have fines reduced in exchange for adopting safety measures that go beyond what FRA regulations require. FRA officials are considering the use of performance-based regulations as they update their safety regulations. As noted above, FRA's proposed regulations regarding passenger equipment safety incorporate performance-based safety requirements, rather than explicit safety targets or tolerances.
FRA has promulgated performance-based regulations about the implementation of positive train control, a communications-based system designed to prevent certain types of train accidents, as well as system safety programs that set general safety parameters and thresholds by which successful performance is governed. FRA's consideration of performance-based regulations is in line with federal guidance. OMB's Circular A-4 states that performance standards "are generally superior to engineering or design standards because performance standards give the regulated parties the flexibility to achieve regulatory objectives in the most cost-effective way." Additionally, under Executive Order 12866, agencies should (to the extent permitted by law and where applicable) identify and assess alternative forms of regulation and specify performance objectives, rather than specifying the behavior or manner of compliance that regulated entities must adopt. However, the language of OMB's Circular A-4 and Executive Order 12866 suggests that performance-based regulations are not always feasible, and studies of performance-based regulations find that, as with any other form of regulation, performance-based standards have trade-offs. FRA officials told us that under certain circumstances, performance-based regulations are appropriate for issues regarding design, maintenance, operation, and technology-driven safety requirements. FRA officials we spoke with did not think performance-based standards are appropriate for areas that require standardization. One example is track safety standards, where the need for different operators to use the same equipment precludes a performance-based approach that allows railroads to meet requirements through different means. FRA officials added that a key aspect of the success of performance-based regulations concerns how railroads demonstrate compliance.
This concern is consistent with other studies of performance-based regulations, which find that these regulations are most appropriate when regulators have the capacity to measure and monitor performance. Though FTA has made progress assessing the state of rail transit safety standards, a limitation of FTA's rail transit safety oversight program is the lack of federal rail transit safety regulations, which may contribute to inconsistent safety practices across the rail transit industry, according to studies we reviewed and discussions with rail operators and stakeholder organizations. NTSB reported that the structure of FTA's oversight process leads to inconsistent practices, inadequate standards, and marginal effectiveness. In addition, a 2016 DOT OIG report found that because FTA's safety standards are voluntary, they are unenforceable. In 2012, FTA gained the authority to issue safety regulations, though it has not yet done so, and NTSB and other stakeholders we spoke with indicated that the lack of such federal safety regulations is a weakness in federal rail transit safety oversight. Despite differences across rail transit systems, there is value in establishing federal rail transit safety regulations, according to stakeholders from all categories we interviewed, including a state safety agency, three rail transit operators, a railroad operator, and two industry associations. Some stakeholders identified specific areas that would benefit from federal rail transit regulations. For example, two rail transit agencies called for federal regulations to address operator fatigue. Some of these officials stated that federal rail transit safety regulations could help ensure safety by establishing clear and consistent minimum standards. Officials from a rail transit entity and an industry association stated that voluntary standards are not enough to ensure that transit entities will adopt appropriate safety measures.
According to our analysis, a past study, and stakeholders we spoke with, FTA's ability to develop and implement performance-based regulations is limited by its lack of capacity to collect and analyze rail safety performance data. In 2017, DOT OIG found that data limitations of FTA's National Transit Database result in limited safety performance criteria in FTA's National Public Transportation Safety Plan. Further, two rail transit entities as well as a state safety agency we spoke with stated that they face challenges in analyzing data due to either the size of their systems or their capacity. FTA officials told us that they need more data to inform their decisions regarding whether to establish rail transit safety regulations, and they also added that a limitation to their ongoing assessment of potential areas for rail transit safety regulation is the concern about public disclosure of safety data provided to FTA and its potential use in private litigation. According to FTA officials, they need more information to do a comprehensive evaluation of the efficacy of current safety standards and practices. As required by the FAST Act, FTA has entered into an agreement with the National Academies of Sciences, Engineering, and Medicine to conduct a study to evaluate whether it is in the public interest to withhold from federal or state court proceedings any information collected by DOT through its public transportation safety program oversight activities. The National Academies is expected to complete this study in 2018. FTA is taking positive steps toward developing safety regulations that may address inconsistent safety practices across rail transit operators. FTA officials stated that the agency is considering issuing rail transit safety regulations and also employs additional tools to compel rail transit entities to adopt safety measures.
As noted above, FTA is currently studying whether federal regulations are appropriate for specific areas of rail transit safety. Executive Order 12866 and OMB's Circular A-4 direct federal agencies to consider performance-based regulations when developing regulations. Further, as the Transportation Research Board recently reported, any decision to use performance-based regulations "must take into account the regulator's own ability to enforce and motivate compliance (through methods such as auditing and field inspections) as well as the capacity of regulated entities to meet their obligations." FTA officials noted that they are actively engaged with members of the Transportation Research Board in reviewing and discussing these recent findings related to safety regulations for high-hazard industries. In January 2017, FTA issued its National Public Transportation Safety Plan, which FTA officials noted is one component of their transit safety standard development program. According to FTA officials, the plan identifies a list of issue areas that the agency is currently studying to determine whether national regulations are needed. FTA officials also stated that the plan includes "voluntary standards," which are intended to put the industry "on notice" that federal safety regulations may be proposed in those areas. FTA officials stated that they view the National Public Transportation Safety Plan as iterative and more easily updated compared with official regulations. Additional tools that FTA officials stated the agency employs in its approach to safety oversight include general directives as well as the requirements associated with FTA grants.
Inspections: FRA Utilizes a Risk-Based Model, While FTA Oversees the Development of State Safety Agencies' New Programs

Based on our assessment and studies we reviewed, a strength of FRA's safety oversight program is its risk-based approach to distributing inspection resources, which may serve as an example for FTA and state safety agencies. According to NTSB, FRA's qualified inspectors are a strength of its oversight program. To help target these inspectors to the areas of highest risk, FRA developed the National Inspection Plan, which includes a quantitative model for allocating inspection resources in a way that tries to minimize railroad accidents. This model utilizes data including (1) accident and incident data that railroads are required to report, (2) data from FRA inspection activity, and (3) information on railroad activities, such as train miles and other data. Based on our assessment of FRA's model, we believe that it can be an appropriate and useful tool for directing its inspection resources based on risk because it relies on statistical methods commonly used to predict the risk of a violation for regulated entities. While we did not review FRA's entire modeling process, nor did we validate the results it generates, we do believe that FRA's approach to using these statistical models as a key part of its inspection program is appropriate. However, a potential limitation of FRA's inspection program is the flexibility granted to individual inspectors, as the manner and extent to which inspectors exercise this discretion may be inconsistent with the risk-based National Inspection Plan. FRA's National Inspection Plan provides guidance for inspectors about how much time they should spend inspecting individual railroads. According to FRA officials, FRA inspectors have considerable flexibility to deviate from the National Inspection Plan based on their judgment regarding where to more effectively use their resources.
FRA officials stated that situations arise that call for deviations in planned inspections. For example, a particular railroad may experience a serious accident and therefore require more oversight from FRA. According to FRA officials, regional offices make these decisions based on their understanding of emerging issues. Inspectors are expected to know their region and decide which locations to go to, and are in part evaluated based on these decisions. When a region's record of total inspection time spent on a particular railroad differs from the National Inspection Plan by more than 5 percent, the region's leadership submits an explanation to FRA's Office of Railroad Safety. This practice, if not monitored, could allow inspectors to deviate from the data-driven model results in ways that undermine the goal of the National Inspection Plan to deploy FRA's limited resources efficiently and based on risk. However, FRA officials told us that flexibility for individual inspectors is important, and that FRA is continuously monitoring the model's performance and making changes as appropriate. Further, OECD's Best Practice Principles for Regulatory Policy note that it is important to ensure "that sufficient flexibility is left to enforcement and inspection officials to adapt their response in proportion to the facts on the ground." A strength of FTA's approach to rail transit safety oversight is that it is working to overcome weaknesses in state oversight of rail transit identified in our prior work and by stakeholders we spoke with. For example, FTA has noted that in the past some state safety agencies lacked sufficient oversight authorities. To now be certified by FTA, state safety agencies must demonstrate that they have authority to review, approve, and oversee the implementation of rail transit operators' safety plans. Additionally, we have found, and FTA has also noted, that some state safety agencies would benefit from more training and additional staff.
To now be certified by FTA, state safety agencies must be capable of directly hiring and developing staff and contract support, as well as have a training plan for certain staff. Though FTA is seeking to implement stronger safety oversight activities, a limitation of its program is that state safety agencies have not received the guidance and support necessary to develop effective inspection programs. FTA does not currently plan to conduct widespread inspections itself and recommends that state safety agencies develop risk-based inspection programs. According to FTA, states have discretion to establish their inspection programs in accordance with their program standards and are not required to actually conduct inspections as the method of verifying rail transit operators' compliance with safety rules. However, direct observation, audits, and performance indicator tracking are useful methods for an oversight agency in assessing a regulated entity's safety culture. Officials we spoke with from selected state safety agencies said that they have received little guidance from FTA on what their risk-based inspection programs should look like. In the materials FTA provided to states, FTA said that it intends to provide guidance to states on risk-based inspections, but it did not provide us with a plan or timeline for doing so. Without guidance from FTA, state safety agencies may not develop effective risk-based inspection programs and thus may not use their resources efficiently. Effective risk-based inspection programs are particularly important given state safety agencies' limited resources. We have reported in the past that some state safety agencies lack sufficient resources, including training and staff. Officials from two rail transit operators and all four industry associations we spoke with stated that state safety agencies continue to have limited resources and capacity.
Several state safety agencies we spoke with rely on contractors or employees with other responsibilities besides oversight of rail transit to meet their increased oversight responsibilities and achieve certification from FTA. Federal standards for internal control, as well as leading practices for regulatory inspections, state that agency objectives, including those related to inspections and enforcement, should be clearly communicated. Specifically, federal standards for internal control require that management communicate the necessary quality information to achieve the agency's objectives. Additionally, the OECD's Best Practice Principles for Regulatory Policy recommends that governments ensure clarity of rules and processes for enforcement and inspections and clearly articulate the rights and obligations of officials. According to OECD, the frequency of inspections and the resources employed should be proportional to the level of risk.

Enforcement: FRA Utilizes Various Mechanisms; FTA Has No Process or Methodology to Assess the Effectiveness of State Safety Agency Enforcement

A strength of FRA's safety oversight program is that the agency has and utilizes clear enforcement authority, according to NTSB and stakeholders we spoke with. As previously discussed, FRA has several enforcement tools available when inspectors find that railroads are noncompliant with applicable regulations, including civil penalties, individual liability, compliance orders, and emergency orders. According to NTSB, this array of specific enforcement tools helps ensure safety deficiencies are addressed by railroads. FRA officials also told us that the process of adjudicating civil penalties provides a forum for FRA and railroad officials to meet to discuss safety issues. Four rail operators also told us that FRA's authority to issue civil penalties is necessary to ensure railroads' compliance with regulations.
However, a potential limitation of FRA's approach to enforcement is that it is difficult to quantify the effectiveness of FRA's civil penalties. FRA has reported that it cannot determine whether observable safety improvements are directly attributable to discrete civil penalties or whether the amount of civil penalties has any effect on safety. We have reported in the past on the challenges of determining the effect of penalties on compliance in tax policy, though we also noted that, despite these challenges, some analyses likely would be useful for better understanding the effect of penalties on compliance. FRA also reported, though, that according to the judgments of its inspectors, issuing civil penalties yields observable improvements in safety practices and compliance with the law. Further, according to FRA, though it does not quantify the impact of civil penalties, FRA monitors railroad responses to its enforcement activity and adjusts its oversight as necessary. More broadly, civil penalties are not meant, by themselves, to ensure railroad safety. Instead, FRA reported that it uses its regulatory regime as a whole to try to ensure safety. FRA officials also noted that the agency has additional tools, apart from civil penalties, to compel railroads to adopt safety practices. A strength of FTA's rail transit safety oversight is that it seeks to improve historically weak state safety agency enforcement authorities, as described in our previous report and by stakeholders we spoke with. FTA requires state safety agencies to adopt enforcement authorities that are sufficient to enable states to compel action from rail transit agencies to address identified deficiencies.
FTA has also communicated to states that it is focusing its evaluation of each state's enforcement authorities on two major areas: ensuring that the state can carry out its primary responsibility for rail transit safety in response to (1) an imminent threat to public safety on the rail transit system and (2) a lack of action or noncompliance from the rail transit operator in carrying out certain safety plans. FTA has provided states with examples of the enforcement authorities and policies state safety agencies could establish to address these specific concerns. Authorities to address imminent safety threats may include the authority to suspend rail transit agencies' operations, inspect and remove deficient equipment or system infrastructure from service, or issue an order requiring the rail transit agency to correct an unsafe condition prior to placing equipment or infrastructure back into passenger service. FTA has also provided state safety agencies with examples of authorities to address a lack of action or cooperation by the rail transit operator, including the authority to withhold or redirect funds, to levy civil or criminal fines or penalties, and to establish a formal citation or ticketing program. Though federal law requires that state safety agencies have enforcement authorities over the safety of the rail transit entities they oversee, a limitation of FTA's approach is that FTA has not developed a method to evaluate the effectiveness of states' enforcement practices. Certified state safety agencies will be evaluated for continued compliance with FTA regulations every 3 years. This triennial review process (for rail transit safety) seeks to ensure that states are effectively carrying out their responsibilities. While FTA officials told us that they will evaluate state safety agencies' enforcement during the triennial reviews, FTA has not developed a process or methodology to evaluate whether state enforcement authorities and practices as a whole are effective.
Without a method for determining the effectiveness of state safety agencies’ enforcement, FTA may not have the information needed to identify ineffective safety enforcement. As a result, deficiencies may remain for long periods of time, potentially contributing to safety incidents. Federal standards for internal control maintain that agency managers should perform a range of practices that would facilitate the establishment of a system to monitor the effectiveness of agency activities, which in the context of FTA’s mission includes the effectiveness of its rail transit safety oversight. Agency managers should define objectives clearly to enable the identification of risks, establish activities to monitor performance measures and indicators, and externally communicate the necessary quality information to achieve the entity’s objectives. Internal control standards further stipulate that agency managers should perform monitoring activities regarding their internal control system and evaluate the results. To do so, federal standards for internal control state that agency managers should monitor ongoing operations and effectiveness, evaluate the results of this monitoring, and identify any changes that need to be made to achieve improvement in agency operations. The effectiveness of state safety agency enforcement is especially important because questions have been raised about the efficacy of FTA’s own enforcement mechanisms, including its ability to withhold funds from and assume direct control over safety oversight for a rail transit entity. Rail transit operators and industry association representatives we spoke with stated that FTA’s authority to withhold funding from states is overly punitive, and two stakeholders said the FTA needs more precise tools. Officials from an industry association added that withholding funds can be counterproductive, as most state safety agencies are already underfunded and understaffed. 
FTA officials pointed to examples in which the agency successfully supported state safety agencies in compelling action from rail transit agencies as evidence that the state safety oversight model, in which FTA backs up state safety agencies, is effective. Additionally, officials from numerous state safety agencies and others questioned whether FTA has the capacity to effectively assume direct safety oversight of rail transit operators. FTA has not assumed direct safety oversight of any rail transit operators outside of WMATA, and FTA officials noted that they intend to continue supporting state safety agencies in their oversight wherever possible. Conclusions The approaches to rail safety oversight used by FRA and FTA each have strengths and limitations. However, FTA’s program is currently in transition as the agency implements new authorities and responsibilities provided in federal law. Though FTA has made progress by evaluating existing rail transit safety standards and providing some guidance to states as part of the certification process, limitations in FTA’s approach may still hinder the success of the state-based rail transit safety oversight program. Given the looming 2019 deadline for state safety oversight programs to achieve FTA certification, FTA can improve its efforts to implement its new rail transit safety oversight program. In particular, without guidance from FTA on how to develop and carry out risk-based inspection programs, state safety agencies may not use limited resources efficiently, risking that important safety issues will go undetected. Further, without a method for how it will monitor the effectiveness of state safety agencies’ enforcement, FTA will lack the information needed to identify ineffective state enforcement, which risks allowing safety deficiencies to remain for long periods of time. 
By providing this additional guidance and direction to the state safety agencies, FTA would help ensure that states are able to effectively identify and resolve rail transit safety issues. Recommendations We are making the following two recommendations to FTA: The Office of Transit Safety and Oversight should create a plan, with a timeline, for developing guidance for state safety agencies about how to develop and implement a risk-based inspection program. (Recommendation 1) The Office of Transit Safety and Oversight should develop and communicate a method for how it will monitor the effectiveness of the enforcement authorities and practices of state safety agencies. (Recommendation 2) Agency Comments We provided a draft copy of this report to DOT, NTSB, and WMATA for review and comment. In written comments, reproduced in appendix I, DOT agreed with both our recommendations. DOT also provided technical comments, which we incorporated as appropriate. In e-mails, NTSB and WMATA provided technical comments, which we incorporated as appropriate. NTSB noted that we do not discuss the role of system safety initiatives, such as safety management systems, in the FRA and FTA rail safety oversight programs. We agree with NTSB that system safety concepts are increasingly influencing the FRA and FTA approaches to rail safety oversight but, as NTSB also noted, both FRA and FTA lack finalized regulations codifying their approaches to system safety initiatives. Because the extent of rail entities’ implementation of these initiatives varies and is not complete, we did not include an assessment of the strengths and limitations of those initiatives in our scope. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation, Chairman of NTSB, General Manager of WMATA, and the appropriate congressional committees. 
In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Transportation Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Steve Cohen (Assistant Director); Kyle Browning (Analyst in Charge); Melissa Bodeau; Lacey Coppage; Serena Lo; Sean Miskell; and Josh Ormond made key contributions to this report.
Why GAO Did This Study In 2012 and 2015, DOT was provided with additional authority to oversee the safety of rail transit. Within DOT, FTA is now implementing this authority. The DOT's Office of Inspector General has reported, though, that FTA faces challenges in carrying out its enhanced safety oversight. FRA, also in DOT, has long carried out safety oversight of freight, intercity passenger, and commuter railroads. GAO was asked to review various rail safety and oversight issues, including the differences between FRA's and FTA's rail safety oversight programs. This report examines (1) key characteristics of FRA's and FTA's rail safety oversight programs and (2) strengths and limitations of FRA's and FTA's rail safety oversight programs. GAO assessed FRA's and FTA's information about rail safety oversight activities against guidance from the Office of Management and Budget, leading practices developed by the transit industry, and federal standards for internal control. GAO also interviewed stakeholders, including rail operators chosen based on mode, size, and location. What GAO Found The Department of Transportation's (DOT) Federal Railroad Administration (FRA) and Federal Transit Administration (FTA) carry out different approaches to rail safety oversight. FRA has a more centralized safety oversight program for railroads, while FTA's program for oversight of rail transit safety largely relies on state safety agencies to monitor and enforce rail transit safety, as established in federal statute. Key characteristics of both programs include: (1) safety regulations, (2) inspections and other oversight activities, and (3) enforcement mechanisms to ensure that safety deficiencies are addressed (see figure). There are strengths and limitations to FRA's and FTA's approaches to their safety oversight missions, including how the two agencies develop safety regulations, conduct inspections, and carry out enforcement. 
The National Transportation Safety Board has reported, and stakeholders GAO spoke with generally agreed, that strengths of FRA's rail safety oversight program include its safety regulations, its risk-based inspection program, and its enforcement authorities. FRA also has potential limitations in its oversight framework, though, such as difficulty evaluating the effectiveness of its enforcement mechanisms. FTA has made some progress implementing changes to the rail transit safety program. However, FTA has not provided all the necessary guidance and support to state safety agencies to ensure they develop appropriate and effective rail transit safety inspection programs. In particular, FTA has not provided states with guidance on how to develop and implement risk-based inspection programs. Though FTA has said that it will develop such guidance, it does not have a plan or timeline to do so. Without guidance from FTA on how to develop and carry out risk-based inspections, state safety agencies may not allocate their limited resources efficiently, and important safety issues may go undetected. In addition, FTA has not developed a process or methodology to evaluate whether state safety agency enforcement authorities and practices are effective. Without clear evidence that state safety agencies' enforcement is effective, states and FTA may not be able to compel rail transit operators to remedy safety deficiencies. As a result, deficiencies may remain for long periods, potentially contributing to safety incidents. What GAO Recommends GAO recommends that FTA (1) create a plan, with a timeline, for developing risk-based inspection guidance for state safety agencies, and (2) develop and communicate a method for how FTA will monitor whether state safety agencies' enforcement practices are effective. DOT agreed with our recommendations. DOT, NTSB, and WMATA provided technical comments that we incorporated as appropriate.
Background Federally Recognized Indian Tribes The federal government has consistently recognized Indian tribes as distinct, independent political communities with inherent powers of limited sovereignty. The 2013 amendments in SRIA allow tribes to decide how to request federal disaster assistance, thereby allowing tribes to exercise their sovereignty. As of April 2018, there were 573 federally recognized Indian tribes, residing on more than 56 million acres. Thirty-six states have at least part of a tribe’s lands within their borders; relatively few tribes are located on the East Coast of the United States, while more than 300 tribes are located in Alaska and California. These tribes are each sovereign governments and vary in size, demographics, and location. For instance, Navajo Nation has the largest reservation covering over 17.5 million acres, stretching across New Mexico, Arizona and Utah, and is home to approximately 174,000 residents, while the Mashantucket Pequot Reservation in Connecticut covers over 2,000 acres and is home to about 350 residents. Only tribes that are federally recognized can make disaster declaration requests. The 10 FEMA Regions and the location of each regional office, along with the number of federally recognized tribes in each region, are illustrated in figure 1. Pre-Disaster Emergency Management Grants for Tribes Before a disaster occurs, tribes may need certain resources to assist in the development of their local emergency management capacity. In addition to offering technical assistance for certain administrative requirements, such as developing a hazard mitigation plan, FEMA administers four pre-disaster grant programs that tribes may access. These grant programs could provide tribes, either directly or as a sub-grantee through a state, with funds that would help support aspects of their emergency management capability. They are: Emergency Management Performance Grant (EMPG). 
The purpose of EMPG is to help build and sustain core emergency management capabilities. EMPG is particularly important for building the capacity to declare and manage a disaster, because it is the primary federal program for which salaries and training for emergency management personnel are allowable expenses. Only states and U.S. territories are eligible to receive EMPG funds directly. According to FEMA officials, after states receive EMPG funds, they make determinations about whether and under what conditions to provide the funds to tribes and local governments within their geographical boundaries. However, according to officials, not all states will distribute EMPG funds to tribes. State Homeland Security Program (SHSP). The purpose of SHSP is to help states and U.S. territories prevent, prepare for, protect against, and respond to acts of terrorism and otherwise reduce overall risk. Allowable expenses include, but are not limited to, equipment, training, and exercises. As with EMPG, states and territories receive SHSP funds and subsequently decide how to distribute them. Tribal Homeland Security Grant Program (THSGP). THSGP is a tribal-specific grant program intended to serve the same general purpose as SHSP. THSGP is available to tribes that meet one or more specific criteria, including comprising at least 1,000 square miles of Indian country or being near an international border, near prioritized critical infrastructure, or within or adjacent to one of the 50 most populous regions in the United States. Pre-Disaster Mitigation (PDM). A PDM grant primarily funds development and upkeep of hazard mitigation plans, but can be used for hazard mitigation projects as well. All nonfederal governments—including tribal governments—must have an up-to-date, FEMA-approved hazard mitigation plan in place before receiving disaster assistance following a major disaster declaration. 
Declaration Process for Major Disaster Declarations After a disaster, tribal chief executives may request federal assistance, if the disaster is of such severity and magnitude that effective response is beyond the capabilities of the affected tribal government and federal assistance is necessary. Tribes may make a request for assistance as a direct recipient, or they may join a state’s request as a sub-recipient. Similar to the state request process, FEMA Regional Administrators evaluate the tribe’s request and make a recommendation to FEMA headquarters. The FEMA Administrator then sends the recommendation to the President for a final decision as to whether the tribe’s, or a state’s, request for a major disaster declaration should be approved or denied. Figure 2 illustrates the process tribes follow to make a direct request or join a state’s request. Federal Disaster Assistance Available to Tribes Following a Major Disaster Declaration When a major disaster is declared, FEMA provides disaster assistance for eligible disaster recovery projects through the Disaster Relief Fund (DRF). The three types of post-disaster grants, through the DRF, that state governors or tribal chief executives may request are: (1) Public Assistance (PA), which provides grants for eligible emergency work and repairs or restoration to infrastructure. (2) Individual Assistance (IA), which provides assistance to individuals and households to meet their sustenance, shelter, and medical needs. (3) Hazard Mitigation Grant Program (HMGP), which provides grants for eligible projects to reduce the potential for future damage. Tribal Requests for Major Disaster Declarations from 2013 through 2016 According to FEMA data, between 2013 and 2016, 36 tribes made requests for disaster assistance as a direct recipient or by joining a state’s request. Of those 36 tribes: Fifteen tribes made a total of 17 direct requests to the U.S. President through FEMA for major disaster declarations. 
Eight of these requests were approved across 7 tribes. From 2013 through 2016, the Pueblo of Santa Clara, New Mexico was the only tribe approved for two major disaster declarations for severe storms and flooding in 2013. The remaining 9 direct requests were denied across 9 tribes. Twenty-nine tribes were sub-recipients under 36 state major disaster declaration requests. Eight tribes made a direct request and also joined at least one state request for a major disaster declaration. Figure 3 below shows the types of state requests tribes joined as well as the direct tribal requests that were approved and denied between 2013 and 2016. See appendix II for background information on the 36 tribes that made requests for disaster assistance and those that received pre-disaster grants during the study period. Tribes Considered Sovereignty, Finances, FEMA Support, and Emergency Management Capacity When Deciding How to Request a Disaster Declaration Officials from the tribes that responded to our survey and those we interviewed reported that there are several factors they took into consideration when deciding whether to make a direct request or to join a state’s request for a disaster declaration, during the 2013 to 2016 period. On the basis of the cumulative responses from these officials, we found that tribal sovereignty, financial matters, FEMA support, and the tribe’s emergency management capacity were key factors in their decision-making process. As shown in figure 4, the 23 survey respondents fall into three subsets, which together account for 29 direct and state requests made by the survey respondents. Tribal Sovereignty and Government-to-Government Relationship Nine of 10 survey respondents that made at least one direct request during the 2013 to 2016 period reported that tribal sovereignty was a major factor they considered when making a direct request. 
Two survey respondents reported that the new authority is of strategic importance for tribal sovereignty because they are no longer required to join a state’s request when seeking a major disaster declaration. For example, in instances where the state’s request for a major disaster declaration has been denied, tribes now have the option to request disaster assistance directly as a result of this new authority. This factor was also of practical importance for tribes with reservations located in more than one state or county. During our site visit interviews, officials from one tribe said it was a challenge to manage multiple state bureaucracies when the reservation spans multiple states. In some cases, portions of a reservation may not receive disaster assistance if one state—or county—did not request or receive a major disaster declaration. Officials from 5 tribes we visited said they prefer making direct requests because of the government-to-government relationship with the United States, and because working through the state as an intermediary impinged on their sovereignty. An official from one small rural tribe said that the tribe currently does not have the capacity to make a direct request but is taking the steps to do so in the future because it is important to their tribal sovereignty. Financial Considerations Tribal officials responding to our survey and interview questions reported that the potential to receive additional assistance from states to pay the non-federal cost share might influence them to join a state’s request. Conversely, the timeliness of reimbursement and the potential to receive administrative costs and HMGP grants might be factors in deciding to make a direct request. Eight out of 13 respondents that received disaster assistance only as a sub-recipient of a state reported that they had concerns about paying the required nonfederal cost-share. 
When managing disaster assistance grants as a direct recipient, a tribal government is solely responsible for the entire nonfederal cost share. On the other hand, if the tribe is a sub-recipient to a state request, the tribe may have a lighter financial burden since several states offer partial or full nonfederal cost share assistance to their local and tribal sub-recipients. For example, officials from one tribe said that there is a strong financial incentive to join a state’s request because the state reimburses the tribal government’s half of the cost share. In addition, some tribes may face financial hardship with the startup cost for recovery projects because PA and HMGP are reimbursement programs. For example, one tribal official said that it is especially difficult for small, rural, non-gaming tribes to find the financial capital to initiate recovery and hazard mitigation projects. While some tribes may have the money set aside for this purpose or may be able to secure loans to begin projects like the one illustrated in figure 5, other tribes are unable to start certain internal processes until the FEMA funds have been obligated. At a minimum, recipients have to present a scope of work before they can receive funds, the preparation of which usually requires the services of engineers or other technical experts. Therefore, the timeliness of the reimbursements, especially when the tribe is a sub-recipient under a state request, can result in financial challenges. For example, one tribal official we interviewed said that it takes much longer, on average, to request and receive reimbursement for recovery projects when the tribe has to submit the request through the state. Conversely, the official noted that reimbursement processes are typically much quicker when working directly with FEMA. 
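The cost-share arithmetic behind these considerations can be sketched in a few lines. This is an illustrative sketch only: the 75 percent federal share used here is the typical statutory minimum and is an assumption (actual shares are set per declaration), and the function name is hypothetical.

```python
# Illustrative sketch of the nonfederal cost-share arithmetic described above.
# Assumption: a 75 percent federal share, the typical statutory minimum;
# actual shares are set per declaration.

def nonfederal_share(total_eligible_cost, federal_share=0.75,
                     state_covers_fraction=0.0):
    """Return the amount a tribe must pay toward eligible recovery costs.

    As a direct recipient, the tribe pays the entire nonfederal share.
    As a sub-recipient, a state may cover some or all of that share
    (state_covers_fraction = 1.0 means the state pays it in full).
    """
    nonfederal = total_eligible_cost * (1 - federal_share)
    return nonfederal * (1 - state_covers_fraction)

# Direct request: the tribe owes the full 25 percent of a $1 million project.
print(nonfederal_share(1_000_000))                              # 250000.0
# Sub-recipient where the state reimburses half of the nonfederal share.
print(nonfederal_share(1_000_000, state_covers_fraction=0.5))   # 125000.0
```

Because PA and HMGP are reimbursement programs, the tribe would typically need to front these amounts (plus project startup costs) and wait for federal and any state reimbursement, which is why cost-share assistance from a state can weigh heavily in the decision.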
During our site visit interviews, officials from one tribe told us that they prefer to make direct requests so they could receive HMGP funds to make decisions about the hazard mitigation projects on their reservation. Generally, as a direct recipient, a state or tribe will receive HMGP funding based on a percentage, usually 15 percent, of the total amount of PA and IA funds received for the disaster recovery. HMGP funds can be used for eligible hazard mitigation projects or to create or renew hazard mitigation plans. Under a state declaration, the state receives these funds and can, at its discretion, use them anywhere within its boundaries for eligible projects. According to officials from one tribe, they can ensure they receive the total amount of HMGP funds to use on hazard mitigation projects within their own jurisdiction when they are a direct recipient. FEMA’s Policies, Guidance, and Technical Assistance Tribal officials’ confidence in the level of support they expected to receive from FEMA influenced their decision whether to make a direct request or to join a state. Specifically, in response to our survey, tribes that made direct requests largely reported that they believed FEMA’s policies and requirements would be clear enough for them to effectively navigate the processes and that timely and accurate information would be available. In contrast, multiple tribes that decided to join a state’s request reported that their concerns in those areas influenced their decisions to join a state’s request. FEMA Policies and Guidance Eight of the 10 tribes responding to our survey that requested a direct disaster declaration during the 2013 to 2016 period stated that the clarity of policy and guidance was a factor (five called it a major factor and three deemed it minor) in their decision making. 
Conversely, eight of the 13 tribes that only joined a state request reported that concern about FEMA’s policies and requirements being clear enough to seek a direct request was a factor in their decision to join a state request. During our site visit interviews, officials from 2 tribes discussed challenges they have experienced with FEMA’s policies and requirements for estimating IA-related damages. Applicants for IA, including owners and renters, must be able to prove they occupied the damaged dwelling, pre-disaster, as their primary residence before receiving assistance. However, according to tribal officials, many homeowners on reservations do not possess formal deeds to their home or do not carry insurance, making it difficult for FEMA to ensure that potential recipients of the IA funds meet the requirements of the program. According to FEMA officials, the agency has attempted to be flexible during the pilot phase of the tribal declarations program. For example, FEMA officials in one region told us that they would accept a tribal government’s declaration of home ownership in lieu of a formal deed. FEMA officials told us they will continue to evaluate how issues of homeownership will be adjudicated. In addition, during our site visit interviews, officials from 3 tribes discussed various types of difficulty with completing and maintaining the paperwork associated with recovery projects. For example, officials from a tribe stated that they are not equipped to manage and comply with processes such as permit requirements or federal procurement procedures and as a result are currently seeking to hire a full time emergency manager. Throughout the life of a major disaster declaration, tribal officials are required to maintain paperwork to document the recovery projects, which can require both physical and electronic recordkeeping systems, space, time, and expertise. 
For example, figure 6 below shows an example of the volume of paperwork needed to support and close out the recovery projects associated with a landslide in Washington State, according to the tribal and state officials involved. Nine of 10 tribes responding to our survey that were awarded a direct disaster declaration reported that a factor (six major and three minor) in their decision making was their determination that the availability of timely and accurate assistance from FEMA would help them successfully manage the request process. For tribes that only joined state requests, fewer tribes reported that concerns about receiving timely and accurate technical assistance affected their decisions than those that had concerns about the clarity of FEMA’s policy and guidance. Four of the 13 total tribes that only joined a state declaration cited concerns about having access to technical assistance as a factor (one called it a major and three deemed it minor). Damage Assessments After a disaster occurs, the first step in the declaration process is for the tribe to conduct an assessment of the impacts of the disaster to determine if there are needs that cannot be addressed with tribal resources or through insurance. Using this assessment—known as an initial damage assessment—a tribal government can determine what, if any, needs or damages are eligible for FEMA disaster assistance. If a tribe determines that such needs or damages are beyond its capabilities to address with its own resources or insurance, the next step is to request a Joint Preliminary Damage Assessment (Joint PDA) from their FEMA Regional Administrator so that FEMA and the tribe can go through a process of reaching agreement about what damages and needs are eligible. According to FEMA officials, the agency has assigned staff as dedicated Regional Tribal Liaisons (RTL) in all FEMA regional offices. 
RTLs help tribes maintain awareness of various program requirements, including those for conducting damage assessments and submitting requests for major disaster declarations. RTLs accomplish this role by connecting tribes with FEMA subject matter experts, who help tribes navigate the major disaster declaration processes and programs. During our site visit interviews, officials from 5 tribes discussed the importance of having a good working relationship with FEMA regional officials. Some of the steps FEMA has taken to provide technical assistance to tribes are discussed further below. Tribes’ Emergency Management Capacity Tribal officials’ confidence in the tribe’s capacity to manage the major disaster declaration process and subsequently administer the recovery without assistance from a state was a key factor in determining whether or not to seek a request directly or join a state request. Tribes, like states, have to carry out specific tasks and meet eligibility requirements to be able to make a direct request and manage the recovery processes for a major disaster declaration, as shown in figure 7. While states have had decades to develop the emergency management capacity needed to request and administer federal disaster assistance, tribes have had the opportunity to apply directly for federal disaster assistance since the passage of SRIA in 2013. Developing and maintaining such a capacity requires, among other things, having in-house knowledge or the ability to contract for (or otherwise access) specialized expertise to navigate through complex planning and processes. Multiple officials from tribes we interviewed and surveyed reported challenges building and maintaining emergency management capacity that affected their ability to make direct requests for, and manage the recovery effort associated with, a major disaster declaration. 
Specifically, 9 of 10 tribes responding to our survey that made a direct request said determining that their tribes had the emergency management capacity to successfully manage the major disaster declaration request process was a factor (6 identified it as minor, 3 as major). Conversely, 7 of the 13 tribes responding to our survey that only joined a state request said determining that they did not have the emergency management capacity to successfully manage the major disaster declaration request process was a factor in their decisions (4 identified as major, 3 as minor). As with the capacity to handle the declaration process, determining whether the tribe had the capacity to manage the recovery process, as illustrated in figure 7, also affected decision making. Officials from one tribe we interviewed who had not made direct requests told us that unless they have the emergency management capacity to manage both the request and the recovery process, they plan to continue joining states’ requests whenever possible. Tribal Hazard Mitigation Plan A Tribal Hazard Mitigation Plan describes sustained actions that may be taken by the tribal government to reduce or eliminate the long-term risk of future damage to human life and property from hazards. When making a direct request for a major disaster declaration, a tribal government must have a Federal Emergency Management Agency (FEMA)-approved Tribal Mitigation Plan that meets the requirements in 44 C.F.R. § 201.7 before receiving FEMA disaster assistance funds under certain programs. If electing to be a sub-recipient under a state’s major disaster declaration request, the tribal government may be eligible to receive disaster assistance funds through the state without having a Tribal Mitigation Plan. 
A tribal emergency management consultant who works with several tribes in one of the areas where we conducted site visits told us that the lack of a FEMA-approved tribal hazard mitigation plan limits the ability of many of these tribes to receive disaster funding. A hazard mitigation plan is required prior to a recipient being able to receive PA permanent work or HMGP. As of December 2017, 143 out of 567 tribes had a FEMA-approved Tribal Mitigation Plan, according to FEMA. In addition, the consultant reported that some tribes also lacked a designated emergency manager, that hiring one may be unaffordable, or that, in some cases, applicants lack the necessary qualifications. For another tribe, the designated emergency manager had several job titles, including the tribe’s first responder and fire chief, which the official said makes it difficult to dedicate the time required to hone the skills necessary to manage the FEMA declaration processes. The official recounted an attempt to develop a hazard mitigation plan that at the time of our interview was still incomplete due, in part, to the complexity of the FEMA guidelines. In such cases, tribes may need to hire a specialist to assist with this administrative requirement, but may not have the budget to do so. Another challenge tribal officials identified is that tribes face barriers to accessing federal pre-disaster funding that could help them build capacity to manage post-disaster grants following a successful declaration request. During our site visit interviews, officials from two tribes told us they have considered seeking federal grant opportunities to help enhance emergency management capacity, but the eligibility requirements for the tribal homeland security grant program, such as the requirement to be near designated critical infrastructure or within 100 miles of the border, precluded them from applying. They also said that they have received few, if any, state homeland security grant funds from states. 
EMPG pays for salaries and is the primary source of support for developing and maintaining the requisite emergency management expertise. According to the FEMA and tribal officials we spoke with, as well as grant data provided by FEMA, tribes receive relatively low amounts of EMPG funding (see table 1 below) through the states. Tribes are not eligible to apply directly to FEMA for EMPG funds. In addition, according to tribal officials, when tribes apply to states for EMPG funds, the states can impose conditions that impinge on tribal sovereignty. For example, one state requires tribes to waive their legal immunity and agree to follow state laws, which some tribal officials viewed as contradictory to their sovereignty. As a result, these officials said they choose not to apply for these grants through the states and have never received EMPG funds. FEMA officials acknowledged that tribes face challenges getting federal grant funds to help them enhance their emergency management capacity. According to the officials, there are statutory, policy, and budget considerations that limit their ability to make significant changes in the way such grant funds are distributed. However, they told us that they continue to work under their current authorities to assist tribes that seek to develop and maintain their emergency management capacity, primarily through training and technical assistance, as described later in this report.

FEMA Has Created Pilot Guidance for Tribes and Offers Training and Technical Assistance on Directly Requesting Disaster Declarations

Since the passage of SRIA in 2013, FEMA has implemented various policies tailored to tribes that wish to make a direct request to the President, through FEMA, for federal disaster assistance. In December 2013, FEMA issued a policy regarding coordination with tribal governments.
As part of this policy, FEMA committed to consulting tribal governments before taking proposed actions that would have a substantial direct effect on tribes. In addition, the policy recognized the tribes’ rights to self-governance and tribal sovereignty. Since 2013, according to FEMA officials, the agency has provided multiple opportunities through Federal Register notices and ongoing consultations for input into the development of the guidance that currently governs the tribal request process for major disaster declarations. Specifically, FEMA reported that it is implementing this authority in three phases: (1) use of existing regulations, (2) pilot period, and (3) rulemaking. During phase 1, from 2013-2016, FEMA processed tribal declaration requests using existing state declaration regulations in order to allow tribal governments the choice to use the new authority immediately and to provide time for consultation on drafts of the Tribal Declarations Pilot Guidance. In January 2016, FEMA published a draft of the Tribal Declarations Pilot Guidance and requested comments on the draft guidance through April 2016. Based on feedback received, FEMA issued a final version of the guidance, with which it will manage tribal declaration requests during the pilot phase, in January 2017. The publication of this guidance in January 2017 officially started phase 2, the pilot phase, of the tribal declarations implementation. FEMA officials told us that, before beginning the development of regulations on tribal disaster declarations, they intend to operate under the pilot guidance for at least 2 years. They noted that they cannot specify an exact date on which they expect to finalize the guidance because there is uncertainty about what kind of disasters will strike and where. According to officials, they have identified data they would like to collect to assess the guidance before finalizing it. 
Among other things, they said they plan to do economic analyses using quantitative data such as the types of disaster assistance requests from tribes (PA, IA, and HMGP) and the amount of funding allocated to tribes. In addition, these officials said they plan to conduct focus groups with tribal officials to learn more about how the disaster declaration policies and guidance have worked for tribal governments that used them. In the meantime, according to these officials, their aim is to be as flexible as possible while maintaining consistency with other relevant disaster regulations, so that they can respond to any unique challenges that arise in implementing this new authority. In addition to assessing how the pilot is working for tribes, FEMA has developed and implemented training to help tribes understand the disaster declaration process and provided technical assistance to tribes as needed, prior to, during, and after disasters. FEMA has offered training opportunities at the Emergency Management Institute in Emmitsburg, Maryland, and has hosted regional training workshops and consultations throughout the country. According to tribal officials, these training courses have helped increase tribes’ emergency management expertise. One of the offerings, Tribal Declarations Pilot Guidance, was a 1-hour briefing offered in multiple locations and provided to dozens of tribes and other government agencies. In addition, FEMA has RTLs in each regional office that are a primary point of contact for tribal governments that have questions or require technical assistance on FEMA programs. Officials from one tribe we visited told us they believe the technical assistance they received from a FEMA RTL was timely and thorough. These officials said the tribe contacted FEMA for assistance following the Tribal Council’s decision to declare a state of emergency on the reservation. 
According to the tribal officials, a fire had started on a Sunday and the FEMA team was on-site at the reservation by Wednesday to conduct a joint preliminary damage assessment with tribal officials. The officials also said they were impressed with FEMA's quick response on the damage assessment results, which they received within a week. The tribe did not ultimately request a major disaster declaration because the damage assessment fell short of the minimum damage amount at that time. However, officials from the tribe said the experience they gained was helpful for the tribe's emergency management staff and that they are now confident they will be able to conduct an initial damage assessment should a future disaster occur.

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Homeland Security and FEMA for review and comment. They provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

The objectives of this study were to examine (1) the factors that influence selected tribes' decisions about how to seek federal assistance through a major disaster declaration and (2) the actions the Federal Emergency Management Agency (FEMA) is taking to assist tribal efforts to request and manage disaster declarations.
To report on the factors that tribes consider when seeking federal disaster assistance, we reviewed FEMA's pilot guidance for tribal disaster declarations that was published in January 2017 and discussed the program's pilot plans with key agency officials. We also interviewed officials from two national tribal organizations (the National Congress of American Indians and the National Tribal Emergency Management Council) and FEMA to develop a preliminary list of potential factors that may influence a tribe's decision to make a direct request or to join a state's request as a sub-recipient. Using these factors, we developed a survey with both closed and open-ended questions. To minimize nonresponse error, we pretested the survey instrument with officials from two tribes in FEMA Regions VI and X (see figure 1) to ensure the questions were clear and unbiased and that the survey questions were culturally appropriate. We also consulted tribal officials during a FEMA training course and held additional interviews with officials from tribal organizations to ensure that the questions were clear, understandable, and appropriate. An independent reviewer within our agency also reviewed a draft of the survey prior to the pretests. We made appropriate revisions to the content and format of the questionnaire based on the pretests and independent review. We sent our survey to the 36 tribal governments that either (1) received declaration funds through a direct request, (2) received declaration funds as a sub-recipient of a state's request, or (3) made a direct request that was denied between January 2013 and December 2016. The time period we chose spans from the year SRIA was enacted through the most recent calendar year in which a full year of data on major disaster declarations was available when we began this work.
Using e-mail addresses provided by FEMA Regional offices, we emailed the survey in an attached document that respondents could complete electronically or by hand and return via email or mail. We sent an invitation letter to the tribes on July 12, 2017, informing them of the purpose of the survey and the date it would be sent. We then sent the survey on July 18, 2017, and solicited survey responses from August 7, 2017, until January 12, 2018, by phone and email. We received completed surveys from 23 of the 36 tribes in the target population. We compared selected characteristics of the tribes responding to the survey with the same characteristics of the 36 tribes in the target population, as well as the completion of individual questions, and did not find evidence of nonresponse bias. The final survey questionnaire is in appendix III. To complement the survey responses, we conducted site visits to 7 tribes selected from among the 23 tribes that responded to our survey. The objectives of these site visits were to obtain additional information from the tribal officials regarding the factors influencing their disaster declaration decisions during this period. We also observed recent disaster damage, ongoing recovery projects, and aspects of each tribe's emergency management capability. We selected these sites so that, as a set, they included a mixture of tribes that had participated in direct declarations and in state declarations as sub-recipients; tribes whose declaration requests were granted and tribes whose requests were denied; and tribes located in different FEMA regions. The selected tribes are located in Arizona, New Mexico, Washington, and Idaho, representing FEMA Regions VI, VIII, IX, and X. During our site visits, we interviewed tribal executives and emergency management officials and toured completed projects.
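The nonresponse check described above can be sketched as a simple comparison of characteristic distributions between survey respondents and the full target population. This is only an illustrative sketch: the characteristic (declaration type), the counts, and the 10-percentage-point threshold are hypothetical assumptions, not GAO's actual analysis.

```python
# Illustrative nonresponse-bias check: compare the share of each characteristic
# among respondents with its share in the target population. All counts below
# are hypothetical, not GAO's actual figures.

def proportions(counts):
    """Convert a dict of counts into a dict of shares summing to 1."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Hypothetical breakdown of the 36-tribe population and 23 respondents
population = {"direct": 12, "state_only": 20, "denied": 4}
respondents = {"direct": 8, "state_only": 13, "denied": 2}

pop_p = proportions(population)
resp_p = proportions(respondents)

# Flag any characteristic whose respondent share differs from the population
# share by more than a chosen threshold (here, 10 percentage points).
THRESHOLD = 0.10
flags = {k: abs(resp_p[k] - pop_p[k]) for k in population}
biased = {k: d for k, d in flags.items() if d > THRESHOLD}

for k in population:
    print(f"{k}: population {pop_p[k]:.2f}, respondents {resp_p[k]:.2f}, diff {flags[k]:.2f}")
print("potential nonresponse bias on:", sorted(biased) or "none detected")
```

With these hypothetical counts the respondent mix tracks the population mix closely, so no characteristic is flagged; in practice an analyst would also repeat the comparison for item-level completion rates, as the methodology describes.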
Although the information gathered from our survey and site visits cannot be generalized across the tribes, our observations and the tribal officials' responses underscored the uniqueness of each tribe and each disaster, as well as offering important details regarding the opportunities and challenges for tribes under this new authority. To report on related FEMA grant funds obligated from 2013 through 2016, we collected Homeland Security Grant Program, Tribal Homeland Security Grant Program, Emergency Management Performance Grant, and Pre-Disaster Mitigation grant data from FEMA Grants Program Division officials. We selected these programs because they provide pre-disaster grant funds to states and tribes that are, in part, intended to enhance grantees' emergency management capacity. To assess the reliability of these data, we performed electronic data testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about the collection and processing of these data. We determined these data to be sufficiently reliable for the purposes of reporting FEMA's awards of these grant funds. To address the second objective, we reviewed federal documentation, such as FEMA's Tribal Declarations Pilot Guidance and the federal regulations and statutes governing the major disaster declaration process, to see what actions FEMA has taken specifically related to tribes' requesting and managing major disaster declarations. We also reviewed disaster-related documentation provided by tribal governments and available online, including correspondence between tribes and FEMA, testimony statements, and additional documents that provided details of tribes' experiences requesting and managing major disaster declarations.
In addition, we interviewed officials from the two aforementioned national tribal organizations to discuss any successes or challenges they were familiar with related to the new authority that allows tribes to request a major disaster declaration directly from the President of the United States. During our interviews with tribal organizations and tribal officials, we examined challenges related to implementing the new authority and carrying out the various requirements associated with requesting and managing a major disaster declaration. We also interviewed FEMA officials about the actions they had taken to help tribes make informed decisions about whether they would prefer to exercise the new authority. In addition, we interviewed FEMA officials about how they assisted tribes that were considering whether to exercise the new authority and how to do so, if desired, as well as what, if any, steps they had taken to address the challenges identified by tribes. For example, we discussed what actions FEMA has taken to assess the pilot program, offer training opportunities, and provide technical assistance to tribes that seek to enhance their emergency management capacity. We also attended a tribal emergency management conference in June 2017, attended a FEMA tribal emergency management training session in person in March 2017, and attended two FEMA-sponsored webinars designed specifically for tribal participants. We conducted this performance audit from October 2016 through May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
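The "electronic data testing for obvious errors in accuracy and completeness" described in the methodology can be sketched as a set of automated record checks. The field names, validation rules, and sample records below are hypothetical assumptions for illustration, not FEMA's actual data layout.

```python
# Illustrative completeness and accuracy checks on grant records.
# Field names, valid program codes, and sample values are assumed, not FEMA's.

records = [
    {"tribe": "Tribe A", "program": "EMPG", "fiscal_year": 2014, "obligated": 35000.0},
    {"tribe": "Tribe B", "program": "PDM",  "fiscal_year": 2016, "obligated": 120000.0},
    {"tribe": "",        "program": "EMPG", "fiscal_year": 2015, "obligated": -500.0},
]

VALID_PROGRAMS = {"SHSP", "THSGP", "EMPG", "PDM"}

def check_record(r):
    """Return a list of obvious completeness/accuracy problems in one record."""
    problems = []
    if not r["tribe"]:
        problems.append("missing tribe name")            # completeness check
    if r["program"] not in VALID_PROGRAMS:
        problems.append("unknown program code")          # accuracy check
    if not 2013 <= r["fiscal_year"] <= 2016:
        problems.append("fiscal year outside study period")
    if r["obligated"] < 0:
        problems.append("negative obligation amount")
    return problems

# Map record index -> list of problems, keeping only records that fail a check
errors = {i: check_record(r) for i, r in enumerate(records) if check_record(r)}
print(errors)  # only the third record fails (missing name, negative amount)
```

Flagged records would then be resolved through follow-up with the officials who collect and process the data, as the methodology describes.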
Appendix II: Grant Funds Received by Tribes that Requested or Joined a State's Major Disaster Declaration, 2013-2016

Pre-Disaster Grant Programs

State Homeland Security Program (SHSP) provides funding to support states' implementation of homeland security strategies to address the identified planning, organization, equipment, training, and exercise needs at the state and local levels to prevent, protect against, respond to, and recover from acts of terrorism and other catastrophic events.

Tribal Homeland Security Grant Program (THSGP) provides funding to eligible tribes to strengthen their capacity to prevent, protect against, mitigate, respond to, and recover from potential terrorist attacks and other hazards.

Emergency Management Performance Grant (EMPG) program provides funding to assist local, tribal, territorial, and state governments in enhancing and sustaining all-hazards emergency management capabilities.

Pre-Disaster Mitigation (PDM) grant program provides funds to communities for hazard mitigation planning and the implementation of mitigation projects prior to a disaster event. Funding these plans and projects reduces overall risks to life and property and the future cost of recovering from a disaster event. The goal of the program is to reduce overall risk to the population and structures, while at the same time also reducing reliance on Federal funding from actual disaster declarations.

Post-Disaster Grant Programs through a Major Disaster Declaration

Individual Assistance (IA) provides financial assistance to individuals.

Public Assistance (PA) provides financial assistance to jurisdictions for debris removal, emergency protective measures, and the restoration of disaster-damaged, publicly-owned facilities and the facilities of certain private nonprofit organizations, such as utilities.
Hazard Mitigation Grant Program (HMGP) provides additional funds to assist communities in implementing long-term measures to help reduce the potential risk of future damages to facilities.

Appendix III: GAO Survey to Tribes that Requested or Joined a State's Major Disaster Declaration, 2013-2016

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kathryn Godfrey (Assistant Director), R. Denton Herring (Analyst-In-Charge), Pat Donahue, Dainia Lawes, and Claudia Rodriguez made key contributions to this report. In addition, Eric Hauswirth, Susan Hsu, Tracey King, Gary Malavenda, Jeff Malcolm, and Heidi Nielson also provided assistance.
Why GAO Did This Study

Since the Sandy Recovery Improvement Act (SRIA) of 2013, federally recognized Indian tribes affected by major disasters have had the option to make disaster declaration requests directly to the President of the United States or join a state's request for federal disaster assistance. Prior to this, tribes had to receive assistance through a state. GAO was asked to assess the implementation of this new authority. This report addresses (1) the factors that influenced selected tribes' decisions about how to seek federal disaster assistance, and (2) the actions FEMA has taken to help tribes exercise the new authority. GAO analyzed FEMA's pilot guidance for tribal declarations and interviewed FEMA and tribal emergency management experts. GAO also surveyed the 36 tribes that made requests for disaster assistance in fiscal years 2013-2016 about the factors that influenced their decision making. Twenty-three tribes responded. GAO visited seven tribes selected from among the survey respondents to represent different FEMA regions and disaster types. The site visits cannot be generalized but provided valuable insights into the opportunities and challenges of exercising this new authority.

What GAO Found

According to tribal officials GAO surveyed and interviewed, there are several factors they considered when deciding whether to make a direct request or to join a state's request for a major disaster declaration. Key factors that tribes reported considering were (1) the importance of tribal sovereignty, (2) financial matters such as the timeliness with which they receive funds, (3) the level of support they anticipated receiving from the Federal Emergency Management Agency (FEMA), and (4) their own emergency management capacity. For example, survey results showed that tribal officials' confidence in their capacity to manage the declaration was a key factor in determining whether to make a request directly.
Specifically, various elements of emergency management capacity, as illustrated below, could affect tribes' ability to manage a declaration. FEMA has developed pilot guidance for tribal declarations and solicited comments from tribes, as part of its effort to consider the needs of tribes and develop regulations. According to FEMA officials, they are currently assessing the effectiveness of policies and procedures based on data collected from tribal declarations since the passage of SRIA. These officials said they intend to begin the rulemaking process as soon as 2 years into the pilot, but may delay if they cannot collect enough data about different disaster situations during that time to conduct a complete analysis. Until the regulations are final, officials say they will exercise flexibility whenever possible. In addition, the agency offers training on the tribal declaration process and has dedicated staff who act as primary points of contact for tribal governments that require technical assistance.

What GAO Recommends

GAO is not making any recommendations in this report.
Background

In 1984, the Commercial Space Launch Act gave DOT the authority, among other things, to license and monitor the safety of commercial space launches and to promote the industry. Executive Order 12465 designated DOT as the lead federal agency for enabling private-sector launch capability. The Office of Commercial Space Transportation and its responsibilities, which were originally within the Office of the Secretary of Transportation, were transferred to FAA in 1995. The U.S. commercial space launch industry has achieved several milestones since 1984. For example, in recent years SpaceX, a commercial space launch company, has successfully tested reusable elements of expendable launch vehicles and landed them back on land and on an off-shore landing vessel called a drone ship. In addition, the industry is changing with the emergence of some suborbital launch vehicles that are capable of being launched into space more than once and can enable space tourism. For example, Blue Origin has successfully launched and landed the vehicle it intends to use in the future for space tourism. By adding an expendable upper stage, suborbital vehicles can also be used to transport small satellites to orbit. Furthermore, although licensed launches historically took place at federal launch sites such as Cape Canaveral Air Force Station and the National Aeronautics and Space Administration's Kennedy Space Center, launch sites now can be private spaceports or FAA-licensed launch sites. One launch site is co-located at an airport that has scheduled commercial airline flights and other spaceports are used for general aviation. As of August 2017, there were 10 licensed launch sites in the United States.
The Office of Commercial Space Transportation works with other FAA lines of business such as: the Air Traffic Organization on integrating licensed launches and permitted activities in the national airspace, the Office of Airports regarding airports that seek to be or already are licensed launch sites, and the Office of Aviation Safety on launch vehicles that follow aircraft rules and can be used for commercial space activities. In fiscal year 2017, the Office of Commercial Space Transportation had 104 full-time equivalent positions and an operations budget of $19.8 million—an increase of 20 full-time equivalent positions and $2 million over fiscal year 2016. FAA has a staff of over 40,000 people and a budget of $16.4 billion in fiscal year 2017. According to the Office of Commercial Space Transportation, its workload has increased significantly in recent years, particularly regarding pre-application consultations for launch and launch site licenses. The Office of the Secretary of Transportation has offices that are responsible for policy, legal, and government affairs among other issues. In 1987, the House Appropriations Committee recommended that DOT perform a comprehensive organization and management study of the Office of Commercial Space Transportation with the objectives of eliminating duplication of activities carried out by offices within the Office of the Secretary and DOT modal administrations, and determining potential areas for streamlining operations. In 1991, DOT asked the National Academy of Public Administration to analyze and evaluate the key organizational and management issues facing the Office of Commercial Space Transportation which at that time was located in the Office of the Secretary of Transportation. 
The report considered organizational options for the office including establishing an independent regulatory office, merging the office into an existing operating administration such as FAA, transferring this office to bureau status in DOT, or creating a new operating administration in DOT. According to the report, three of the study's five panel members stated that they believed that the office should be removed from the Office of the Secretary of Transportation and established as an operating administration because its mission was inconsistent with the broad and cross-cutting organizations within the Secretary's Office that are focused on policy, budget, and administrative issues.

Stakeholders Cited Various Perspectives on Moving the Office of Commercial Space Transportation

Representatives from the commercial space launch companies and spaceports we spoke to described both potential advantages and disadvantages of moving the office, but most of them favored moving the office. On the other hand, most FAA officials we interviewed did not favor the idea. A senior official in the Office of Commercial Space Transportation said that there are advantages and disadvantages to moving the office and that whether such an action would be beneficial depended on the implementation details and the administration's preferences. Officials from the Office of the Secretary of Transportation said they currently do not have plans to move the office. Stakeholders and officials provided perspectives on what they believe might result from a move including discussions regarding communicating with the industry and coordinating within FAA, program operations, updating regulations, and obtaining resources for the office, and other issues.
Stakeholder Perspectives on Advantages and Disadvantages of a Move

Communication and Coordination

Officials in the Office of the Secretary of Transportation said a possible advantage of moving all or part of the office would be having a unified point of contact for communicating with the industry on commercial space launch issues. Representatives from a commercial space launch company also said that rather than working with various FAA offices, they would like there to be a "one-stop shop" for commercial space launch issues and a senior official in the Office of Commercial Space Transportation indicated the office's original purpose was to fulfill that role. Some company representatives further explained that although they generally work with the Office of Commercial Space Transportation on licensing issues and with the Air Traffic Organization on airspace access, in some cases, the lines of responsibility between the two offices are not clearly defined. Furthermore, a spaceport official said that in addition to working with the Office of Commercial Space Transportation he also needs to work with FAA's Office of Airports, which reviews the effects of spaceports on airports, among other responsibilities. In discussing these issues with the senior official involved in the Air Traffic Organization's emerging technologies integration efforts, the official said that although there is overlap and a need for more communication between the Office of Commercial Space Transportation and the Air Traffic Organization, coordination between the two offices is improving. He also said that later this year or early next year, FAA plans to start an aviation rulemaking advisory committee that will help to determine airspace access priorities for all national airspace users.
Similarly, an official from the Office of Airports said the office is developing standard operating procedures and a memorandum of understanding with the Office of Commercial Space Transportation to resolve issues. A spaceport official who has been working on a launch site operator’s license application for several years confirmed that coordination among various FAA offices on commercial space launch issues has significantly improved during the last 6 months. Furthermore, several FAA senior officials said that moving the office could make it more difficult for FAA offices to coordinate on commercial space activities. For example, a senior official involved in the Air Traffic Organization’s emerging technologies integration efforts said that although such a move may increase the visibility of the Office of Commercial Space Transportation, it would not necessarily improve airspace integration. In addition, written responses to our questions from the Office of Aviation Safety indicated that their ability to interact with the Office of Commercial Space Transportation at an internal agency level may be less cumbersome than having to go through the additional communication protocols at the level of the Office of the Secretary of Transportation. Similarly, officials from the Office of Airports indicated that coordinating airspace review is an inherently FAA function that uses the experience and knowledge of subject matter experts located within the FAA, and that moving the commercial space office to the Office of the Secretary of Transportation could affect the efficiency of these reviews. Officials from the Office of the Secretary of Transportation also said that even if the commercial space transportation office were moved to their office, they would still need to work with FAA on airspace access issues and that they would not necessarily favor the industry regarding airspace issues. 
Moreover, FAA officials we interviewed said they are working on improving commercial space coordination through various working groups, particularly through the Commercial Space Transportation Executive Working Group that was formed earlier this year to coordinate on commercial space issues. This group is chaired by the official directing commercial space integration in the Office of Commercial Space Transportation and is comprised of executives from across the agency, including the Air Traffic Organization, the Office of Airports, and the Office of Aviation Safety. According to the group’s chairman, this group was formed to formalize coordination on commercial space launch issues across the agency because there was confusion among commercial space stakeholders and across the agency, and commercial space launch companies were hearing different things from different FAA lines of business. The group’s chairman said that the Executive Working Group reports to FAA’s New Entrants Board, a group formed to provide status updates on activities and events as well as decide how to move forward on specific initiatives associated with new entrants to the airspace such as drones and commercial space launch vehicles and is comprised of the principal leaders of FAA lines of business working on these issues. An FAA senior official told us that he believes commercial space coordination issues will be resolved as launches become more routine. In addition, the Air Traffic Organization has formed an Emerging Technologies Integration Office to focus on integrating commercial space operations and unmanned aircraft system activities within the national airspace system. 
A senior official in that office said that for decades, the Air Traffic Organization was focused on airplanes and that any deviation in airplane flow was viewed as an impediment, but that his office's goal is to shift the understanding within the organization from an airplane-only focus to the idea that several types of vehicles can use the national airspace system.

Program Operations

A representative from one commercial space launch company said that an advantage of moving the Office of Commercial Space Transportation and thereby making space transportation its own mode, is that it could facilitate a more "level playing field" for space activities operating in and through the national airspace system. The representative noted that the Air Traffic Organization is a much larger office than the Office of Commercial Space Transportation and is focused on aviation safety which is regulated differently than space activities. As a result, the representative said that companies perceive an unequal playing field between these two offices and the risk of negative effects if aviation standards are imposed on space, including airspace closures during launch and reentry. According to the representative, because of the Air Traffic Organization's lack of familiarity with space launch operations and the mechanics of placing a spacecraft into orbit or on a trajectory to another celestial body, the office has suggested launch times be limited to certain times of day and certain days of the month as dictated by the amount of air traffic. The representative said that the Air Traffic Organization's proposed approach is "untenable" for commercial space launches because launch times are dictated by orbital mechanics and that the Air Traffic Organization has imposed airspace restrictions during the holidays that have required launches to be rescheduled.
A representative from another commercial space launch company said that an unequal playing field between these two offices results in the Office of Commercial Space Transportation not having practical authority commensurate with its responsibility. According to this representative, this mismatch creates confusion over authority, negatively affects when commercial space companies are able to launch, and results in excessive time and volume of airspace closed during a launch. A representative from a third company said that there are multiple variables to consider about moving the office. The representative said that while moving the Office of Commercial Space Transportation to the Office of the Secretary of Transportation would provide it with much more visibility, the office may still be at a disadvantage when it disagrees with larger offices in the FAA. In addition, the representative said that most launch companies would still have to work with FAA on air traffic control issues as well as on hybrid vehicles and experimental aircraft licenses. Moreover, a representative from the Commercial Spaceflight Federation said that although the association does not have a consensus position on moving the Office of Commercial Space Transportation, its members are concerned that the Air Traffic Organization is attempting to treat the rapidly developing area of commercial space similarly to how it treats the mature commercial aviation industry. In response to these comments, an Air Traffic Organization official told us that the airspace is closed to commercial space launches for about 15 days per year during the holidays because a launch can affect hundreds of flights, and that the Air Traffic Organization prefers that launches occur when there are fewer effects on the national airspace system, for example, at night. However, an official said that the Air Traffic Organization has only denied one launch request over the last 5 years. 
An Air Traffic Organization official also said that they do not regulate the commercial space launch industry and focus on providing safe access to the airspace by all users of the national airspace system. In addition, an official involved with commercial space integration in the Office of Commercial Space Transportation and a spaceport representative told us they expect that technology will allow for more efficient use of the national airspace in the future by reducing the amount of time that the airspace will need to be shut down for launches.

Regulations

Some stakeholders said that moving the Office of Commercial Space Transportation could help accelerate the pace of updating regulations to reflect new technology, which they said was proceeding too slowly. A senior official in the Office of Commercial Space Transportation said that instead of competing with other FAA offices for rulemaking approval within the agency, moving the Office of Commercial Space Transportation to the Office of the Secretary might give the commercial space office a higher priority with regard to rulemaking. However, officials from the Office of the Secretary of Transportation said that regulatory rulemakings are not allocated by office but are set according to the priorities of each administration, so moving the office would not necessarily affect regulatory reform efforts.

Resources

According to some stakeholders and a senior official in the Office of Commercial Space Transportation, moving the office out of FAA could give commercial space launch issues a higher profile and more resources because FAA is focused on aviation as opposed to commercial space. One stakeholder also said moving the office out of FAA would make the office a priority as an independent organization within DOT. 
Furthermore, a senior official in the Office of Commercial Space Transportation said that the office has reached the limits of what it can accomplish with existing resources, policies, and authorities, and that moving the office could enable industry growth. In addition, a company representative said that the primary possible advantage of moving the office would be to have an Assistant Secretary for Commercial Space Transportation who would be in a leadership position to represent the growing industry directly to the Secretary of Transportation. However, officials from the Office of the Secretary of Transportation said that it is uncertain whether the Office of Commercial Space Transportation would receive more resources if it were moved to the Secretary’s office. In addition, some stakeholders said that if moved, the office would have to pay for support services that are currently available within FAA, such as legal, regulatory, human resources, and administrative support.

Other Issues Noted by Stakeholders and Officials

A commercial space launch company representative suggested that the Office of Commercial Space Transportation’s promotional responsibilities should be separate from its regulatory responsibilities to avoid even the appearance of a conflict of interest between regulating safety and promoting a company interest, but did not suggest that its promotional responsibilities had affected safety. In addition, a senior FAA official said that it would make sense to move the Office of Commercial Space Transportation’s promotion duties out of FAA because of an inherent conflict with the office being both a promoter and a regulator. Officials from the Office of the Secretary said transferring the policy and promotion aspects of the Office of Commercial Space Transportation’s work to the Secretary’s office, but not the launch licensing responsibilities, is one of various options regarding the office but that they have not advanced a specific proposal. 
A senior official in the Office of Commercial Space Transportation said there is no specific office within the Office of Commercial Space Transportation that promotes the industry and that the office’s promotional functions are part of its overall responsibilities, so moving only the promotional responsibilities would not be feasible. A former DOT official who served in a senior position when the Office of Commercial Space Transportation was transferred to FAA in 1995 noted that one reason the office was moved was the belief that the Office of the Secretary of Transportation should not be involved in programmatic activities that belong in the operating agencies. However, in 2014, Congress moved a programmatic office, the Research and Innovative Technology Administration (RITA), to the Office of the Secretary of Transportation. This former DOT official also said that the Office of Commercial Space Transportation would benefit from the technological and engineering support available within FAA. Finally, representatives from commercial space launch companies and an FAA official had different perspectives on whether the Office of Commercial Space Transportation would or should be its own modal agency within DOT or part of the Office of the Secretary of Transportation. For example, a company representative who favored moving the office said that commercial space could easily be considered its own transportation mode rather than part of aviation. Another company’s representative expected that the Office of Commercial Space Transportation, if it were moved out of FAA, would start out as its own modal agency. A third stakeholder suggested that eventually space transportation will become its own independent mode of transportation, like air, sea, rail, and road transportation, and that moving the Office of Commercial Space Transportation out of the FAA is an inevitable first step in that direction. 
A senior official from the Office of Commercial Space Transportation said that moving the Office of Commercial Space Transportation would be a step toward considering commercial space transportation as a mode similar to rail or highway transportation.

Steps Can Be Taken through DOT’s Rulemaking Process to Move the Office of Commercial Space Transportation

All or part of the Office of Commercial Space Transportation can be transferred back to the Secretary’s office through a rulemaking process that amends the existing DOT delegation regulation. This process, which does not require congressional approval, was used in 1995 when the Secretary of Transportation delegated the office’s responsibilities from DOT to FAA. FAA officials and the former Deputy Secretary of Transportation said moving the Office of Commercial Space Transportation from the Office of the Secretary of Transportation to FAA in 1995 was a “seamless” process. FAA and DOT officials said the following steps would need to be taken to move the office:

- Equivalent salaries would need to be determined for employees who are transferring, because FAA and DOT have different pay scales.
- Legal, human capital, and administrative support currently provided by FAA would need to be obtained from DOT.
- New physical space for the office would likely need to be obtained, as FAA and the Office of the Secretary of Transportation are in different buildings.
- New processes and procedures for coordination and communication would need to be established.

Key Practices and Considerations for Organizational Changes

Our prior work has identified key practices and questions for consideration when evaluating proposals for or implementing organizational changes such as a consolidation or merger. 
We have previously found that large-scale change-management initiatives, such as mergers and organizational transformations, are not simple endeavors and require the concentrated efforts of both leadership and employees to realize intended synergies and to accomplish new organizational goals. We have found that mergers and transformations that incorporate strategic human capital management approaches will help to sustain agency efforts and improve the efficiency, effectiveness, and accountability of the federal government. These key merger and transformation practices include focusing on a key set of principles and priorities at the outset of the transformation, setting implementation goals and a timeline to build momentum and show progress, and establishing a communication plan. Questions to consider when evaluating consolidation proposals include (1) What are the goals of the consolidation? and (2) What will be the likely costs and benefits of the consolidation? Based on these key practices and considerations, DOT and FAA, for example, would need to determine the purpose of moving the Office of Commercial Space Transportation and the costs and benefits of such a move. Furthermore, to ensure employee and management support, DOT and FAA would need to obtain the buy-in of various FAA offices involved in commercial space launch issues, such as the Air Traffic Organization. In addition, to assess the costs of the transformation, DOT and FAA would need to determine the costs of any additional support that would be needed after moving to the Office of the Secretary, such as legal and administrative support. Moreover, DOT could consider the risk of unintended consequences of moving the office, such as incurring additional costs. In addition, a spaceport representative told us that he is more concerned about the execution of moving the office than its placement. 
The representative said that although conceptually moving the office to the Office of the Secretary of Transportation could bring it more visibility and resources, the move would be futile if it is executed poorly. Therefore, if a decision were made to move the office, an implementation plan would be needed, consistent with our key merger and transformation practices. Implementing a large-scale organizational transformation requires the concentrated efforts of both leadership and employees to accomplish new organizational goals. Agencies should have an implementation plan that includes essential change-management practices such as active, engaged leadership of executives at the highest possible levels; a dedicated implementation team that can be held accountable; and a strategy for capturing best practices, measuring progress toward the established goals of the consolidation, retaining key talent, and assessing and mitigating risk, among others. Table 1 of appendix I lists the key practices and implementation steps that we have previously identified for mergers and organizational transformations. Table 2 of appendix I provides the key questions we have identified for evaluating proposals to consolidate physical infrastructure and management functions. Although moving the office does not involve a consolidation, we believe that many of these questions would apply to other organizational changes such as an office move.

Agency Comments

We provided a draft of this report to DOT for review and comment. DOT provided technical comments via email, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Transportation, the Administrator of the Federal Aviation Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Key Practices for Mergers and Organizational Transformations and Questions to Consider for Consolidations

Key Questions

- What are the goals of the consolidation? What opportunities will be addressed through the consolidation and what problems will be solved? What problems, if any, will be created?
- What will be the likely costs and benefits of the consolidation? Are sufficiently reliable data available to support a business-case analysis or cost-benefit analysis? How can the up-front costs associated with the consolidation be funded?
- Who are the consolidation stakeholders, and how will they be affected? How have the stakeholders been involved in the decision, and how have their views been considered? On balance, do stakeholders understand the rationale for consolidation?
- To what extent do plans show that change-management practices will be used to implement the consolidation? (Please see table 1 for the key merger and transformation practices.)

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Cathy Colwell (Assistant Director); Bob Homan (Analyst-in-Charge); Maureen Luna-Long; Dave Hooper; SaraAnn Moessbauer; and Sarah Veale made key contributions to this report.
Why GAO Did This Study

The Office of Commercial Space Transportation, which regulates and promotes the U.S. commercial space launch industry, was established in 1984 within the Office of the Secretary of Transportation and transferred to FAA in 1995. In 2015, GAO reported that the Office of Commercial Space Transportation faced challenges associated with the growth of the commercial space launch industry such as licensing more launches. To help meet these and other challenges such as updating regulations, some industry stakeholders and others suggested that the Office of Commercial Space Transportation should be moved back to the Office of the Secretary of Transportation. GAO was asked to review issues regarding transferring the Office of Commercial Space Transportation from FAA to the Office of the Secretary of Transportation. This report addresses: (1) selected stakeholders' and officials' perspectives on transferring the Office of Commercial Space Transportation from FAA to the Office of the Secretary of Transportation, (2) what steps would be required to make this transfer, and (3) key practices and considerations GAO has previously identified for organizational changes that could be instructive for such a transfer. GAO interviewed industry stakeholders and FAA and DOT officials, reviewed the steps taken during the office's 1995 transfer, and reviewed prior reports on key practices and questions to consider regarding organizational changes. GAO is making no recommendations in this report.

What GAO Found

Representatives from commercial space launch companies and spaceports GAO interviewed described advantages and disadvantages of moving the Office of Commercial Space Transportation to the Office of the Secretary of Transportation, but most of them favored moving the office. Conversely, most Federal Aviation Administration (FAA) officials GAO interviewed did not favor the idea. 
A senior official in the Office of Commercial Space Transportation said that there are advantages and disadvantages to moving the office and that whether such an action would be beneficial depends on the implementation details and the administration's preferences. Officials from the Office of the Secretary of Transportation said they currently do not have plans to move the office. Stakeholders' and officials' perspectives are based on what they perceive could occur as a result of a move, for example:

- Communication and coordination: Department of Transportation (DOT) officials said that a possible advantage of moving the office would be having a unified point of contact for the industry in communicating about commercial space launch issues, while FAA officials said that moving the office could make it more difficult for FAA offices to coordinate on commercial space activities.
- Regulations: Some stakeholders said that moving the office could help accelerate the pace of commercial space regulatory reform, but DOT officials said that moving the office would not necessarily do so.
- Resources: According to some stakeholders and a senior official in the Office of Commercial Space Transportation, moving the office out of FAA could give commercial space launch issues a higher profile and more resources because FAA is focused on aviation as opposed to commercial space. However, officials from the Office of the Secretary of Transportation said that it is uncertain whether the office would receive more resources if it were moved to the Secretary's office.

The Secretary of Transportation could move all or part of the office through a delegation of responsibilities for commercial space, as was the case in the prior move in 1995. 
If the office were moved, other necessary steps would include addressing the differences in pay scales between FAA and the Office of the Secretary of Transportation, obtaining support services and office space, and establishing new coordination and communication processes and procedures. GAO's prior work has identified key practices and questions for consideration when evaluating proposals for or implementing organizational changes such as a consolidation or merger. These key practices include: (1) focusing on a key set of principles and priorities at the outset of the transformation, (2) setting implementation goals and a timeline to build momentum and show progress, and (3) establishing a communication plan. Questions to consider when evaluating consolidation proposals include (1) What are the goals of the consolidation? and (2) What will be the likely costs and benefits of the consolidation?
Background

ATF is one of several DOJ law-enforcement components, including the Federal Bureau of Investigation (FBI) and the Drug Enforcement Administration (DEA), responsible for fighting violent crime. ATF is the lead agency charged with enforcing federal firearms laws and regulating the firearms industry. ATF is also responsible for investigating criminals and criminal organizations that use firearms, arson, or explosives in violent criminal activity. ATF investigates and combats violent crime related to firearm trafficking, criminal possession and use of firearms, and the diversion of firearms from legal commerce. This work includes law-enforcement operations and intelligence gathering and analysis. For example, special agents investigate reports of prohibited individuals acquiring or attempting to acquire firearms from private sellers in order to avoid background checks that would otherwise be required if purchasing through an FFL. According to ATF officials, intelligence analysts may help agents by gathering information from the public social-media profiles of individuals under investigation. In addition, ATF investigates reports of individuals engaging in the business of dealing firearms without a license, thereby circumventing background-check, record-keeping, and other requirements.

Statutes and Regulations

The National Firearms Act of 1934 (NFA) and the Gun Control Act of 1968 (GCA) are the primary federal laws that regulate the manufacture, sale, distribution, and possession of firearms. There are no laws that specifically regulate firearms transactions facilitated by the Internet. Rather, firearms transactions facilitated by the Internet are subject to the same legal requirements and regulations as traditional firearms sales.

National Firearms Act of 1934, as Amended

The NFA defines the specific types of firearms and components subject to the provisions of the act based on the firearm’s function, design, configuration, or dimensions. 
For example, the NFA applies to machine guns, short-barreled rifles, short-barreled shotguns, and silencers. The NFA requires these firearms and components to be registered with ATF. The lawful transfer of firearms and components subject to the NFA generally requires ATF approval, a process that involves the submission of application forms, fingerprints, and photographs to ATF, as well as payment of a transfer tax. Transfers outside of this ATF-approval process are generally illegal.

Gun Control Act of 1968, as Amended

The GCA, the main federal statute applicable to firearms such as handguns, shotguns, and rifles, requires all persons engaged in the business of manufacturing, importing, or dealing in firearms to become an FFL through ATF. The GCA defines a person “engaged in the business” as a dealer of firearms as someone who “devotes time, attention, and labor to dealing in firearms as a regular course of trade or business with the principal objective of livelihood and profit through the repetitive purchase and resale of firearms.” The definition excludes individuals who make “occasional” sales or purchases to enhance a personal collection or for a hobby or who sell all or part of a personal collection of firearms. The GCA requires that FFLs maintain records of all their gun sales. These records are used, among other purposes, to trace a firearm recovered by law-enforcement officials from its first sale by the manufacturer or importer through the distribution chain to the first retail purchaser, in order to provide law-enforcement agencies with investigative leads. As amended by the Brady Handgun Violence Prevention Act, the GCA generally requires FFLs to contact the FBI’s National Instant Criminal Background Check System (NICS) prior to transferring a firearm to a nonlicensed individual. During a NICS background check, the buyer provides the FFL with appropriate identification, such as a valid driver’s license. 
The FFL submits descriptive data, including the buyer’s name and date of birth, to NICS, which searches three national databases containing criminal history and other relevant records to determine whether federal or state law prohibits the person from receiving or possessing a firearm. The transfer may proceed if NICS informs the FFL that it has no information indicating that the transfer would be in violation of law, or if 3 business days have elapsed without notification that the transfer would violate the law. The GCA prohibits individuals from knowingly making a false statement intended to deceive FFLs with respect to any fact material to the lawfulness of the sale, such as a person claiming that he or she is the actual buyer of a firearm and not acquiring the firearm on behalf of another person, when in fact he or she is purchasing the firearm with the intent to transfer it to a prohibited person. This type of transaction is often referred to as a “straw purchase.” In addition, the GCA establishes the categories of persons generally prohibited from shipping, transporting, receiving, or possessing firearms and ammunition. Specifically, persons are prohibited from shipping, transporting, receiving, or possessing a firearm if they (1) have been convicted of a felony; (2) are a fugitive from justice; (3) are an unlawful user of or addicted to any controlled substance; (4) have been committed to a mental institution or judged to be mentally defective; (5) are aliens illegally or unlawfully in the United States, or certain other aliens admitted under a nonimmigrant visa; (6) have been dishonorably discharged from the military; (7) have renounced their U.S. citizenship; (8) are under a qualifying domestic violence restraining order; or (9) have been convicted of a misdemeanor crime of domestic violence. In addition, federal law prohibits persons under felony indictment from shipping, transporting, or receiving a firearm. 
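The "proceed, deny, or 3 business days" outcome just described can be sketched as a small decision function. This is a minimal illustration under our own assumptions: the function name and inputs are not NICS terminology, business days are simplified to Monday through Friday, and the actual day-counting rules have additional nuances.

```python
from datetime import date, timedelta

def transfer_may_proceed(nics_response, check_date, today):
    """Simplified sketch: an FFL may transfer the firearm if NICS reports no
    disqualifying information, or if 3 business days elapse without
    notification that the transfer would violate the law.
    nics_response is one of 'proceed', 'denied', or 'delayed'."""
    if nics_response == "proceed":
        return True
    if nics_response == "denied":
        return False
    # 'delayed': count business days (Mon-Fri) since the check was initiated.
    elapsed, day = 0, check_date
    while day < today:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            elapsed += 1
    return elapsed >= 3
```

For instance, under this simplification, a check initiated on a Monday and still delayed on Thursday has reached 3 elapsed business days, so the transfer could proceed.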
Individuals who are not engaged in the business of dealing in firearms may not legally sell firearms to other unlicensed individuals under certain circumstances. For example, a transaction between unlicensed individuals would be illegal if the seller knows or has reasonable cause to believe that the buyer is legally prohibited from possessing firearms or is a resident of a different state than the seller. If the seller is not aware of these circumstances, the seller may transfer the firearm to the buyer without any record-keeping or background-check requirements. Nonprohibited, nonlicensed individuals may legally purchase firearms through an FFL or through individual private sales with residents of the same state. Regardless of whether an FFL is involved in an Internet- facilitated firearm purchase, if a seller knows or has a reasonable cause to believe that the prospective recipient is prohibited from possessing firearms, the seller must not transfer the firearm. See figure 1. As outlined in figure 1, the Internet can facilitate legal purchases either through FFLs or through nonlicensed private sellers. For purchases through an FFL, an individual orders a firearm online, and generally completes the transaction process in person. The FFL submits the required paperwork to ATF. A background check is processed directly by NICS or through a state government that checks NICS. Unless denied by the background check, the transaction is completed. If the individual is purchasing the firearm from an FFL in another state, the original FFL will transfer the firearm to an FFL in the state the buyer resides in to complete the transaction. If both the buyer and seller are residents of the same state, transfers between private nonlicensed parties facilitated by the Internet without the involvement of an FFL may be lawful. 
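The conditions just described for a transfer between two nonlicensed individuals can be sketched as a simplified decision rule. The function name and its two boolean inputs are our own illustrative simplifications; actual legality turns on more facts than these (for example, NFA-restricted items and whether the seller is engaged in the business of dealing in firearms), and this is not legal guidance.

```python
def private_transfer_permissible(same_state, seller_knows_or_believes_buyer_prohibited):
    """Simplified sketch of the private-sale rules summarized above, for a
    transfer between two nonlicensed individuals."""
    if not same_state:
        # Interstate transfers between nonlicensed individuals are likely
        # illegal unless an FFL in the buyer's state becomes a party.
        return False
    # Intrastate: no record-keeping or background check is required, but the
    # seller must not transfer if the seller knows or has reasonable cause
    # to believe the buyer is prohibited.
    return not seller_knows_or_believes_buyer_prohibited
```

The interstate branch reflects why, as the report notes, interstate transactions between two nonlicensed individuals generally require routing the firearm through an FFL in the buyer's state.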
The firearm may be transferred in person between the buyer and the seller, or, if the firearm is a shotgun or rifle, it may be mailed intrastate between the individuals. The seller has no record-keeping obligations, and no NICS background check is performed on the buyer. However, a nonlicensed individual is usually prohibited from directly transferring a firearm to a person who the transferor knows or had reasonable cause to believe is residing in another state. In addition, it is usually illegal for any nonlicensed individual to transport into or receive in the state where he resides any firearm purchased or otherwise obtained outside that state. Therefore, interstate transactions between two nonlicensed individuals are likely to be illegal unless an FFL becomes a party to the transaction. For a legal transaction between residents of different states, the seller must send the firearm to an FFL in the buyer’s state. The FFL submits the paperwork, a background check is processed, and, unless denied by the background check, the FFL transfers the firearm to the buyer.

Internet Firearm Marketplaces

Potential gun buyers can view firearm advertisements and make purchases from the following categories of websites: major retailers, online retailers, online auctions and marketplaces, online classified listings, online forums and social media networks, and Dark Web websites. According to ATF reports, major retailers and online retailers meet the definition of firearm dealers and therefore must be FFLs in order to operate. To see how purchases may be facilitated by various Internet marketplaces, see figure 2.

Prior Reporting on Internet Firearms Sales

GAO, DOJ, and the Congressional Research Service (CRS), as well as a gun-control advocacy group, have reported on the issue of Internet firearm sales since the early 2000s. In 2001 we reported the results of our undercover inquiries to private individuals who advertised firearms online. 
We attempted to purchase firearms from two of these individuals. Both individuals were willing to complete the transactions in person, though we did not complete the sales. Also in 2001, as part of a larger report on reducing gun violence, DOJ identified issues related to firearms sales facilitated by the Internet. Among the issues outlined in the report was the possibility that prohibited individuals may use the Internet to acquire firearms. The report also stated that the Internet may facilitate illegal sales by individuals selling firearms commercially without a license. The report stated that enforcement mechanisms must be established to prevent prohibited individuals from obtaining firearms through the Internet and to make sure that both FFLs and nonlicensed sellers follow existing law when conducting sales through the Internet. The report noted that ATF was working to establish a unit to identify and respond to criminal violations involving the Internet and other new computer technology, and was working with other federal law-enforcement agencies to establish such enforcement mechanisms. In 2012, CRS reported on Internet firearm and ammunition sales. The report outlined the extent to which federal law regulates the sale of firearms via the Internet, which is not treated as legally distinct from sales not facilitated by the Internet. CRS noted that this situation has raised concerns about the possibility of increased violation of federal firearm laws and about challenges that law-enforcement agencies may face when attempting to investigate violations of these laws. Additionally, a prior report by an advocacy group explored how the Internet may facilitate firearm sales to prohibited individuals. 
However, the report described how prohibited individuals may use the Internet to find firearms for sale and then to conduct face-to-face transactions. The report did not demonstrate how prohibited individuals may have firearms mailed directly to them, thus circumventing the FFL purchase process, or otherwise break the law. Representatives from the investigative organization that performed this work stated that they did not break the law when performing their testing.

ATF Takes Various Actions to Enforce Firearm Regulations Related to Prohibited Firearm Transactions Facilitated by the Internet

ATF Does Not Distinguish between In-Person Sales and Sales Facilitated by the Internet when Enforcing Firearms Statutes and Regulations

As we noted above, there are no specific statutes or regulations pertaining to Internet firearms transactions. Hence, ATF does not distinguish between private firearms transactions taking place in person versus those that use the Internet to facilitate the sale. Licensed and nonlicensed sellers use the Internet to facilitate firearm sales in a variety of ways. Major retailers with a federal firearms license enable customers to browse available firearms on their websites but require transactions to be made in person at the store. Online retailers with a federal firearms license advertise firearms online and transfer the firearm to the purchaser through either a storefront that qualifies as an FFL or another FFL in the buyer’s state. Online auction and marketplace websites, online classifieds, and online forums also facilitate sales between buyers and both licensed and nonlicensed sellers. Depending on the website, potential buyers can search for firearms nationwide or narrowed down to city or zip code. According to ATF, searching capabilities can affect whether transactions among nonlicensed individuals are more likely to occur in person or through an FFL as well as the potential for illegal activity to occur. 
A private sale between two nonlicensed individuals would have an unlawful component if, for example, (1) the seller knows or has reasonable cause to believe that the buyer is legally prohibited from possessing firearms or is a resident of a different state; (2) the seller is engaged in the business of dealing in firearms without a license; or (3) the item is an NFA-restricted weapon. ATF officials who oversee Internet-related investigations said that it is not possible to monitor private firearms transactions coordinated over the Internet as they take place. Federal law does not require the seller in a private firearm transaction to conduct a background check or otherwise process paperwork through ATF.

ATF Developed an Internet Investigations Center to Help Identify Individuals Unlawfully Transferring Firearms Using the Internet

According to ATF officials, in 2012 the agency created a national center for Internet-related investigations, now known as the Internet Investigations Center (Center). ATF officials noted that, as an example of its activities, field agents who perform work involving the Internet will coordinate with the Center to ensure they have the necessary training to operate online in an undercover capacity. The Center has access to a variety of tools to facilitate Internet investigations. Much of the Center's software that is used to analyze online content for investigations is free and open source. For example, according to ATF officials, using free open-source software allows analysts to glean information from public websites without violating users' privacy rights. ATF officials stated that the Center investigates buyers and sellers who use the Internet to facilitate illegal firearms transactions. The officials with the Center noted that these investigations are generally reactive, meaning that the Center initiates them after receiving a tip or a request from a field agent.
For example, in November 2014 the Center received a tip from a person who was selling firearms on an online firearms marketplace and was suspicious of a prospective buyer attempting to obtain a pistol without involving an FFL. The Center identified the prospective buyer and engaged in an undercover operation in which the individual agreed to provide the undercover agent with components designed to turn pistols and rifles into fully automatic firearms in exchange for a pistol and cash. The undercover agent and the buyer met in person and completed the transaction. ATF agents arrested the buyer at the scene, and he was later sentenced to 33 months in prison. ATF officials said the agency frequently receives tips about nonlicensed sellers engaging in the business of dealing in firearms. For example, ATF investigated a nonlicensed seller who posted more than 280 firearms for sale on multiple online firearms marketplaces; purchased at least 54 firearms; and sold at least 51 firearms at a profit. The seller, who was also found to have made straw purchases for other buyers, was sentenced in August 2010 to 2 years in prison. For additional examples of ATF enforcement actions involving sales facilitated by the Internet, please see appendix II. According to ATF officials, the Center also performs investigative work on the Dark Web, which requires knowledge of the Internet and investigative techniques. For example, ATF analysts must understand virtual currencies, such as Bitcoin, and their values. They must also know what sellers are charging for their products, because prices on the Dark Web "skyrocket" due to the criminal nature of the merchandise. In addition, the analysts learn common terms associated with firearm culture, in order to communicate with users engaged in criminal activity. ATF officials with the Center also noted that investigations might involve both the Surface Web and the Dark Web.
For example, to identify an anonymous user on the Dark Web, the Center works to establish the user’s “digital footprint” on the Surface Web. In some cases, users might conduct illegal activity on the Dark Web but might then go to the Surface Web, such as a social-networking website with chat forums on a wide variety of topics, and discuss their illegal activity. From there, analysts can link the user to other social-media accounts, where the user may post a photo showing a street sign or other characteristics to help investigators narrow the user’s location. The ATF officials with the Center noted that posts on some websites contain meta-data, which includes geo-coding that helps the analysts identify where posts originated. ATF issued the Firearms and Internet Transactions Intelligence Assessment Report in April 2016 to provide information and analysis in the area of online firearm sales, including both legal and illegal transactions. The report highlighted several key findings about how firearm transactions are facilitated by the Internet. Specifically, the ATF analysis of the online marketplaces for firearms demonstrated the ease with which individuals can choose to circumvent the generally applicable law in this arena. Within the report, ATF detailed a market analysis of firearms transactions, including Surface Web and Dark Web marketplaces. Firearms transactions that occur on the Dark Web are more likely to be conducted in person or via the mail or common carrier, versus through an FFL. Additionally, the report noted that it appears that the price of a firearm increases as the transaction becomes more covert or when parties attempt to subvert laws and regulations. According to ATF staff, they plan to update the report when there is a significant shift in Internet gun trafficking. The ATF officials with the Center said they have not determined the frequency with which updated reports will be issued but they do not plan to update it annually. 
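As an illustration of the geo-coded metadata described above, a photo posted online may carry EXIF GPS tags, which store latitude and longitude as degree/minute/second rationals. The sketch below shows the standard conversion to decimal coordinates; the tag values are hypothetical and not drawn from any ATF case.

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to decimal degrees.
    'ref' is the hemisphere tag ('N', 'S', 'E', 'W'); south and west are negative."""
    value = float(degrees) + float(minutes) / 60 + float(seconds) / 3600
    return -value if ref in ("S", "W") else value

# Hypothetical EXIF GPS tags, stored as rationals the way cameras record them
lat = dms_to_decimal(Fraction(38), Fraction(53), Fraction(2322, 100), "N")
lon = dms_to_decimal(Fraction(77), Fraction(0), Fraction(3240, 100), "W")
print(round(lat, 4), round(lon, 4))
```

In practice an analyst would first read these tags out of the image file with a metadata tool; the arithmetic above is the step that turns them into a mappable location.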
ATF Enforces Firearms Laws through Regulatory Inspections of Licensed Firearms Dealers to Detect Prohibited Firearms

To enforce the NFA, GCA, and related firearms regulations, ATF carries out a variety of regulatory activities. For example, ATF monitors the firearms industry from manufacture and importation through retail sale. Specifically, ATF Industry Operations Investigators determine whether FFL applicants are qualified to engage in firearms commerce through routine inspections and regulatory oversight. Industry Operations Investigators also routinely inspect FFLs to ensure continued compliance with statutes and regulations. ATF officials stated that investigators conduct compliance inspections of FFLs, who must renew their licenses every 3 years. ATF conducts these inspections at least once during the 3-year licensing period. Additionally, ATF officials stated that as part of each inspection, officers will review all sales transactions an FFL has made in the last 12 months and analyze the data for aberrant patterns. Based on a review of DOJ Office of Inspector General documentation and our own observations during an FFL inspection, we determined that, during these inspections, ATF performs an inventory of the FFL's firearms and checks it against the FFL's records to ensure that firearm transactions reconcile with the firearm inventory; reviews the FFL's records of background checks for purchases processed through NICS; checks the prior year's Firearms Transaction Record forms, which document acquisition and disposition information that ATF uses to trace firearms involved in crimes; and reviews sales records to ensure that the FFL has recorded appropriate tax information. While ATF investigators routinely monitor firearms transactions of FFLs, the agency does not monitor private firearms transactions among nonlicensed individuals.
As noted above, private sales among nonlicensed individuals who are residents of the same state are not subject to record-keeping or background-check requirements, so ATF does not have a means by which to monitor these sales as they take place.

ATF Law-Enforcement Operations Investigate Firearm-Related Crimes, Including Those Facilitated by the Internet

One aspect of the enforcement work undertaken by ATF agents is to investigate reports of individuals engaging in the business of dealing in firearms without a license. According to agency officials with the ATF Violent Crime Intelligence Unit, as part of these investigations, agents gather information about a suspect's firearm transactions. On the basis of the activity detected, agents will determine whether the extent of the sales history is significant enough to warrant further action. In fiscal years 2014–2016, ATF made 322 arrests for engaging in the business of dealing in firearms without a license. These figures represent all arrests, as ATF does not identify or track whether transactions were facilitated by the Internet. During the same time, ATF also made 53 arrests for charges related to the unlawful interstate transfer of firearms, 204 arrests for charges related to the sale of firearms to a prohibited person, and 12,586 arrests for charges related to the possession of a firearm by a prohibited person. These arrests may include but are not limited to Internet-related investigations. According to documentation provided by ATF, 89 percent of the defendants in these arrests received a conviction. See table 1.

Agents Purchased Two Firearms on the Dark Web, but Covert Attempts to Illegally Purchase Firearms on the Surface Web Were Unsuccessful

Agents Successfully Purchased Two Firearms on the Dark Web

Our agents successfully purchased two firearms from sellers we located on a Dark Web marketplace over the course of seven total attempts.
ATF officials stated that the Dark Web is completely anonymous and is designed to facilitate criminal activity online. Further, an ATF report states that most used firearms are sold via online auctions, online marketplaces, and the Dark Web, as compared to the Surface Web. In the seven attempts, our agents did not disclose any information indicating they were prohibited from possessing a firearm. In the five attempts where we did not ultimately purchase a firearm, the prospective seller stopped responding to our inquiries, stated the firearm was no longer for sale, refused to use an escrow account for payment, or experienced technical problems using the Dark Web marketplace. The first weapon that we purchased was an AR-15 rifle, which is a semiautomatic firearm. The serial number on the firearm was obliterated. The Dark Web seller shipped the dismantled weapon directly to the undercover address provided by our agent. It is unlawful for any person to possess or ship in interstate commerce a firearm that has had the importer's or manufacturer's serial number removed, obliterated, or altered, if the individual knew the serial number had been removed, obliterated, or altered. Additionally, because the firearm was shipped across state lines, the seller may not have been a resident of the same state as our agent. We did not confirm whether the seller notified the shipping company that the package contained a firearm. Any of these circumstances (removing a serial number, selling to a resident of a different state, or failing to properly notify the shipping company that the shipment contained a firearm), if proven, would likely violate federal law. A photo of the weapon can be seen in figure 3. The second weapon we purchased was an Uzi, which is an Israeli-made semiautomatic firearm, and was advertised as a fully automatic firearm. See photo in figure 4.
If the firearm meets the NFA's definition of a machine gun, the seller's prior possession of the Uzi and the shipment to our agent likely violated federal law. Generally, only machine guns that were lawfully possessed prior to May 19, 1986, may continue to be possessed and transferred, with ATF approval, if they are registered in accordance with the NFA. We are referring information regarding our two Dark Web purchases to applicable law-enforcement agencies to inform any ongoing investigations for any further action they deem appropriate.

All of Our Attempts to Illegally Purchase Firearms from Private Sellers on the Surface Web Were Unsuccessful

Our covert testing, in which GAO agents attempted to purchase firearms illegally on the Surface Web, was unsuccessful. Specifically, private sellers on Surface Web gun forums and in classified ads were unwilling to sell a firearm to our agents who self-identified as being prohibited from possessing a firearm. In our 72 attempts to purchase firearms from private sellers on the Surface Web, 56 sellers refused to complete a transaction once we revealed that either the shipping address was across state lines or that we were prohibited by law from owning firearms. The scenarios we applied to the purchases were derived from provisions in the GCA. The five scenarios disclosed status information that would disqualify our agents from purchasing a firearm. For example, in one scenario we stated that we were a convicted felon; in another scenario, we informed the seller that we had a dishonorable discharge from the military. In these 56 attempts, 29 sellers refused because they would not ship a firearm and 27 refused after we presented the scenario. Furthermore, in five of the 72 attempts, the accounts we set up on several forums were frozen by the websites, which prevented us from using them after we disclosed our prohibited status or requested interstate shipment and attempted to make a purchase.
In the 11 remaining attempts, we encountered private sellers that appeared to have scammed us, or attempted to scam us, after we disclosed our prohibited status or asked to avoid using an FFL. In two of these instances, we made a payment and never received the firearm or a refund. In the remaining nine attempted scams, our agents determined that the seller may not be legitimate and therefore did not complete the purchase. For example, in one attempt, the agent conducted investigative research on the seller and found evidence suggesting that the seller may be involved in online fraud. As a result, the agent did not follow through with the purchase attempt. ATF does not have jurisdiction over fraud cases, so when it encounters such circumstances, the agency may refer the case to the Joint Support and Operations Center or to local or state law-enforcement agencies or may encourage the victim to file a police report. The results of our attempts on the Surface Web are summarized in figure 5.

Agency Comments

We provided a draft of this report to ATF and DOJ on October 31, 2017, for review and comment. ATF provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Deputy Director of ATF and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Seto Bagdoyan at (202) 512-6722 or [email protected], or Wayne McElrath at (202) 512-2905 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Methods for Performing GAO Covert Testing

Scope

For our covert attempts to buy firearms on the Internet, we performed tests on both the Dark Web and the Surface Web to compare and contrast how transactions are completed. For the tests, our agents employed undercover identities and accessed online marketplaces where firearms were advertised for sale. In both Dark Web and Surface Web testing, the agents contacted sellers that posted ads online and attempted to complete firearm purchases. For our testing, we did not proactively attempt to purchase firearms from Federal Firearm Licensees (FFLs), focusing our efforts on private sellers. We counted an attempt as successful if we received a firearm. We counted an attempt as a failure if we contacted the seller and expressed interest in purchasing the advertised firearm and the seller refused to complete the purchase, or if the seller failed to respond after initial contact was made. In some instances on the Surface Web, after we contacted a seller and described our prohibited status, we were "banned," or prohibited from accessing the gun forum or classified ad website. Additionally, in two instances, our agents were apparently "scammed," in that we remitted payment for a firearm we did not receive; in other instances, our agents identified indicators that the firearm would not be shipped and did not complete the purchase. The results of our testing are for illustrative purposes only and are not generalizable. Prior to beginning our testing, to understand how prohibited individuals may use the Internet to purchase firearms or firearm components, we reviewed Department of Justice (DOJ) and Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) published reports, including adjudicated criminal cases.
We also met with third-party groups with knowledge of the firearms industry, including state law-enforcement agencies, a purveyor of commercial websites that host online firearm classified advertisements, a gun-control advocacy group, a firearm-industry organization, and an academic research center, to learn about online firearm marketplaces, criminal pathways to illegally purchase or sell firearms, and enforcement responses. Additionally, we reviewed reports by a gun-control advocacy group to understand how prior similar work in this area was performed. We learned through our review and our subsequent interviews with individuals who performed this work that no federal laws were broken when this testing was conducted. Accordingly, to demonstrate how the Internet may facilitate illegal firearm transactions, we decided our agents would complete the firearm purchases.

Methodology for Dark Web Covert Testing

Our agents accessed firearm advertisements on a Dark Web marketplace and attempted to purchase firearms or firearm components from nonlicensed private sellers. Agents focused on one Dark Web marketplace for this stage of testing. Our agents performed a preliminary test to assess the feasibility of purchasing a firearm on the Dark Web. This attempt was successful, so our agents proceeded with planned attempts to purchase additional firearms on the Dark Web. Testing ended once a firearm was successfully purchased and received by our agents, with a total of seven attempts completed. For these covert tests, we did not disclose any information about our presumed prohibited status. We also focused our efforts on purchasing a firearm that appeared to be restricted by the National Firearms Act of 1934 (NFA).

Methodology for Surface Web Covert Testing

To perform Surface Web testing, our agents accessed public gun forums and other classified ads where private nonlicensed sellers listed firearms for sale.
These forums and classified ads were identified from our meetings with ATF and third-party entities, and a review of available documentation. We considered the following factors when selecting online classified websites: hosted nationwide or regional ads, quantity of ads, variety of firearms available, and accessibility of website. Recently posted ads on these sites were selected if they fell within a designated price range and were for transactions between private nonlicensed individuals. The purpose of our Surface Web purchase attempts was to determine whether private sellers would knowingly sell a firearm to an individual prohibited from possessing one, as outlined by the Gun Control Act of 1968 (GCA). Our agents used one of five scenarios based on a provision of the GCA when attempting to purchase a firearm. The scenarios involved overtly explaining why our agent was prohibited from possessing a firearm. The scenarios based on the GCA covered the following: a felon avoiding a background check; an individual with a domestic-violence background or a restraining order against him or her; an individual who unlawfully uses controlled substances (or is addicted to them); an individual who was dishonorably discharged from the military; and an individual who has renounced his or her citizenship or is otherwise an unlawful alien. Before we began testing, we determined that we would run each scenario iteratively until we successfully completed a purchase, we exhausted the number of applicable ads, or we reached our predetermined cap of 15 purchase attempts per scenario, for a total of 75 attempts. However, due to investigative decisions, we made only 72 attempts. We conducted this performance audit from July 2015 to November 2017 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with the standards prescribed by the Council of the Inspectors General on Integrity and Efficiency.

Appendix II: Examples of Illegal Firearms Sales Facilitated by the Internet

As noted above, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) does not track statistics on firearm enforcement actions that involve illegal transactions facilitated by the Internet. However, ATF officials provided several examples of closed, adjudicated cases where the agency took enforcement action against individuals who were using the Internet to facilitate illegal transactions. The following summaries provide examples of the type of investigative and enforcement work ATF agents perform: One individual was indicted in February 2015 for being a felon in possession of firearms and for possessing a machine gun. In November 2014, ATF's Internet Investigation Center (the Center) received a tip from the ATF Tip Line; a legitimate seller was suspicious of a buyer who was attempting to obtain a firearm without involving a Federal Firearm Licensee (FFL) and who suggested that the seller could obliterate the serial numbers. The Center identified the prospective buyer as a convicted felon. The individual agreed to provide the undercover agent with a Glock auto sear (which, when attached to a firearm, makes it a fully automatic weapon) and firearm components that could be used to transform an M-16 style rifle into a machine gun. In exchange, the undercover agent would provide the individual with a Glock pistol and $300 cash.
The individual and an undercover agent completed the transaction and the individual was immediately arrested. The individual’s criminal history included a recent prior felony gun-possession conviction. The individual pleaded guilty to being a felon in possession of a firearm and to the illegal transfer or possession of a machine gun, and was sentenced to 33 months imprisonment and 36 months of supervised release. In 2009, one individual was indicted on six counts of federal criminal violations, including one count for engaging in the business of firearms without a license. According to the indictment, from approximately January 1, 2005, to May 8, 2008, while serving as an FBI agent, the individual purchased multiple firearms from various sources including private sellers, local stores, and sellers he dealt with over the Internet. He posted at least 280 firearms for sale on a legitimate firearm website, some of which were multiple listings of the same item in the event that interested bidders did not meet his target price. During this period, he purchased at least 54 firearms and sold at least 51 firearms. He profited from all the sales, collecting more than $118,000 in gross receipts. The individual was also indicted on four counts of causing a firearms dealer to maintain false records, which related to his purchasing firearms for third parties (straw purchases). In addition, the individual was indicted on one count of providing ATF with a false document listing the firearms he bought and sold; agents recovered a more-extensive and more-descriptive list. The individual was found guilty on all counts in April 2010, and was sentenced in August 2010 to 2 years in federal prison. According to an affidavit from an ATF Special Agent, an individual offered silencers, pistols, and rifles for sale on the Dark Web, as well as nationwide shipping. 
The ATF Center "proactively targeted" the individual's vendor name "through various methods of analysis," identified numerous Internet forum and social-media profiles associated with the individual, and ultimately discovered his true identity. The Center referred "an investigative lead" and the corresponding evidence and analysis to the respective ATF Field Division. According to the affidavit, the Special Agent conducted a controlled purchase through one of the Dark Web marketplaces, reviewed U.S. Postal Service security video and observed the individual mail the firearm, and executed arrest and search warrants. The individual pleaded guilty to one count of causing a firearm silencer to be delivered by the U.S. Postal Service without proper notification, and was sentenced to 6 months in federal prison and 3 years of supervised release. In October 2013, an individual was indicted for illegal exportation, shipment, and delivery of firearms and firearm components that were sold on a Dark Web site. The man shipped a handgun concealed in a video game system to a buyer in Sydney, Australia. Australian Federal Police intercepted the package and alerted ATF, which began an investigation. During the investigation, the individual shipped a 9 mm pistol with an obliterated serial number to the United Kingdom, various assault-rifle parts to Australia, and a .22-caliber pistol with an obliterated serial number and a weapon magazine to Sweden. Each firearm was disassembled and concealed in a broken electronic device. The individual pleaded guilty and was sentenced to 2 years imprisonment and 2 years of supervised release. In February 2015, an individual was indicted for dealing in firearms without a license and selling firearms to residents of other states. The individual sold firearms via two Dark Web sites and shipped them to buyers in the United States and internationally.
In an attempt to hide his identity, the man placed false return-address labels on the packages, used aliases to send the packages, and packed the firearms so that they appeared to be computer hard drives. The individual agreed to sell handguns to undercover ATF agents posing as gun buyers and then shipped the guns from Alabama to Nebraska and New Jersey. The individual was found guilty and sentenced in November 2015 to 51 months in prison and 36 months supervised release.

Appendix III: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contact named above, Dave Bruno (Assistant Director), Dean Campbell, Julia DiPonio, Robert Graves, and Kristen Timko made key contributions to this report. Other contributors include Marcus Corbin, Colin Fallon, Maria McMullen, James Murphy, Anna Maria Ortiz, Julie Spetz, and Helina Wong.
Why GAO Did This Study

The current federal legal framework governing buying and selling of firearms does not specifically address the use of the Internet to facilitate these transactions. Additionally, private transactions involving the most-common types of firearms between individuals who are not licensed to commercially sell weapons and who are residents of the same state, including transactions facilitated by the Internet, are generally not subject to federal background-check requirements. Congressional requesters asked that GAO assess the extent to which ATF is enforcing existing laws and investigate whether online private sellers sell firearms to people who are not allowed or eligible to possess a firearm. This report describes (1) techniques ATF uses to investigate and enforce generally applicable firearm laws in instances where the firearm or firearm-component transaction is facilitated by the Internet and (2) results of GAO's undercover attempts to buy firearms on the Dark Web and Surface Web. GAO analyzed documents and interviewed officials to identify actions ATF has taken to prohibit illegal firearm transactions. GAO also attempted to purchase firearms from Dark Web and Surface Web marketplaces. The results of the testing are illustrative and nongeneralizable.

What GAO Found

The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) is responsible for investigating criminal and regulatory violations of firearms statutes and regulations that govern firearms transactions, including sales that are facilitated by the Internet. Two components of the Internet may be used to facilitate Internet firearm sales: the Surface Web and the Dark Web. The Surface Web is searchable with standard web search engines. The Dark Web contains content that has been intentionally concealed and requires specific computer software to gain access.
ATF created the Internet Investigations Center (Center) to investigate buyers and sellers who use the Internet to facilitate illegal firearms transactions. The Center uses several tools to provide investigative support to ATF, which has resulted in the arrests of individuals using the Internet to facilitate illegal firearm purchases. ATF officials with the Center also noted that investigations might involve both the Surface Web and the Dark Web. For example, to identify an anonymous user on the Dark Web, the Center works to establish a user's "digital footprint" on the Surface Web. In 2016, the Center also issued a report about Internet firearm transactions. This and other ATF reports highlighted the following about Internet-facilitated firearm transactions:

The relative anonymity of the Internet makes it an ideal means for prohibited individuals to obtain illegal firearms.

The more anonymity employed by a firearms purchaser, the greater the likelihood that the transaction violates federal law.

Firearm transactions that occur on the Dark Web are more likely to be completed in person or via the mail or common carrier, versus through a Federal Firearm Licensee.

GAO agents attempted to purchase firearms from Dark Web and Surface Web marketplaces. Agents made seven attempts to purchase firearms on the Dark Web. In these attempts, agents did not disclose any information about whether they were prohibited from possessing a firearm. Of these seven attempts, two on a Dark Web marketplace were successful. Specifically, GAO agents purchased and received an AR-15 rifle and an Uzi that the seller said was modified so that it would fire automatically. GAO provided referral letters to applicable law-enforcement agencies for these purchases to inform any ongoing investigations.
Tests performed on the Surface Web demonstrated that private sellers GAO contacted on gun forums and other classified ads were unwilling to sell a firearm to an individual who appeared to be prohibited from possessing a firearm. Of the 72 attempts agents made to purchase firearms on the Surface Web, 56 sellers refused to complete a transaction: 29 sellers stated they would not ship a firearm and 27 refused after the disclosure of the undercover identities' stated prohibited status. Furthermore, in 5 of these 72 attempts, the accounts GAO set up were frozen by the websites, which prevented the agents from using the forums and attempting to make a purchase.

What GAO Recommends

GAO is not making recommendations in this report. ATF provided technical comments, which GAO incorporated as appropriate.
Background

Since 1993, USAID has obligated more than $5 billion in bilateral assistance to the Palestinians in the West Bank and Gaza, primarily using funds appropriated through the ESF. According to State officials, through the ESF, USAID provides project assistance and debt relief payments to PA creditors. USAID, with overall foreign policy guidance from State, implements most ESF programs, including programs related to private sector development, health, water and road infrastructure, local governance, civil society, rule of law, education, and youth development. According to USAID officials, this assistance to the West Bank and Gaza contributes to building a more democratic, stable, prosperous, and secure Palestinian society, a goal that USAID described as being in the interest of the Palestinians, the United States, and Israel. Figure 1 shows the location of the West Bank and Gaza relative to surrounding countries. USAID assistance to the West Bank and Gaza is conducted under antiterrorism policies and procedures outlined in an administrative policy document known as Mission Order 21. The stated purpose of the mission order, as amended, is to describe policies and procedures to ensure that the mission's program assistance does not inadvertently provide support to entities or individuals associated with terrorism. We have previously reported on the status of ESF assistance to the Palestinians and USAID's antiterrorism policies and procedures in the West Bank and Gaza.

Status of ESF Assistance to the West Bank and Gaza for Fiscal Years 2015 and 2016, Including Project Assistance and Payments to PA Creditors

As of March 31, 2018, USAID had obligated about $544.1 million (over 99 percent) and expended about $350.6 million (over 64 percent) of approximately $544.5 million in ESF assistance allocated for the West Bank and Gaza in fiscal years 2015 and 2016 (see table 1).
USAID obligated portions of the allocated funds for direct payments to PA creditors—specifically, payments to two Israeli fuel companies, to cover debts for petroleum purchases, and to a local Palestinian bank, to pay off a line of credit used for PA medical referrals to six hospitals in the East Jerusalem Hospital network. Project assistance obligated for fiscal years 2015 and 2016 accounted for about $215 million (74 percent) and $184 million (72 percent), respectively, of USAID’s obligations of ESF assistance for the West Bank and Gaza for those fiscal years (see fig. 1). Payments to the PA’s creditors accounted for the remaining obligations—about $75 million (26 percent) of fiscal year 2015 obligations and about $70 million (28 percent) of fiscal year 2016 obligations. According to USAID documents, ESF project assistance for the West Bank and Gaza for fiscal years 2015 and 2016 was obligated for three USAID development objectives: Economic Growth and Infrastructure (about $239 million), Investing in the Next Generation (about $107 million), and Governance and Civic Engagement (about $25 million). Program support—which sustains all development objectives, according to USAID—accounted for about $29 million (see table 2). Economic Growth and Infrastructure. The largest share—about 60 percent—of USAID’s ESF project assistance for the West Bank and Gaza for fiscal years 2015 and 2016 supported the agency’s Economic Growth and Infrastructure development objective. According to USAID documents, as of March 31, 2018, the agency had obligated about $239 million and expended approximately $89 million (about 37 percent) for projects under this objective. USAID officials stated that the agency funded these projects under the following standard State-budgeted program areas: health (including water), infrastructure, private sector competitiveness, and stabilization operations and security sector reform.
The largest project—the Architecture and Engineering Services project—received about $20 million of fiscal year 2015 ESF assistance and $17 million of fiscal year 2016 ESF assistance. The purpose of the project was to rehabilitate and construct infrastructure through the procurement of infrastructure services, including engineering design and construction management, among other things. The contractor was required to coordinate with relevant PA and Israeli entities, as well as with USAID, to assist in the selection of PA water and wastewater projects and in the planning and design of water projects such as small- to large-scale water distribution systems, water treatment systems, and institutional capacity building. Investing in the Next Generation. The second-largest share—about 27 percent—of USAID’s fiscal years 2015 and 2016 ESF project assistance for the West Bank and Gaza supported the agency’s Investing in the Next Generation development objective. According to USAID documents, as of March 31, 2018, the agency had obligated about $107 million and expended approximately $79 million (about 74 percent) for projects under this objective. Program areas funded included education, health, social and economic services and protection of vulnerable populations. The largest project funded under this objective—a grant to the World Food Program for assistance to vulnerable groups—received $12 million in fiscal year 2015 and $15 million in fiscal year 2016 ESF assistance. The project focused on ensuring food security, including meeting food needs, of the nonrefugee population; increasing food availability and dietary diversity for the most vulnerable and food-insecure nonrefugee population; and establishing linkages with the Palestinian private sector (shopkeepers, farms, and factories) to produce and deliver the aid being provided to Palestinians. 
For example, the project distributed a standard food ration to vulnerable nonrefugee families through both direct food distribution and electronic food vouchers. Governance and Civic Engagement. The smallest share—about 6 percent—of USAID’s fiscal years 2015 and 2016 ESF project assistance for the West Bank and Gaza supported the agency’s Governance and Civic Engagement development objective. According to USAID documents, as of March 31, 2018, USAID had obligated about $24.6 million and expended approximately $14.5 million (about 60 percent) for projects in program areas that included civil society, good governance, and rule of law. The largest project funded under this objective—a contract for the Communities Thrive Project—received about $5.2 million and $8 million in fiscal years 2015 and 2016 ESF assistance, respectively. The project aimed to help 55 West Bank municipalities improve fiscal management, fiscal accountability and transparency, and delivery and management of municipal services, among other things. Under debt relief grant agreements with the PA, USAID made direct payments of ESF assistance to PA creditors totaling about $75 million from fiscal year 2015 funds and $70 million from fiscal year 2016 funds. USAID paid about $40 million from fiscal year 2015 funds and $45 million from fiscal year 2016 funds to two oil companies to cover debts for petroleum purchases. In addition, USAID paid about $35 million from fiscal year 2015 funds and $25 million from fiscal year 2016 funds to the Bank of Palestine, to pay off a PA line of credit that was used to cover PA medical referrals to six hospitals in the East Jerusalem Hospital network.
USAID Vetted PA Creditors and Conducted External Assessments and Financial Audits of PA Ministries Before using fiscal years 2015 and 2016 ESF assistance to pay PA creditors, USAID vetted the creditors to ensure that the assistance would not provide support to entities or individuals associated with terrorism, as required by its policies and procedures. USAID determined that certain legal requirements, including the requirement for an assessment of the PA Ministry of Finance and Planning, were not applicable for direct payments of these funds to PA creditors. Nevertheless, USAID continued to commission external assessments and financial audits of the PA Ministries of Health and Finance and Planning. USAID Vetted PA Creditors as Required by Its Policies and Procedures for Direct Payments to Creditors USAID documentation for payments to creditors shows that before signing debt relief agreements with the PA, mission officials checked, as required by Mission Order 21, the vetting status of PA creditors who would receive direct payments under the agreements, to ensure their eligibility before any payment was made. USAID Mission Order 21 requires that before payments to PA creditors are executed, the creditors must be vetted—that is, the creditors’ key individuals and other identifying information must be checked against the federal Terrorist Screening Center database and other information sources to determine whether they have links to terrorism. According to USAID policies and procedures, each PA creditor must be vetted if more than 12 months have passed since the last time the creditor was vetted and approved to receive ESF payments. We found that for payments made to PA creditors using fiscal years 2015 and 2016 ESF assistance, USAID vetted each PA creditor that received payments and completed the vetting during the 12-month period before the debt relief agreements with the PA were signed (see table 3).
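The timing rule described above—that a creditor's most recent vetting must fall within the 12 months before the debt relief grant agreement is signed—can be sketched as a simple date check. This is an illustrative sketch only: the function name and the use of 365 days to approximate "12 months" are our assumptions, not USAID's implementation.

```python
from datetime import date

def vetting_current(last_vetted: date, agreement_signed: date) -> bool:
    """Illustrative check of the Mission Order 21 timing rule: True if the
    creditor's last vetting falls within the 12 months (approximated here
    as 365 days) preceding the agreement signing date."""
    if last_vetted > agreement_signed:
        return False  # vetting must precede the signing, not follow it
    return (agreement_signed - last_vetted).days <= 365

# Hypothetical example: a creditor vetted about 10 months before signing
# would still be current; one vetted 2 years earlier would require re-vetting.
print(vetting_current(date(2015, 1, 15), date(2015, 11, 10)))  # True
```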
USAID Determined Certain Legal Requirements Were Not Applicable to Payments to PA Creditors USAID determined that certain legal requirements applicable to cash transfers to the PA were not applicable to direct payments to PA creditors of fiscal years 2015 and 2016 ESF assistance. In September 2015, we reported that USAID ceased making cash payments directly to the PA in 2014 and began making payments of ESF assistance directly to PA creditors. In reviewing USAID’s compliance with key legal requirements, we found that USAID had complied with the requirements when making cash transfers to the PA in fiscal year 2013. However, USAID had determined that some requirements were not applicable to direct payments made to PA creditors in fiscal year 2014, because no funds were being provided directly to the PA. After fiscal year 2015, USAID further defined the scope of statutory requirements it deemed applicable to payments to PA creditors using fiscal years 2015 and 2016 ESF assistance, under the rationale that these payments do not constitute direct payments to the PA. Specifically, according to USAID, the agency determined that the following statutory requirements discussed in our prior report were not applicable to direct payments to PA creditors. 
- A requirement to notify the Committees on Appropriations 15 days before obligating funds for a cash transfer to the PA
- A requirement for the PA to maintain cash transfer funds in a separate account
- A requirement for the President to waive the prohibition on providing funds to the PA and to submit an accompanying report to the Committees on Appropriations
- A requirement for the Secretary of State to provide a certification and accompanying report to the Committees on Appropriations when the President waives the prohibition on providing funds to the PA
- Requirements for direct government-to-government assistance, including an assessment of the PA Ministry of Finance and Planning

According to USAID officials, they currently do not plan to resume cash payments to the PA, because making direct payments to creditors minimizes the misuse of funds and assures full transparency and appropriateness of transfers. USAID Commissioned External Assessments and Financial Audits of PA Creditors before Executing Payments Although USAID concluded that the statutory requirement mandating assessments of the PA Ministry of Finance and Planning did not apply to direct payments to PA creditors, the West Bank and Gaza mission commissioned external assessments of the PA Ministry of Health’s medical referral services and Ministry of Finance and Planning’s petroleum procurement system. According to a USAID document, while the payments to the creditors did not constitute direct budget support to the PA, the agency chose to commission external assessments to determine whether the PA’s financial systems were sufficient to ensure adequate accountability for USAID funds consistent with legislative requirements for direct budget support funds. These external assessments identified weaknesses in both systems. Ministry of Health medical referrals.
The assessment report stated that the ministry did not have approved policies and procedures for the medical referral process, a list of medical services covered by the referral system, and written criteria for selecting referral hospitals in the medical referral systems. In response, in a January 2016 internal memorandum, the West Bank and Gaza mission officials concluded, among other things, that the findings did not pose a significant risk to USAID funds. They also stated that the Ministry of Health’s medical referral system had adequate policies and procedures for referrals to local hospitals. However, after the assessment report was issued, a USAID contractor worked with the Ministry of Health to update, revise, and approve guidelines for medical referrals. Ministry of Finance and Planning petroleum procurements. The assessment report stated that the ministry lacked specific policies and procedures to prevent or detect fraud in the petroleum procurement systems. In the West Bank and Gaza mission’s January 2016 memorandum, USAID mission officials disagreed with the assessment’s findings regarding the petroleum procurement system, stating that the assessment did not take into account sufficient and adequate internal controls at the ministry as a first line of defense against fraud. The memorandum also stated that the finding did not affect USAID debt relief payments to the PA creditors. USAID officials told us that, while they did not believe the external assessments’ findings affected the integrity of USAID’s debt relief payment process, they took four additional steps to mitigate findings noted in the assessment of the Ministry of Finance and Planning’s fuel procurement processes. 
According to USAID officials, they (1) confirmed that the fuel companies had controls and systems to ensure an objective and transparent system in receiving and recording PA orders, (2) dispatched orders with official and properly signed shipping delivery and receipt documents, (3) obtained written confirmation from the fuel companies of the costs of the fuel provided to the PA, and (4) confirmed the PA’s petroleum debt with the fuel companies before initiating the payments and after making the payments. In addition, in 2016, USAID commissioned two routine financial audits of the debt relief grant agreed to by USAID and the PA for the use of fiscal year 2015 ESF assistance to make direct payments to PA creditors. According to USAID officials, the auditors were to examine the PA Ministry of Finance and Planning’s recording of USAID payments to PA creditors in its financial records as well as the ministry’s and USAID’s compliance with the terms of the grant agreement and related implementation letters. The audits did not identify any questioned or ineligible costs, reportable material weaknesses in internal control, or material instances of noncompliance with the terms of the debt relief grant. Also, in 2017, USAID contracted for a financial audit of the fiscal year 2016 debt relief grant agreed to by USAID and the PA. According to a USAID document, in May 2018, USAID held an entrance conference with the PA Ministry of Finance and Planning for the audit of the fiscal year 2016 grant. In July 2018, USAID sent the final audit report to the Regional Inspector General for review. According to the USAID document, the report did not identify any questioned or ineligible costs, reportable material weaknesses in internal controls, or material instances of noncompliance with the terms of the grant. Agency Comments We provided a draft of this report to USAID and State for review and comment. 
USAID provided comments, which we have reproduced in appendix II, as well as technical comments, which we incorporated as appropriate. State did not provide comments. We are sending copies of this report to the appropriate congressional committees, the Administrator of USAID, and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Appropriations acts for fiscal years 2015 and 2016 included provisions for GAO to review the treatment, handling, and uses of funds provided through the ESF for assistance to the West Bank and Gaza. This report examines (1) the status of ESF assistance and projects provided to the West Bank and Gaza for fiscal years 2015 and 2016, including payments to PA creditors, and (2) the extent to which USAID conducted required vetting of PA creditors to ensure that assistance would not support entities or individuals associated with terrorism and assessed PA ministries’ capacity to use ESF assistance as intended. To address our first objective, we reviewed appropriations legislation, related budget justification documents, and financial data for fiscal years 2015 and 2016, including expenditures as of March 31, 2018, provided by USAID’s West Bank and Gaza mission in Tel Aviv, Israel. We reviewed data that USAID provided on obligations and expenditures of all ESF assistance for the West Bank and Gaza as of March 31, 2018, from annual allocations for fiscal years 2015 and 2016. We also reviewed relevant USAID documents, including notifications to Congress regarding the use of appropriated funds.
In addition, we interviewed USAID and State officials in Washington, D.C., and Tel Aviv. To determine whether the data were sufficiently reliable for the purposes of this report, we requested and reviewed information from USAID officials about their procedures for entering contract and financial information into USAID’s data system. We determined that the USAID data were sufficiently reliable. For the project information included in this report, we relied on data that USAID provided, showing its obligations and expenditures of fiscal year 2015 and 2016 ESF assistance for West Bank and Gaza. For illustrative purposes, we requested and obtained from USAID descriptions of projects that, according to USAID officials, represented the largest financial obligations for each development objective in fiscal years 2015 and 2016. To address our second objective, we identified and reviewed relevant legal requirements as well as USAID policies and procedures to comply with those requirements. USAID Mission Order 21 is the primary document that details USAID procedures to ensure that the mission’s assistance program does not provide support to entities or individuals associated with terrorism, consistent with the prohibition on such support found in relevant laws and executive orders. In addition, we reviewed 27 USAID determinations of compliance for payments to PA creditors and discussed with USAID mission officials their efforts to comply with the policies and procedures in Mission Order 21 before executing payments to hospitals, companies, and banks that facilitated the payments. We also reviewed the timing of USAID’s vetting of each PA creditor that received payments, to ensure that, as required by Mission Order 21, the vetting occurred within 12 months before USAID signed the relevant debt relief grant agreement with the PA. 
Further, we reviewed external assessments of the PA Ministries of Health and Finance and Planning and financial audits of the PA Ministry of Finance and Planning, and we discussed the assessments and audits with USAID officials responsible for payments to PA creditors. We conducted this performance audit from September 2017 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for findings and conclusions based on our audit objectives. Appendix II: Comments from the U.S. Agency for International Development Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Judith McCloskey (Assistant Director), Tom Zingale (Analyst-in-Charge), Eddie Uyekawa, Jeff Isaacs, and Nicole Willems made significant contributions to this report. David Dornisch, Neil Doherty, Reid Lowe, and Roger Stoltz also contributed to the report.
Why GAO Did This Study Since 1993, the U.S. government has committed more than $5 billion in bilateral assistance to the Palestinians in the West Bank and Gaza. According to the Department of State, this assistance to the Palestinians promotes U.S. economic and political foreign policy interests by supporting Middle East peace negotiations and financing economic stabilization programs. USAID is primarily responsible for administering ESF assistance to the West Bank and Gaza. Appropriations acts for fiscal years 2015 and 2016 included provisions for GAO to review the treatment, handling, and uses of funds provided through the ESF for assistance to the West Bank and Gaza. This report examines (1) the status of ESF assistance and projects provided to the West Bank and Gaza for fiscal years 2015 and 2016, including project assistance and payments to PA creditors, and (2) the extent to which USAID conducted required vetting of PA creditors to ensure that this assistance would not support entities or individuals associated with terrorism and assessed PA ministries' capacity to use ESF assistance as intended. GAO reviewed relevant laws and regulations and USAID financial data, policies, procedures, and documents. GAO also interviewed USAID and State Department officials. What GAO Found As of March 2018, the U.S. Agency for International Development (USAID) had allocated about $545 million of funding appropriated to the Economic Support Fund (ESF) for assistance in the West Bank and Gaza for fiscal years 2015 and 2016. USAID obligated about $544 million (over 99 percent) and expended about $351 million (over 64 percent) of the total allocations. Project assistance accounted for approximately $399 million of the obligated funds, while payments to Palestinian Authority (PA) creditors accounted for $145 million (see figure). 
USAID's obligations for project assistance in the West Bank and Gaza for fiscal years 2015 and 2016 supported three development objectives—Economic Growth and Infrastructure ($239 million), Investing in the Next Generation ($107 million), and Governance and Civic Engagement (about $25 million). In fiscal years 2015 and 2016, USAID made payments directly to PA creditors—two Israeli fuel companies, to cover debts for petroleum purchases, and a local Palestinian bank, to pay off a line of credit used for PA medical referrals to six hospitals in the East Jerusalem Hospital network. USAID vetted PA creditors to ensure that the program assistance would not provide support to entities or individuals associated with terrorism and also conducted external assessments and financial audits of the PA Ministries of Health and Finance and Planning. USAID documentation showed that, as required, officials checked the vetting status of each PA creditor within 12 months before USAID signed its debt relief grant agreements with the PA. In addition, although USAID determined that it was not legally required to assess the PA Ministry of Health's medical referral services and the Ministry of Finance and Planning's petroleum procurement system, the agency commissioned external assessments of both ministries. These assessments found some weaknesses in both ministries' systems; however, USAID mission officials stated that these weaknesses did not affect USAID debt relief payments to the PA creditors. Nevertheless, USAID took additional steps to mitigate the identified weaknesses. For example, a USAID contractor worked with the Ministry of Health to update, revise, and approve guidelines for medical referrals. In addition, USAID commissioned financial audits of the debt relief grant agreements between USAID and the PA for direct payments to PA creditors in fiscal years 2015 and 2016.
The audits did not identify any ineligible costs, reportable material weaknesses in internal control, or material instances of noncompliance with the terms of the agreements. What GAO Recommends GAO is not making recommendations in this report.
Background History of the Polar Icebreakers and Icebreaking Capability Gap The Coast Guard has been responsible for carrying out the nation’s polar icebreaking missions since 1965—when it assumed primary responsibility for the nation’s polar icebreaking fleet. The Coast Guard’s responsibilities are outlined in various statutes, policies, and interagency agreements. A 2010 Coast Guard study identified gaps in the Coast Guard’s ability to support and conduct missions in the Arctic and Antarctic. As a result, in June 2013, the Coast Guard established the need for up to three heavy polar icebreakers and three medium icebreakers to adequately meet these Coast Guard mission demands. More recently, in November 2017, Coast Guard officials reiterated that they will be able to fulfill all mission requirements—which include support to agencies with Arctic responsibilities such as DOD, the National Science Foundation (NSF), Department of State, National Oceanic and Atmospheric Administration, and National Aeronautics and Space Administration—with a fleet of three heavy and three medium polar icebreakers. Coast Guard officials told us they are not currently assessing acquisition of the medium polar icebreakers because they are focusing on the HPIB acquisition and plan to assess the costs and benefits of acquiring medium polar icebreakers at a later time. The Coast Guard currently has two active polar icebreakers in its fleet— the Polar Star, a heavy icebreaker, and the Healy, a medium icebreaker. An additional Coast Guard heavy icebreaker, the Polar Sea, has been inactive since 2010 when it experienced a catastrophic engine failure. Commissioned in 1976, the Polar Star is the world’s most powerful active non-nuclear icebreaker. The less powerful Healy primarily supports Arctic research. 
Although the Healy is capable of carrying out a wide range of activities, it cannot operate independently in the ice conditions in the Antarctic or ensure timely access to some Arctic areas in the winter. See figure 1 for the Coast Guard’s active icebreakers. The Coast Guard has faced challenges in meeting the government’s icebreaking needs in recent years. For example, in June 2016, we found that when neither the Polar Sea nor the Polar Star was active in 2011 and 2012, the Coast Guard did not maintain assured, year-round access to both the Arctic and Antarctic, as the Healy cannot reach ice-covered areas with more than 4½ feet of ice. According to a January 2017 Coast Guard assessment, the Coast Guard does not plan to recommission the Polar Sea because it would not be cost-effective. Polar Star Sustainment Efforts According to Coast Guard planning documents, the Polar Star’s service life is estimated to end between fiscal years 2020 and 2023. This creates a potential heavy polar icebreaker capability gap of about 3 years, assuming the Polar Star’s service life ends in 2020 and the lead HPIB is delivered by the end of fiscal year 2023 as planned. If the lead ship is delivered later than planned in this scenario, the potential gap could be more than 3 years. As a result, according to a 2017 polar icebreaking bridging strategy, the Coast Guard is planning to recapitalize the Polar Star’s key systems starting in 2020 to extend the service life of the ship until the planned delivery of the second HPIB (see figure 2). In September 2017, we found that the Coast Guard’s $75 million cost estimate for the Polar Star life extension project may be unrealistic, in part because it was based on the assumption of continuing to use parts from the decommissioned Polar Sea, as has been done in previous maintenance events. 
Because of the finite number of parts available from the Polar Sea, the Coast Guard may have to acquire new parts for the Polar Star that could increase the $75 million estimate. As a result, we recommended that the Coast Guard complete a comprehensive cost estimate and follow cost estimating best practices before committing to the life extension project. The Coast Guard concurred with this recommendation. As of May 2018, Coast Guard officials told us they were still conducting ship engineering inspections on the Polar Star to determine the details of the work needed for the limited service life extension, which will then inform the development of a cost estimate. In January 2018, the Coast Guard completed its ship structures and machinery evaluation board report. Coast Guard officials told us that this report will help to determine the details of the work needed for the limited life extension. The January 2018 report estimated the remaining service life of the Polar Star as 5 years or less. In April 2018, the Coast Guard approved the Polar Star life extension project to establish requirements and evaluate the feasibility of alternatives that will achieve the requirements. Coast Guard officials stated they completed a notional cost estimate in April 2018 and plan to complete a detailed formal cost estimate by June 2020. Coast Guard’s and Navy’s Roles in the Heavy Polar Icebreaker Program The Coast Guard and the Navy established the IPO to collaborate and develop a management approach to acquire three HPIBs. Through the IPO, the Coast Guard planned to leverage the Navy’s shipbuilding expertise and pursue an accelerated acquisition schedule. A Coast Guard program manager heads the IPO, which includes embedded Navy officials who provide acquisition, contracting, engineering, cost- estimating, and executive support to the program. 
The IPO has responsibility for managing and executing the HPIB’s acquisition schedule, acquisition oversight reviews, budget and communications, and interagency coordination. In addition, the IPO coordinates with several key organizations within the Coast Guard and Navy that contribute to the HPIB program, including:

- Coast Guard Capabilities Directorate: This directorate is responsible for identifying and providing capabilities, competencies, and capacity and developing standards to meet Coast Guard mission requirements. The directorate sponsored the HPIB’s operational requirements document, which provides the key performance parameters the HPIB must meet—such as icebreaking, endurance, and interoperability thresholds and objectives.
- Ship design team: The ship design team includes Coast Guard and Navy technical experts that develop ship specifications based on the HPIB operational requirements document. The ship design team is under the supervision of a Coast Guard ship design manager, who provides all technical oversight for development of the HPIB design, including development of “indicative,” or concept, designs used to inform the ship’s specifications and the program’s lifecycle cost estimate. Generally, the purpose of an indicative design is to determine requirements feasibility, support cost estimating, and provide a starting point for trade studies.
- Naval Sea Systems Command (NAVSEA) Cost Engineering and Industrial Analysis Group (NAVSEA 05C): The group developed the HPIB lifecycle cost estimate, which informs the program’s cost baselines and affordability constraints. NAVSEA 05C developed the HPIB’s lifecycle cost estimate based on the ship design team’s indicative design and the technical assumptions outlined in the program cost estimating baseline document.
- NAVSEA Contracts Directorate (NAVSEA 02): This directorate includes the Navy contracting officer who released the HPIB detail design and construction contract’s solicitation in March 2018 and plans to award the contract under Navy authorities. The contracting officer performs contract management services and provides guidance to the IPO to help ensure the HPIB’s contract adheres to DOD and Navy contracting regulations and guidance.

Figure 3 shows key organizations that support the HPIB program and their responsibilities prior to the award of the contract. Since establishing the IPO, the Coast Guard, DHS, and the Navy formalized agreements on their approach for the HPIB acquisition in three 2017 memorandums of agreement and understanding. These agreements define the Navy’s and Coast Guard’s roles in the HPIB acquisition with respect to funding responsibilities, acquisition oversight functions, and contracting and program management authorities, among other things. Heavy Polar Icebreaker Program’s Acquisition Framework DHS, the Coast Guard, and the Navy have agreed to manage and oversee the HPIB program using DHS’s acquisition framework, as Coast Guard is a component within DHS. DHS’s acquisition policy establishes that a major acquisition program’s decision authority shall review the program at a series of predetermined acquisition decision events (ADE) to assess whether the major program is ready to proceed through the acquisition life-cycle phases (see figure 4). As we found in April 2018, the Coast Guard and the Navy will adhere to a tailored DHS acquisition framework for the HPIB program that supplements DHS ADE reviews with additional “gate” reviews adopted from the Navy’s acquisition processes. The DHS Under Secretary for Management retains final decision authority for the HPIB’s ADEs as the acquisition decision authority.
The HPIB program achieved a combined ADE 2A/2B in February 2018, when DHS approved the program's baselines and permitted the program to enter into the Obtain Phase of the DHS acquisition framework. The corresponding acquisition decision memorandum was signed in March 2018. The Coast Guard and the Navy plan to start detail design work for the HPIB in June 2019, once the detail design and construction contract is awarded. In Navy shipbuilding, detail design work can include outlining the steel structure of the ship; determining the routing of systems, such as electrical and piping, throughout the ship; and developing work instructions for constructing elements of the ship, such as installation drawings and material lists. The program's ADE 2C, or the low-rate initial production decision, corresponds with the approval to start construction of the lead ship, which is planned to begin no later than June 2021. Key steps typically taken in the construction phase of a Navy ship include steel cutting and block fabrication, assembly and outfitting of blocks, keel laying and block erection, launch of the ship from dry dock, system testing and commissioning, sea trials, and delivery and acceptance (see appendix II for more detailed information on each shipbuilding phase). ADE 3, scheduled to be held no later than March 2026, authorizes the program to start follow-on test and evaluation. Figure 5 shows the HPIB's acquisition framework, including ADE milestones and major program decision points, and how they relate to the shipbuilding phases.

DHS acquisition policy establishes that the acquisition program baseline is the fundamental agreement among program-, component-, and department-level officials that documents what will be delivered, how it will perform, when it will be delivered, and what it will cost. Specifically, the program baseline establishes a program's schedule, costs, and key performance parameters, and covers the entire scope of the program's lifecycle.
The HPIB acquisition program baseline serves as an agreement between the Coast Guard and DHS that the Coast Guard will execute the acquisition within the bounds detailed in the document. The acquisition program baseline establishes objective (target) and threshold (maximum acceptable for cost, latest acceptable for schedule, and minimum acceptable for performance) baselines. Tables 1, 2, and 3 show selected cost, schedule, and performance baselines that DHS approved for the HPIB program at ADE 2A/2B in March 2018.

After DHS approved the HPIB's program baselines, the Navy released the solicitation for the program's detail design and construction contract in March 2018. As revised, the solicitation requires offerors to submit their technical proposals in August 2018 and their price proposals in October 2018. The Navy plans to competitively award the HPIB detail design and construction contract to a single shipyard for all three ships in June 2019. The contract award would include design (advance planning and engineering) and long lead time materials, with separate options for detail design and construction of each of the three ships. The HPIB contract award and administration will follow DOD and Navy contracting regulations and policies, including the Defense Federal Acquisition Regulation Supplement. Although the Navy is planning to award the contract, the source selection authority is from the Coast Guard, with both Coast Guard and Navy personnel serving on the source selection evaluation board.

Starting Programs with Sound Business Cases

Our prior work has found that successful programs start out with solid, executable business cases before setting program baselines and committing resources. For Coast Guard programs, such a business case would be expected at ADE 2A/2B.
A sound business case requires balance between the concept selected to satisfy operator needs and the resources—technologies, design knowledge, funding, and time—needed to transform the concept into a product—or in the HPIB’s case, a ship. At the heart of a business case is a knowledge-based approach—we have found that successful shipbuilding programs build on attaining critical levels of knowledge at key points in the shipbuilding process before significant investments are made. We have previously found that key enablers of a good business case include firm, feasible requirements; plans for a stable design; mature technologies; reliable cost estimates; and realistic schedule targets. Without a sound business case, acquisition programs are at risk of breaching the cost, schedule, and performance baselines set when the program was initiated—in other words, experiencing cost growth, schedule delays, and reduced capabilities. In November 2016, we found that a particular challenge for Congress is the fact that committees must often consider requests to authorize and fund a new program well ahead of program initiation—the point at which key business case information would be presented. Given the time lag between budget requests and the decision to initiate a new acquisition program, Congress could be making critical funding decisions with limited information about the soundness of the program’s business case. Although the HPIB program has already proceeded through ADE 2A/2B and established acquisition program baselines, information about the soundness of the HPIB’s business case will be helpful for decision makers as the Coast Guard and the Navy request funding in preparation for the detail design and construction contract award in June 2019 and anticipated construction start by the end of June 2021—two points at which significant resource commitments will need to be made. 
The Coast Guard Did Not Assess Design Maturity or Technology Risks Prior to Setting the Polar Icebreaker Program's Baselines

The Coast Guard set the HPIB's acquisition program baselines at ADE 2A/2B without conducting a preliminary design review to assess the design maturity of the ship or a technology readiness assessment to determine the maturity of key technologies. This approach meets DHS acquisition policy requirements but is contrary to our best practices (see figure 6). While the Coast Guard is committed to a stable design prior to the start of lead ship construction, it established baselines without clear knowledge of the ship design because it does not plan to assess design maturity until after the planned June 2019 award of the detail design and construction contract. In addition, without a technology readiness assessment, the Coast Guard does not have full insight into whether the technologies are mature, potentially underrepresenting technical risk and increasing design risk. As a result, the Coast Guard will be committing resources to the HPIB program without key elements of a sound business case, increasing the risk that the program will exceed its planned costs and schedule.

Polar Icebreaker Program Took Steps to Identify Design Risks but Did Not Assess Design Maturity Prior to Setting Baselines

Early Efforts to Identify Design Risks

To help inform the HPIB's key performance parameters, specifications, and design considerations prior to setting the acquisition program baselines, the Coast Guard conducted design studies and partnered with Canada (with which the United States has an existing cooperative agreement) to gain knowledge on the HPIB's design risks. For example:

Starting in November 2016, the HPIB ship design team developed an indicative (or concept) design, which has undergone several revisions as more information became available, completing a fifth iteration in September 2017.
To inform the HPIB indicative design, the ship design team told us they used design elements with validated characteristics, such as the hull form, from existing Coast Guard icebreakers, including the Polar Star, Polar Sea, Healy, and the Mackinaw (a Great Lakes icebreaker). Collectively, these icebreakers informed elements of the indicative design such as the size and producibility of the ship. The indicative design represents an icebreaker design that meets the threshold key performance parameter of breaking 6 feet of ice at a continuous speed of 3 knots rather than the objective parameter of 8 feet of ice at a continuous speed of 3 knots. Coast Guard officials stated that based on preliminary analysis, a design that meets the HPIB's objective key performance parameters would be an entirely separate design and would be too costly to construct. Coast Guard officials told us that in addition to price, the shipbuilders' HPIB proposals will be evaluated on design factors, including how much the potential design exceeds the threshold icebreaking performance parameters.

In February 2017, the Coast Guard contracted with five shipbuilders, who teamed with icebreaker design firms, to conduct a series of iterative design studies. These studies examined major design cost drivers and technology risks for the HPIB program. Coast Guard officials stated the results of these studies helped inform and refine the ship's specifications and provided them with a better understanding of the technology risks and schedule challenges. As of February 2018, each contract was valued at about $5.6 million. Under these contracts, each shipbuilder completed five detailed industry study iterations. For example, the shipbuilders analyzed various hull forms, propulsion systems, cold weather operations, space arrangements, and icebreaking enhancements.
In April 2017, the Coast Guard completed an alternatives analysis study, an independent study required prior to ADE 2A that identifies the most efficient method of addressing an identified capability gap. The study examined various options, including whether existing foreign icebreakers could meet the Coast Guard's HPIB performance requirements. The Coast Guard analyzed 18 domestic and foreign icebreaker designs against the HPIB's key performance parameters and other requirements, such as seakeeping and habitability. The icebreaker designs included a variety of icebreakers in terms of propulsion power and size, such as nuclear-powered Russian icebreakers and polar research and supply vessels from Australia, Finland, and Germany. The alternatives analysis found that only a Russian nuclear-powered icebreaker and a design for a Canadian diesel-electric-powered icebreaker, which has yet to be constructed, passed initial screening for design maturity and performance requirements. Given a previous independent study analyzing the cost-effectiveness of nuclear-powered icebreakers, the Coast Guard deemed a nuclear-powered icebreaker design infeasible. The alternatives analysis also noted that the Canadian design met icebreaking requirements. However, Coast Guard officials told us the Canadian design did not meet requirements such as habitability and military-oriented multi-mission tasks, but the design could potentially be modified to meet those needs. In addition, IPO officials stated the Canadian design was designed for science missions rather than military missions. The Canadian design was considered by some of the shipbuilders as a starting point in examining HPIB design risks.

From May to August 2017, the Coast Guard tested two scale models of icebreakers at the Canadian National Research Council's ice tank facility in Newfoundland.
Coast Guard officials told us the testing helped to mitigate potential design risks with the hull form and propulsors (mechanical devices that generate thrust to provide propulsion for the ship). The Coast Guard tested the resistance, powering, and maneuvering of the model icebreakers' hull form and propulsion to inform its indicative design and discovered during model testing that the ship's maneuverability was a challenge. However, through model testing, the Coast Guard was able to validate general characteristics of its indicative design, including power needs and the hull form. In addition to model testing, Canadian Coast Guard officials told us that the U.S. Coast Guard has engaged with them in formal and informal exchanges regarding icebreaker acquisitions more generally.

As a result of its indicative design, industry studies, and model testing efforts, the Coast Guard identified the integrated power plant, propulsors, and hull form as key design considerations for the HPIB. Because these design elements work together to ensure the HPIB can meet its icebreaking requirements, we determined that these are the HPIB's main design risks (see figure 7).

Although the Coast Guard undertook early efforts to identify design risks, it did not conduct a preliminary design review for the HPIB program prior to setting program acquisition baselines at ADE 2A/2B. These baselines inform DHS's and Coast Guard's decisions to commit resources. Our best practices for knowledge-based acquisitions state that before program baselines are set, programs should hold key systems engineering events, such as a preliminary design review, to help ensure that requirements are defined and feasible and that the proposed design can be met within cost, schedule, and other system constraints. Similarly, in November 2016, we found that establishing a preliminary design through early detailed systems engineering results in better program outcomes than doing so after program start.
During the HPIB’s preliminary design review, the Coast Guard plans to verify that the contractor’s design meets the requirement of the ship specifications and is producible, and the schedule is achievable, among other activities. The Coast Guard has yet to conduct a preliminary design review for the HPIB program because DHS’s current acquisition policy does not require programs to do so until after ADE 2A/2B. The Coast Guard plans to hold the preliminary design review by December 2019, after the award of the detail design and construction contract. Holding a preliminary design review after ADE 2A/2B is consistent with DHS policy. However, in April 2017, we found that DHS’s sequencing of the preliminary design review is not consistent with our acquisition best practices, which state that programs should pursue a knowledge-based acquisition approach that ensures program needs are matched with available resources—such as technical and engineering knowledge, time, and funding—prior to setting baselines. In that report, we found that by initiating programs without a well-developed understanding of system needs through key engineering reviews such as the preliminary design review, DHS increases the likelihood that programs will change their user-defined key performance parameters, costs, or schedules after establishing their baselines. As a result, we recommended that DHS update its acquisition policy to require key technical reviews, including the preliminary design review, to be conducted prior to approving programs’ baselines. DHS concurred with this recommendation and stated that it planned to initiate a study to assess how to better align its processes for technical reviews and acquisition decisions. Upon completion of the study, DHS plans to update its acquisition policies, as appropriate. 
Instead of establishing the HPIB program’s acquisition program baselines after assessing the shipbuilder’s preliminary design, the Coast Guard established cost baselines based on a cost estimate that used the ship design team’s indicative design. Coast Guard officials told us that the selected shipbuilder will develop its own HPIB design as part of the detail design and construction contract, independent of the indicative design. The ship design team noted that the indicative design informed the ship’s specifications but is not meant to be an optimized design, does not represent a design solution, and will not be provided to the shipbuilders. Coast Guard officials stated that the shipbuilders that respond to the request for proposals will propose their own designs based on their production capabilities, which will drive where they will place components, such as bulkheads, within the design. As a result, the shipbuilder’s design will be different from the indicative design. By setting the HPIB’s acquisition program baselines prior to gaining knowledge on the shipbuilder’s design, the Coast Guard has established cost, schedule, and performance baselines without a stable or mature design. Although completing the preliminary design review after setting program baselines is consistent with DHS policy, this puts the Coast Guard at risk of breaching its established baselines and having to revise them later in the acquisition process, after a contract has been signed and significant resources have already been committed to the program. At that point, the program will be well underway and it will be too late for decision makers to make appropriate tradeoff decisions between requirements and resources without causing disruptions to the program. 
Consistent with DHS acquisition policy, DHS and the Coast Guard must monitor the HPIB program against the acquisition program baselines set at ADE 2A/2B; however, DHS acquisition policy does not require an official update to the baseline unless the program breaches its baselines or until the next major milestone, whichever occurs first. For the HPIB, the next milestone is ADE 2C, which is currently planned for no later than June 2021. ADE 2C corresponds to the approval of low-rate initial production and, in the case of the HPIB, the start of construction for the lead ship. Evaluating the HPIB's baselines at ADE 2C, immediately before the shipbuilder is authorized to start construction, is too late because the funding required for the construction phase likely would have already been requested and provided. On the other hand, evaluating the acquisition program baselines after the preliminary design review but before ADE 2C would help to ensure that the knowledge gained during the preliminary design review is used to inform the program baselines and business case for investing in the HPIBs before significant resource commitments are made.

Although the Coast Guard set the acquisition program baselines prior to gaining knowledge on the feasibility of the selected shipbuilder's design, it has expressed a commitment to having a stable design prior to the start of lead ship construction. In Navy shipbuilding, detail design typically encompasses three design phases:

Basic design. Includes fixing the ship steel structure; routing all major distributive systems, including electricity, water, and other utilities; and ensuring the ship will meet the performance specifications.

Functional design. Includes providing further iteration of the basic design, providing information on the exact position of piping and other outfitting in each block, and completing a 3D product model.

Production design.
Includes generating work instructions that show detailed system information, as well as guidance for subcontractors and suppliers, installation drawings, schedules, material lists, and lists of prefabricated materials and parts.

Shipbuilding best practices we identified in 2009 found that design stability is achieved upon completion of the basic and functional designs. At this point of design stability, the shipbuilder has a clear understanding of the ship structure as well as how every system is set up and routed throughout the ship. Consistent with our best practices, prior to the start of construction on the lead ship, the Coast Guard will require the shipbuilder to complete basic and functional designs, develop a 3D model output, and provide at least 6 months of production information to support the start of construction. IPO officials have stated that they are committed to ensuring that the HPIB's design is stable before construction of the lead ship begins, given the challenges prior Navy shipbuilding programs have experienced when construction proceeded before designs were completed.

Coast Guard Intends to Use Proven Technologies for the Polar Icebreaker Program but Has Not Assessed Their Maturity

The Coast Guard intends to use what it refers to as proven technologies for the HPIB but did not conduct a technology readiness assessment prior to ADE 2A/2B to determine the maturity of the key technologies that drive the performance of the ship, which is inconsistent with our best practices. A technology readiness assessment is a systematic, evidence-based process that evaluates the maturity of critical technologies (hardware and software technologies critical to the fulfillment of the key objectives of an acquisition program). This assessment does not eliminate technology risk but, when done well, can illuminate concerns and serve as a basis for realistic discussions on how to mitigate potential risks.
According to our best practices, a technology readiness assessment should be conducted prior to program initiation. DHS systems engineering guidance also recommends conducting a technology readiness assessment before ADE 2A to help ensure that the program's technologies are sufficiently mature by the start of the program. The Coast Guard intends to use what it has deemed "state-of-the-market" or "proven" technologies for the HPIB. DHS's technical assessment of the HPIB noted that the February 2017 design studies resulted in industry producing designs that used commercially available, state-of-the-market, and proven technologies. From the studies and industry engagement, Coast Guard officials determined that the technologies required for the HPIB, such as the integrated power plant and azimuthing propulsors (thrusters that rotate up to 360 degrees and provide propulsion to the ship), are available commercially and do not need to be developed. Coast Guard officials further stated that the integrated power plant is the standard power plant used on domestic and foreign icebreakers. Similarly, Coast Guard officials told us that market survey data on azimuthing propulsors show that ice-qualified azimuthing propulsors in the power range have been used on foreign icebreakers. The Coast Guard has also communicated to industry through the request for proposals that the HPIB should have only proven technology and plans to have the shipbuilders provide information on the maturity of the technologies when they submit their proposals. As a result, Coast Guard officials stated the HPIB program does not have any critical technologies, as defined by DHS systems engineering guidance, and does not need to conduct a technology readiness assessment.
However, according to DHS systems engineering guidance, a technology element is considered critical if the system being acquired depends on this technology to meet operational requirements, and if the technology or its application is new, novel, or in an area that poses major technological risk during detailed design or demonstration. The guidance further states that technologies can become critical if they need to be modified from prior successful use or are expected to operate in an environment beyond their original demonstrated capability. Similarly, according to our best practices for assessing technology readiness, critical technologies are not just technologies that are new or novel. Technologies used on prior systems can also become critical if they are being used in a different form, fit, or function. Our technology readiness assessment guide notes that program officials sometimes disregard critical technologies when they have longstanding history, knowledge, or familiarity with them. The best practices guide cites examples of organizations not considering a technology critical if it has been determined to be mature, has already been fielded, or does not currently pose a risk to the program. Additionally, our guide notes that contractors may be overly optimistic about the maturity of critical technologies, especially prior to contract awards. According to our best practices guide, presuming a previously used technology to be mature is problematic when the technology is being reapplied to a different program or operational environment. As a result, based on our analysis of available Coast Guard information, we believe the HPIB's planned integrated power plant and azimuthing propulsors should be considered critical technologies, given their role in meeting key performance parameters, their use in a different environment from prior ships, and the extent to which they pose major cost risks (see table 4).
Without conducting a technology readiness assessment, the Coast Guard does not have insight into how mature these critical technologies are. According to our best practices, evaluating critical technologies requires disciplined and repeatable steps and criteria to perform the assessment and make credible judgments about their maturity. The evaluation of each critical technology must be based on evidence such as data and test results. In addition, the team that assesses the technologies must be objective and ideally independent. Instead, the Coast Guard has relied on industry to provide information on the maturity of the HPIB’s technologies and uses terms such as “state-of-the-market” or “proven,” which do not translate into meaningful measures for systematically communicating the technology readiness, especially when discussing new applications of existing technologies. Additionally, even if the Coast Guard determines the maturity levels of the HPIB’s technologies through an objective and independent technology readiness assessment, the program’s planned level of maturity for the ship’s technologies falls short of our best practices. According to the HPIB’s systems engineering tailoring plan and request for proposals, the program intends on implementing only proven technologies that have been demonstrated in a relevant environment, commensurate with a technology readiness level (TRL) of 6. However, our best practices do not consider a technology to be mature until it has been demonstrated in an operational environment, commensurate with a TRL 7. Specifically, our best practices for shipbuilding recommend that programs should require critical technologies to be matured into actual prototypes and successfully demonstrated in an operational or a realistic environment (TRL 7) before a contract is awarded for the detail design of a new ship. DHS’s systems engineering guidance also states that critical technologies below TRL 7 should be identified as technical risks. 
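The screening logic described above, in which any critical technology demonstrated below TRL 7 is flagged as a technical risk per DHS systems engineering guidance, can be sketched in a few lines. The technology names and TRL values below are hypothetical placeholders for illustration, not actual Coast Guard or DHS assessments.

```python
# Illustrative sketch: flag critical technologies whose demonstrated
# maturity falls below TRL 7 (demonstrated in an operational environment).
# Names and TRL values are hypothetical, not program data.
THRESHOLD_TRL = 7

critical_technologies = {
    "integrated power plant": 6,   # demonstrated in a relevant environment (TRL 6)
    "azimuthing propulsors": 6,
    "hull form": 7,
}

# Any technology below the threshold would be carried as a technical risk.
risks = sorted(name for name, trl in critical_technologies.items()
               if trl < THRESHOLD_TRL)
print("Technical risks:", risks)
```

Under these assumed values, both TRL 6 technologies would be identified as technical risks, which is the gap the report describes between the program's "proven technology" threshold (TRL 6) and the TRL 7 bar in the best practices.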
By not conducting a technology readiness assessment and not identifying, assessing, and maturing its critical technologies prior to setting the HPIB's program baselines and prior to awarding the detail design contract, the Coast Guard is underrepresenting the program's technical risks and understating its cost, schedule, and performance risks. Technology risks that manifest later could require the shipbuilder to redesign parts of the ship, which increases the risk of rework and schedule delays during the construction phase.

The Coast Guard Based the Polar Icebreaker Program's Baselines on a Cost Estimate That Is Not Fully Reliable and an Optimistic Schedule

The cost estimate and schedule that informed DHS's decision to authorize the HPIB program do not reflect the full scope of the program's risks. We found that while the Navy substantially adhered to a number of best practices when it developed the HPIB's cost estimate, the estimate is not fully reliable, primarily because it does not reflect the full range of possible costs over the HPIB's 30-year lifecycle. We also found the HPIB schedule was not informed by a realistic assessment of the work necessary to construct the ship. Rather, the schedule was driven by the potential gap in icebreaking capabilities once the Coast Guard's only operating HPIB reaches the end of its service life. Reliable cost estimates and schedules are key elements of an executable business case, and they are needed at the outset of programs, when competitive pressures to obtain funding are high, to provide decision makers with insight into how risks affect a program's ability to deliver within its cost and schedule goals.
Polar Icebreaker Program’s Cost Estimate Substantially Met Best Practices but Is Not Fully Reliable Because It Does Not Include Full Range of Possible Costs We found that the lifecycle cost estimate used to inform the HPIB program’s baselines substantially adheres to most cost estimating best practices; however, the estimate is not fully reliable. The cost estimate only partially met best practices for being credible primarily because it did not quantify the range of possible costs over the entire life of the program. We assessed the program’s lifecycle cost estimate, which was performed by NAVSEA 05C, against our best practices for cost estimating. For our reporting purposes, we collapsed 18 of our applicable best practices into the four general characteristics of a reliable cost estimate: comprehensive, well-documented, accurate, and credible. Figure 8 provides a summary of our assessment of the HPIB’s lifecycle cost estimate. Comprehensive. We found the HPIB cost estimate substantially met the best practices for being comprehensive. For example, the estimate includes government and contractor costs over the full lifecycle of all three ships and contains sufficient levels of detail in the program’s work breakdown structure—a hierarchical breakdown of the program into specific efforts, including system engineering and ship construction. The estimate also documents detailed ground rules and assumptions, such as the learning curve used to capture expected labor efficiencies for follow- on ships. However, we found that the costs for disposal of the three ships were not at a level of detail to ensure that all costs were considered and not all assumptions, particularly regarding operating and support costs, were varied to reflect the impact on cost should these assumptions change. Well-Documented. We also found the cost estimate substantially met the best practices for being well-documented. 
Specifically, the cost estimate's documentation mostly captured the source data used as well as the primary methods, calculations, results, rationales, and assumptions used to generate each cost element. However, the documentation alone did not provide enough information for someone unfamiliar with the cost estimate to replicate what was done and arrive at the same results. For example, NAVSEA officials discussed and showed us how historical data from the analogous ships were used to create the estimate, but these specific sources were not found in the cost estimate documentation.

Accurate. We found the estimate substantially met best practices for being accurate. In particular, the estimate was properly adjusted for inflation, and we did not find any mathematical errors in the estimate calculations we inspected. Officials stated that labor and material cost data from recent, analogous programs were used in the estimate. While the documentation does not discuss the reliability, age, or relevance of the cost data, NAVSEA officials provided us with additional information regarding those data characteristics. Additionally, officials provided documentation that demonstrated that they had updated the cost estimate several times in the last 2 years.

Credible. We found the HPIB cost estimate partially met the best practices associated with being credible. A credible cost estimate should analyze the sensitivity of the program's expected cost to changes among key cost-driving assumptions and risks. It should also quantify the cost impact of risks related to assumptions changing and variability in the underlying data used to create the cost estimate. Credible cost estimates should also be cross-checked internally and reconciled with an independent cost estimate that is performed by an outside group. These two best practices ensure that the estimate has been checked for any potential bias.
Our review of the HPIB cost estimate determined it partially met the best practices for being credible due to the following:

Exclusions of major costs from sensitivity analysis and risk and uncertainty analysis. The cost estimators conducted sensitivity analysis as well as risk and uncertainty analysis on only a small portion of the total lifecycle costs. For both the sensitivity analysis and the risk and uncertainty analysis, we found that NAVSEA only modeled cost variation in the detail design and construction portion of the program and excluded from its analyses any risk impacts related to the remainder of the acquisition, operating and support, and disposal phases, which altogether comprise about 75 percent of the lifecycle cost. The cost estimate documents that the limited number of active icebreakers and available data prevented NAVSEA from identifying accurate risk bounds for the operating and support and disposal phases. Further, NAVSEA officials told us that because they used historical data, including average maintenance costs from the Healy, they felt that their estimate was reasonable. However, similar to how NAVSEA consulted with the ship design team to establish high- and low-end costs using analogous ships, NAVSEA could have used cost ranges in the historical data to develop risk bounds for the remaining costs in the acquisition, operations and support, and disposal phases. Without performing a sensitivity analysis on the entire lifecycle cost of the three ships, it is not possible for NAVSEA to identify the key elements affecting the overall cost estimate. Further, without performing a risk and uncertainty analysis on the entire lifecycle cost of the three ships, it is not possible for NAVSEA to determine a level of confidence associated with the overall cost estimate. By not quantifying important risks, NAVSEA may have underestimated the range of possible costs for about three-quarters of the entire program.
Lack of applied correlation in the risk and uncertainty analysis. In its independent assessment of the HPIB cost estimate, the DHS Cost Analysis Division similarly found that the results of the risk and uncertainty analysis may understate the range of possible cost outcomes for the HPIB. The DHS assessment noted that NAVSEA did not use applied correlation—which links costs for related items so that they rise and fall together during the analysis—in its cost model. According to a joint agency handbook on cost risk and uncertainty, applied correlation helps to ensure that cost estimates do not understate the possible variation in total program costs. Omitting applied correlation when assessing a cost estimate for risk can cause an understated range of possible program costs and create a false sense of confidence in the cost estimate. For example, absent applied correlation, the DHS assessment noted that the Navy calculated with a 99-percent level of confidence that the program will not exceed its threshold (maximum acceptable) acquisition cost. Navy officials explained that they will incorporate applied correlation in future updates to the cost estimate when better data are available. However, by applying correlation factors from the joint agency handbook to the same data that NAVSEA used, DHS’s Cost Analysis Division determined that NAVSEA overstated the likelihood of the program not exceeding its threshold acquisition cost.

Cost estimate not fully reconciled with a comparable independent cost estimate. While the Naval Center for Cost Analysis performed an independent cost estimate of the HPIB program, the office used a different methodology from NAVSEA’s, and its estimate was based on an earlier version of the indicative ship design and associated technical baseline.
NAVSEA officials told us that before the Coast Guard’s ship design team updated the indicative ship design and technical baseline, NAVSEA met twice with the Naval Center for Cost Analysis to reconcile their results. However, NAVSEA officials told us that due to the speed at which the program was progressing, no reconciliation occurred after the ship design team finalized the indicative ship design. While we did not find any specific ground rules and assumptions that differed between the two estimates, some ship characteristics had changed, such as the weight estimates for propulsion and auxiliary systems, among others. The use of two different technical baselines creates differences in the two estimates and makes them less comparable to one another. For additional details on our assessment of the HPIB’s cost estimate against our 18 cost estimating best practices, see appendix III.

Because the sensitivity analysis and the risk and uncertainty analysis excluded the majority of the HPIB program’s lifecycle costs, and because the estimate was reconciled with an independent cost estimate based on a different iteration of the ship design, the cost estimate does not provide a fully credible range of costs the program may incur. Moreover, the omission of applied correlation provides a false sense of confidence that the program will not exceed its threshold cost. As a result, the estimate provides an overly optimistic assessment of the program’s vulnerability to cost growth should risks be realized or current assumptions change. This, in turn, may underestimate the lifecycle cost of the program and calls into question the cost baselines DHS approved and used to inform the HPIB’s budget request.
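The effect described in DHS’s assessment, in which omitting applied correlation understates the spread of possible total costs, can be illustrated with a small Monte Carlo sketch. The cost elements, uncertainty levels, and the shared-shock model of correlation below are all invented for illustration; they are not the NAVSEA model or the handbook’s correlation factors:

```python
# Illustrative Monte Carlo showing how omitting applied correlation
# understates the spread of total program cost. All figures invented.
import random
import statistics

random.seed(7)
N = 20_000
means = [2500, 500, 6300]  # cost elements, $ millions (invented)

def simulate(correlated: bool) -> list:
    """Draw N total-cost samples, with or without a shared shock."""
    totals = []
    for _ in range(N):
        shock = random.gauss(0, 0.10)  # program-wide shock (correlation)
        draws = []
        for m in means:
            noise = random.gauss(0, 0.10)  # element-specific uncertainty
            factor = 1 + (shock + noise if correlated else noise)
            draws.append(m * factor)
        totals.append(sum(draws))
    return totals

sd_indep = statistics.stdev(simulate(correlated=False))
sd_corr = statistics.stdev(simulate(correlated=True))
print(f"std dev, independent elements: {sd_indep:7.0f}")
print(f"std dev, correlated elements:  {sd_corr:7.0f}")
# The correlated case produces a noticeably wider distribution, so
# confidence levels computed from the independent case are overstated.
```

Because independent errors partially cancel when summed, the uncorrelated model compresses the total-cost distribution, which is exactly how an analysis can report a 99-percent confidence of staying under a threshold that a correlated model would not support.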
Without a reliable cost estimate to inform the business case for the HPIB prior to award of the contract option for lead ship construction, Congress is at risk of committing to a course of action without a complete understanding of the program’s longer-term potential for cost growth.

Polar Icebreaker Program’s Optimistic Schedule Is Driven by Capability Gap and Does Not Reflect Robust Analysis

The Coast Guard set an optimistic schedule baseline for the HPIB based on operational need, but its approach does not reflect a robust analysis of what is realistic and feasible. According to DHS and Coast Guard acquisition guidance, the goal of ADE 2A/2B is, among other things, to ensure that the program’s schedule baseline is executable at an acceptable cost. Rather than building a schedule based on knowledge—including determining realistic schedule targets, analyzing how much time to include in the schedule to buffer against potential delays, and comprehensively assessing schedule risks—the Coast Guard used the estimated end date of the Polar Star’s service life as the primary driver to set the lead ship’s objective (or target) delivery date of September 2023 and threshold (latest acceptable) delivery date of March 2024.

Analysis Conducted to Determine Lead Ship Construction Schedule Not Robust

The Coast Guard and the Navy did not conduct a robust analysis to determine how realistic the 2.5- to 3-year construction cycle time is for the lead HPIB before setting the schedule baseline. Our best practices for developing project schedules state that, rather than meeting a particular completion date, estimating how long an activity takes should be based on the effort required to complete the activity and the resources available. Doing so ensures that activity durations and completion dates are realistic and supported by logic.
The Coast Guard and the Navy validated the reasonableness of the 2.5- to 3-year construction time by comparing this duration to historical Navy ship construction data. Program officials told us that they used 211 Navy ships in their analysis and determined that the HPIB’s construction schedule was within historical norms given its weight. However, program officials told us they included both lead and follow-on ships in their analysis. As we have found in our prior Navy shipbuilding work, schedule delays tend to be amplified for lead ships in a class. Therefore, we believe the program’s analysis for the lead ship was overly optimistic. The Coast Guard also sought industry feedback to determine whether 2.5 to 3 years to build the lead HPIB was feasible. Design study information provided to the Coast Guard by several shipbuilders estimated that they would need between 2.5 and 3.5 years to build the lead ship. We determined that the Coast Guard used the more optimistic estimate of 2.5 years for the objective delivery date and 3 years for the threshold delivery date. Three years was also the time frame reflected in the request for proposals for the detail design and construction contract. The request for proposals lists December 2023 as the target delivery date for the lead ship, which is approximately 3 years from the objective construction start date. Further, we compared the HPIB’s planned construction schedule to the construction schedules of delivered lead ships for major Coast Guard and Navy shipbuilding programs active in the last 10 years as well as the Healy. We found that the HPIB’s lead ship construction cycle time of 2.5 to 3 years is optimistic, as only three of the ten ships in our analysis were constructed in 3 years or less. For the purposes of our analysis, we included information on each ship’s weight and classification, both of which can affect complexity and, therefore, construction times (see figure 9).
The Coast Guard also did not conduct any analysis to identify a reasonable amount of margin to include in the program schedule baseline to account for any delays. Estimating and documenting schedule margin based on an analysis of schedule risks helps to ensure that a program’s baseline schedule is achievable despite delays that may unexpectedly arise. Program officials told us that the only margin included in the HPIB schedule is the 6 months between the objective and threshold dates—the maximum time between objective and threshold dates before DHS policy requires additional rationale and justification. According to the request for proposals, the winning shipbuilder will examine schedule risks while preparing an integrated schedule. In addition, Coast Guard officials told us that the current schedule will remain largely notional until the winning shipbuilder provides detailed updates to the schedule. Delays in project schedules, whether they are in the program’s control or not, should be expected. For example, in prior shipbuilding programs we have reviewed, we have found that delays have resulted from a number of issues, including redesign work to address issues discovered during pre-delivery testing, key system integration problems, and design quality issues. Delays outside of the program’s control such as funding instability, late material delivery, and bid protests have previously affected a program’s ability to meet schedule. Program officials told us these and other schedule risks are not accounted for in the HPIB schedule. Further, our analysis of 12 selected shipbuilding acquisition programs active in the last 10 years shows that the Navy and the Coast Guard have delayed delivery of all but one lead ship from their original planned delivery dates by more than 6 months, with delays occurring both before and after the start of construction. The delays in lead ship deliveries ranged from 9 months to 75 months. 
For the purposes of our analysis, we included the lead ships of major Coast Guard and Navy shipbuilding programs that have been active from 2008 to 2018. We excluded the Navy submarines and aircraft carriers from our analysis because we determined that their size and complexity did not make them reasonable comparisons to the HPIB (see figure 10). By supporting the lead ship construction time with overly optimistic analysis and by not conducting analysis to estimate a reasonable amount of margin, the Coast Guard’s HPIB schedule does not fully account for likely or unforeseen delays, which would help ensure that the planned delivery date for the lead ship is feasible.

Schedule Risks after Construction Start Not Identified

The Coast Guard has set the HPIB’s schedule baselines, including when all three ships are planned to achieve full operational capability, but has not yet identified risks for the program’s schedule that could occur after the start of lead ship construction, such as risks related to the construction schedule or concurrency between ship testing and construction of subsequent ships. According to the HPIB risk management plan, the program should formally track risks, which includes developing risk mitigation plans and reporting risks to DHS. Prior to setting its baselines, the Coast Guard formally tracked some schedule risks that affect the program’s ability to start construction on time, such as an aggressive schedule for releasing the request for proposals for the detail design and construction contract. IPO officials told us they retired that risk because the Navy released the request for proposals in March 2018. However, our analysis of the HPIB construction schedule and 6-month margin for delays found the program’s schedule was optimistic, thereby warranting additional risk tracking and management.
The DHS Office of Systems Engineering also identified and recommended the Coast Guard track and take steps to mitigate HPIB’s schedule risks, including those related to concurrency. In its technical assessment, this office noted that the program plans to deliver the first two ships prior to completing initial operational testing and evaluation for the lead ship. The assessment further noted that construction on the third ship is planned to be nearly three-quarters finished prior to completing initial operational testing and evaluation. DHS’s Office of Systems Engineering found that this concurrency creates cost, schedule, and technical risk resulting from rework that may be necessary to address deficiencies found during initial testing. By not comprehensively and formally tracking risks to the HPIB schedule that occur after the start of lead ship construction, the program may not sufficiently identify and take timely risk management actions to address this key phase in the acquisition. By not conducting a robust analysis to inform whether the HPIB’s schedule baselines are feasible, the Coast Guard is not providing Congress with realistic dates of when the ships may be delivered before requesting funding for the construction of the lead ship. While the Coast Guard is planning a service life extension of the Polar Star starting in 2020, as noted above, the HPIB’s optimistic schedule may put the Polar Star at risk of needing to operate longer than planned. The HPIB schedule’s optimism also puts the Coast Guard at risk of not fully implementing a knowledge-based acquisition approach to meet its aggressive schedule goals. Our prior work on shipbuilding programs has shown that establishing optimistic program schedules based on insufficient knowledge can create pressure for programs to make sacrifices elsewhere. For example, we found that the Navy moved forward with construction with incomplete designs and when key equipment was not available when needed. 
Additionally, some Navy programs pushed technology development into the design phase or pushed design into the construction phase. These concurrencies often result in costly rework to accommodate changes to the design, further delays, or lower than promised levels of capability.

Polar Icebreaker Program’s Anticipated Contract May Be Funded by Both the Coast Guard and the Navy, but They Have Not Fully Documented Responsibility for Addressing Cost Growth

According to the IPO, the HPIB’s anticipated detail design and construction contract may be funded by both Coast Guard and Navy appropriations, but how certain types of cost growth will be addressed between the Coast Guard and the Navy has not been fully documented. The HPIB’s acquisition strategy anticipates award of a contract that will have options, includes efforts aimed at mitigating cost risks, and acknowledges the use of foreign suppliers to provide components and design services as allowable under statute and regulation. Since 2013, the program has received $360 million in funding, which includes both Coast Guard and Navy appropriations. Moving forward, it is unclear how much Coast Guard and Navy funding will be used to fund the contract. The Coast Guard and the Navy have an agreement in place for funding issues, but the agreement does not fully address how they plan to address cost growth on the program.

Acquisition Strategy Anticipates Use of Contract Options, Ways to Mitigate Cost Risks, and Foreign Suppliers

As part of the HPIB’s acquisition strategy, the Navy structured the detail design and construction of each of the ships as contract options in the March 2018 request for proposals. Specifically, the request for proposals structured the detail design and construction work into four distinct contract line items, all under a fixed-price incentive (firm-target) contract type.
Generally, this contract type allows the government and shipbuilder to share cost savings and risk through a specified profit adjustment formula, also known as a share ratio; ties the shipbuilder’s ability to earn a profit to performance by decreasing the shipbuilder’s profit after costs reach the agreed-upon target cost; and, subject to other contract terms, fixes the government’s maximum obligation to pay at a ceiling price. Table 5 provides information on the HPIB’s request for proposals as of May 2018. According to the request for proposals, in addition to potentially earning profit by controlling costs, the shipbuilder may also earn up to $34 million in incentives for achieving other program goals, such as quality, early delivery, reducing operations and sustainment costs, and production readiness. IPO officials stated that they based the incentives on prior Navy shipbuilding contract examples. However, in March 2017, we found that the Navy had not assessed the effectiveness of added incentives for the reviewed fixed-price incentive contracts in terms of improved contract outcomes across the applicable shipbuilding portfolio. As a result, we recommended that DOD direct the Navy to conduct a portfolio-wide assessment of the Navy’s use of additional incentives on fixed-price incentive contracts across its shipbuilding programs. DOD concurred with this recommendation, but the Navy has not yet taken steps to implement it.

As part of the HPIB acquisition strategy, the IPO is striving to control costs on the detail design and construction contract through the following:

A fixed-price incentive (firm-target) contract type. Because the shipbuilder’s profit is linked to performance, fixed-price incentive contracts provide an incentive for the shipbuilder to control cost.
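The share-ratio and ceiling-price mechanics of a fixed-price incentive (firm-target) contract can be sketched numerically. The target cost, target profit, and other figures below are invented for illustration and are not the HPIB contract’s terms; the 50/50 share ratio and 120 percent ceiling mirror the Better Buying Power norms discussed in this report, and real FPI contracts include additional terms this simplified model omits:

```python
# Simplified fixed-price incentive (firm-target) price adjustment.
# Target cost, target profit, share ratio, and ceiling are invented.

def fpi_price(actual_cost,
              target_cost=600.0,    # $ millions (illustrative)
              target_profit=60.0,
              gov_share=0.5,        # 50/50 share ratio
              ceiling_ratio=1.20):  # ceiling at 120% of target cost
    """Return (price paid by government, shipbuilder profit)."""
    ceiling = target_cost * ceiling_ratio
    # Overruns (or underruns) adjust profit per the contractor's share.
    profit = target_profit - (1 - gov_share) * (actual_cost - target_cost)
    # The government's obligation is capped at the ceiling price.
    price = min(actual_cost + profit, ceiling)
    return price, price - actual_cost

for cost in (550, 600, 660, 800):
    price, profit = fpi_price(cost)
    print(f"actual cost {cost}: government pays {price:.0f}, "
          f"shipbuilder profit {profit:.0f}")
```

The sketch shows the incentive at work: underruns raise the shipbuilder’s profit, overruns between target cost and ceiling are shared, and costs above the ceiling fall entirely on the shipbuilder, consistent with the report’s description of overruns above the ceiling being the contractor’s responsibility.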
Most of the Navy’s proposed share ratios and ceiling prices for the detail design and construction work are consistent with DOD’s November 2010 Better Buying Power memo, which states a 50/50 share ratio and 120 percent ceiling price should be the norm, or starting point, for fixed-price incentive contracts.

Full and open competition. The Navy plans to competitively award the HPIB’s detail design and construction contract. From market research and industry engagement, the IPO determined that there were multiple viable competitors. In March 2017, we found that competition helped to strengthen the Navy’s negotiating position with shipbuilders when setting contract terms, such as the share line and ceiling price for fixed-price incentive type contracts.

Providing offerors the government’s estimated ship costs. The request for proposals does not set affordability caps but does include information on the government’s estimated cost for the ships, including $746 million for the lead ship’s advance planning, engineering, detail design, and construction, and an average ship price of $615 million across all three ships. Navy contracting officials explained that offers will not be disqualified from the source selection solely for being higher than the estimated costs. Instead, the estimated costs provide the offerors with cost bounds to help appropriately scope the capabilities. For example, IPO officials stated that they are striving to appropriately size the integrated power plant so that it is generating sufficient power to meet key performance parameters but not so much power that it drives up the cost.

Inquiries on block buys and economic order quantity purchases. The Navy gave offerors an opportunity to provide the estimated savings that the government could achieve if it were to take a “block buy” approach in purchasing the ships or purchasing supplies in economic order quantities.
The Navy did not include a definition of “block buy” in the HPIB request for proposals synopsis. Based on our prior work, block buy contracting generally refers to special legislative authority that agencies seek on an acquisition-by-acquisition basis to purchase more than one year’s worth of requirements. The request for proposals synopsis stated a preference for submission of the estimated savings within 60 days of the release of the request for proposals, or by May 2018. As of June 2018, the Navy had not received any formal responses from industry on potential savings from block buys or economic order quantities. For the HPIB request for proposals, the Navy stated that any information on block buys or economic order quantities would be optional and would not be used as part of the evaluation of proposals submitted by offerors. Our prior work on block buy contracting approaches for the Littoral Combat Ship and F-35 Joint Strike Fighter programs found that the terms and conditions of the contracts affect the extent to which the government achieves savings under a block buy approach. For example, the Littoral Combat Ship’s block buy contracts indicated that a failure to fully fund the purchase of a ship in a given year would make the contract subject to renegotiation. DOD has pointed to this as a risk that the contractors would demand higher prices if DOD deviated from the agreed-to block buy plan.

In its HPIB acquisition strategy, the IPO has also considered the use of foreign suppliers as allowable under the law. According to the February 2018 HPIB acquisition plan, the HPIB must be constructed in a U.S. shipyard given statutory restrictions, including restrictions on construction of Coast Guard vessels and major components in foreign shipyards unless authorized by the President. However, foreign suppliers will be permitted to provide components and design services to the extent applicable statutes and regulations allow.
According to Coast Guard officials, foreign design firms have extensive expertise and knowledge to produce the design for HPIBs. As a result, the U.S. shipbuilders planning to submit proposals on the HPIB solicitation may partner with these foreign design firms when submitting proposals. Similarly, Coast Guard officials stated that the azimuthing propulsors that have the necessary power and ice classification for the HPIB are manufactured by foreign companies. Therefore, the selected shipbuilder may subcontract with these companies to acquire the propulsors. In addition, Navy contracting officials stated that the program did not need to obtain a waiver from the Buy American Act—which generally requires federal agencies to purchase domestic end products when supplies are acquired for use in the United States, and use domestic construction materials on contracts performed in the United States—for certain components. The Act includes exceptions, such as when the domestic end products or construction materials are unavailable in sufficient and reasonably available commercial quantities and of a satisfactory quality.

Program Has Received Both Coast Guard and Navy Funds, but Unclear How Program Will Be Funded Moving Forward

From 2013 through 2018, the program has received $360 million in funding—$60 million in the Coast Guard’s Acquisition, Construction, and Improvement appropriations (hereafter referred to as Coast Guard funding) and $300 million in the Navy’s Shipbuilding and Conversion, Navy advance procurement appropriations (hereafter referred to as Navy appropriations). In addition, according to Coast Guard officials, in fiscal year 2017, the Coast Guard reprogrammed $30 million in fiscal year 2016 appropriations for the HPIB from another program (see figure 11).
According to IPO and Navy contracting officials, the Navy plans to use $270 million of the $300 million in Navy appropriations to award the detail design and construction contract in fiscal year 2019, which would fund the advanced engineering, long lead time materials, and detail design work. Navy officials stated the remaining $30 million in Navy appropriations will be held in reserve for potential scope changes. Of the $60 million in Coast Guard funding, the IPO has used $41 million for program office costs and the February 2017 design study contracts, and plans to use the remaining $19 million for program office costs. Coast Guard officials stated that they used the $30 million in reprogrammed 2016 appropriations to fund the design studies, model testing, and Navy warfare center support. As the program prepares to award a contract worth billions of dollars if all the options are exercised, Congress, the Coast Guard, and the Navy face key funding considerations. These include the extent to which the program will be funded using Coast Guard and Navy appropriations in the future and whether each of the ships will be fully or incrementally funded. Navy contracting officials stated that by structuring the contract’s construction work as options, the contract has flexibility to accommodate any type of additional funding the program may receive. The National Defense Authorization Act for Fiscal Year 2018 authorized procurement of one Coast Guard heavy polar icebreaker vessel. The Navy did not request any funding in fiscal year 2019 for the HPIB, while the Coast Guard requested $30 million. Subsequently, after discretionary budget caps were relaxed by Congress, the Administration’s fiscal year 2019 budget addendum requested an additional $720 million in fiscal year 2019 Coast Guard appropriations for the program.
Although the Navy did not request fiscal year 2019 funding for the lead ship, and Navy officials told us they have no plans to budget for the HPIB program moving forward, Congress may still choose to appropriate funds for the HPIB to the Navy. For example, in fiscal years 2017 and 2018, the Navy did not request funding but received $150 million in appropriations each year for the HPIB (see figure 12). Additionally, the Coast Guard has been expressly authorized to use incremental funding for the HPIB. This authorization is reflected in the Coast Guard’s January 2018 affordability certification memo, submitted to DHS leadership. Such memos are required to certify that a program’s funding levels are adequate and identify tradeoffs needed to address any funding gaps. However, as noted above, with the addition of the Administration’s fiscal year 2019 budget request addendum, the Coast Guard requested $750 million in full funding for the lead ship. The Navy has informed us that it plans to award the advance planning, design, engineering, long lead time material contract line item with its $270 million in appropriations. Navy officials also told us they are in the process of determining whether the Navy needs to be authorized by Congress to use an incremental funding approach to fund the detail design and construction options if it does not receive full funding. According to the Office of Management and Budget’s A-11 budget circular, full funding helps to ensure that all costs and benefits of an acquisition are fully taken into account at the time decisions are made to provide resources. The circular goes on to say that when full funding is not followed, without certainty if or when future funding will be available, the result is sometimes poor planning, higher acquisition costs, cancellation of major investments, or the loss of sunk costs.
The circular, however, also notes that Congress may change the agency’s request for full funding to incremental funding to accommodate more projects in a year than would be allowed with full funding.

Plans to Address Cost Growth Not Fully Documented

Regardless of the funding strategy and which service funds the contract, the Coast Guard and the Navy do not have a clear agreement on how certain types of cost growth within the program will be addressed. The budgeting and financial management appendix of the July 2017 agreement between the Coast Guard and the Navy for the HPIB notes that any cost overruns will be funded by the originating source of the appropriation and be the responsibility of the organization that receives the funding. However, the Coast Guard and the Navy have interpreted “cost overruns” differently in the context of the agreement. Coast Guard and Navy officials agree that, given the fixed-price incentive contract type, the government’s share of cost overruns between the target cost and ceiling price (based on the share ratio) will be the responsibility of the organization that provided the funding for the contract line item. Navy officials also noted that because the contract type is fixed-price incentive, any cost overruns above the ceiling price are generally the responsibility of the contractor, not the government. However, the Coast Guard and the Navy have not addressed in an agreement how they plan to handle any cost growth stemming from changes to the scope, terms, and conditions of the HPIB detail design and construction contract. For example, if the Coast Guard or the Navy revises the program’s requirements, this could increase the scope and value of the contract and result in additional contract costs. It is unclear, in this instance, which organization would be responsible for paying for the additional costs.
Further, our 2005 work on Navy shipbuilding programs found that the most common causes of cost growth in these programs were related to design modifications, the need for additional and more costly materials, and changes in employee pay and benefits, some of which required changes in contract scope. IPO officials told us that unplanned changes to the program’s scope and any corresponding funding requests for unanticipated cost growth would require discussions and agreements with both Coast Guard and Navy leadership. Coast Guard and Navy officials stated that they are in the process of reviewing the July 2017 budget appendix of the agreement to clarify the definition of cost overruns and plan to finalize revisions no later than September 2018. Our prior work on implementing interagency collaborative mechanisms found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively, which can help better overcome significant differences when they arise. Different interpretations or disagreements on financial responsibility between the Coast Guard and the Navy on cost growth for the HPIB program could result in funding instability for the program, which could affect the program’s ability to meet its cost and schedule goals.

Conclusions

In the last several years, the Coast Guard and the Navy have made significant strides in their efforts to acquire heavy polar icebreakers. It has been over 40 years since the United States has recapitalized its aging heavy polar icebreaker fleet, and Congress has expressed the need for investment in the HPIB program to help ensure our continued presence in the polar regions. The Coast Guard and the Navy have taken steps to examine design risks and expressed commitment to design maturity before starting construction on the lead ship.
However, the Coast Guard and the Navy did not take key steps to reduce risks on the HPIB program before setting the HPIB’s program baselines—namely, conducting a preliminary design review, conducting a technology readiness assessment, developing a fully reliable cost estimate, and conducting analysis to determine a realistic schedule and risks to that schedule. By setting the program’s baselines prior to obtaining sufficient knowledge in the design, technologies, cost, and schedule of the HPIB, DHS, the Coast Guard, and the Navy are not establishing a sound business case for investing in the HPIB nor putting the program in a position to succeed. There is risk that the program will cost more than the planned $9.8 billion and the lead ship will not be delivered by 2023 as planned. Further, without clear agreement between the Coast Guard and the Navy on which service will be responsible for any cost growth on the HPIB, the program is at further risk of not meeting its ambitious goals. In the current budget environment, it is imperative that the Coast Guard and the Navy obtain sufficient acquisition knowledge and put together a sound business case before asking Congress and taxpayers to commit significant resources to the HPIB program.

Recommendations for Executive Action

We are making six recommendations total to the Coast Guard, DHS, and the Navy:

The Commandant of the Coast Guard should direct the polar icebreaker program to conduct a technology readiness assessment in accordance with best practices for evaluating technology readiness, identify critical technologies, and develop a plan to mature any technologies not designated to be at least TRL 7 before detail design of the lead ship begins.
(Recommendation 1)

The Commandant of the Coast Guard, in collaboration with the Secretary of the Navy, should direct the polar icebreaker program and NAVSEA 05C to update the HPIB cost estimate in accordance with best practices for cost estimation, including (1) developing risk bounds for all phases of the program lifecycle, and on the basis of these risk bounds, conducting risk and uncertainty analysis, as well as sensitivity analysis, on all phases of the program lifecycle, and (2) reconciling the results with an updated independent cost estimate based on the same technical baseline before the option for construction of the lead ship is awarded. (Recommendation 2)

The Commandant of the Coast Guard should direct the polar icebreaker program office to develop a program schedule in accordance with best practices for project schedules, including determining realistic durations of all shipbuilding activities and identifying and including a reasonable amount of margin in the schedule, to set realistic schedule goals for all three ships before the option for construction of the lead ship is awarded. (Recommendation 3)

The Commandant of the Coast Guard should direct the polar icebreaker program office to analyze and determine appropriate schedule risks that could affect the program after construction of the lead ship begins, include those risks in its risk management plan, and develop appropriate risk mitigation strategies. (Recommendation 4)

The DHS Under Secretary for Management should require the Coast Guard to update the HPIB acquisition program baselines prior to authorizing lead ship construction, after completion of the preliminary design review, and after it has gained the requisite knowledge on its technologies, cost, and schedule, as recommended above.
(Recommendation 5) The Commandant of the Coast Guard, in collaboration with the Secretary of the Navy, should update the financial management and budget execution appendix of the memorandum of agreement between the Coast Guard and the Navy to clarify and document agreement on how all cost growth on the HPIB program, including changes in scope, will be addressed between the Coast Guard and the Navy. (Recommendation 6) Agency Comments We provided a draft of this report to DHS and DOD for review and comment. In its comments, reproduced in appendix IV, DHS concurred with all six of our recommendations and identified actions it planned to take to address them. The Navy stated that it deferred to DHS and the Coast Guard on responding to our recommendations. DHS, the Coast Guard, and the Navy also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Commandant of the Coast Guard, the Secretary of the Navy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines (1) the extent to which the heavy polar icebreaker (HPIB) program has taken steps to develop mature designs and technologies consistent with best practices, (2) the extent to which the HPIB program has taken steps to set realistic cost and schedule estimates, and (3) the status of the HPIB program’s contracting efforts and funding considerations. 
To assess the extent to which the HPIB program has taken steps to develop mature designs and technologies consistent with GAO-identified best practices, we reviewed program performance and design requirements, including the program’s operational requirements documents, system specifications such as the power plant, propulsion system, and hull, and technical baseline; the program’s alternatives analysis study, tailored systems engineering plan, test and evaluation master plan, and model testing results; cooperative agreements with Canada related to the HPIB; excerpts from industry studies; and the March 2018 detail design and construction request for proposals and subsequent amendments. We also reviewed relevant Department of Homeland Security (DHS), Coast Guard, and Department of Defense (DOD) acquisition guidance and instructions. From these documents, we determined the program’s design and technology efforts and compared them to GAO’s various best practices, including using a knowledge-based approach to shipbuilding, knowledge-based approach to major acquisitions, and evaluating technology readiness. We also interviewed knowledgeable officials from the Coast Guard’s Capabilities Directorate, Research and Development Center, and Marine Transportation Systems Directorate; DHS’s Science and Technology Directorate’s Office of Systems Engineering; the Canadian Coast Guard; and the National Science Foundation. To assess the extent to which the HPIB program has taken steps to set realistic cost and schedule estimates, we determined the extent to which the estimates were consistent with best practices as identified in GAO’s Cost Estimating and Assessment and Schedule Assessment guides. 
To assess the cost estimate, we reviewed the HPIB’s January 2018 lifecycle cost estimate used to support the program’s initial cost baselines, Coast Guard and Navy documentation supporting the estimate, relevant program briefs to Coast Guard leadership, and HPIB program documentation containing cost, schedule, and risk information. We met with Naval Sea Systems Command (NAVSEA) officials responsible for developing the cost estimate to understand the processes used by the cost estimators, clarify information, and request additional documentation to support the estimate. Because we did not have direct access to the HPIB cost model, we observed portions of the model during a presentation and discussion with Navy cost estimators. We also reviewed the Naval Center for Cost Analysis’ September 2017 independent cost estimate for the HPIB program, the DHS Cost Analysis Division’s January 2018 independent cost assessment of the HPIB lifecycle cost estimate, and DHS Office of Systems Engineering’s January 2018 technical assessment of the HPIB program. We also conducted interviews with officials from the Naval Center for Cost Analysis, DHS Cost Analysis Division, and the DHS Office of Systems Engineering. To assess the program’s schedule, we compared the HPIB program’s schedule, including the program’s initial schedule baselines, delivery schedules from the HPIB’s request for proposals for the detail design and construction contract, and integrated master schedule, to selected GAO best practices for project schedules, including establishing the duration of activities, ensuring reasonable total buffer or margin, and conducting a schedule risk analysis. To specifically assess the HPIB lead ship’s 3-year construction schedule estimate, we reviewed the Coast Guard’s and the Navy’s analysis supporting the HPIB schedule. We did not assess the reliability of the historical ship construction data the Coast Guard and Navy used for this analysis. 
We also compared the HPIB lead ship’s 3-year construction schedule to historical construction cycle times of lead ships among a nongeneralizable sample of major Navy and Coast Guard shipbuilding programs. We selected programs that were active within the last 10 years and have completed construction of the lead ship. We also included the Coast Guard’s Healy medium polar icebreaker, even though it is not a recent shipbuilding program, because it is the most recent polar icebreaker to be built in the United States. We excluded the Coast Guard Fast Response Cutter, Navy submarines, and Navy aircraft carriers because we determined that their size and complexity did not make them reasonable comparisons to the HPIB for construction times. This resulted in an analysis of construction schedules for 10 shipbuilding programs. We obtained data on these programs’ construction schedules from program documentation, such as acquisition program baselines, Navy selected acquisition reports, and Navy and Coast Guard budget documentation. We selected only lead ships for comparison because we have found in our prior work that schedule delays are amplified for lead ships in a class. Lead ships are thus more comparable to the HPIB lead ship than follow-on ships. We reviewed ship displacement data from the Naval Vessel Registry and the Coast Guard to control for the size of the ships. To assess the reliability of Naval Vessel Registry data, we reviewed the Navy’s data collection and database maintenance documentation, cross-checked select data across Navy websites, and interviewed cognizant Navy officials regarding internal controls for the database. We determined the ship displacement data were reliable for our purposes. 
To assess the degree to which the 6-month schedule margin that the HPIB baseline affords the lead ship is in keeping with historical ship delivery delays, we reviewed Coast Guard, Navy, and DHS acquisition documentation from a nongeneralizable sample of major Navy and Coast Guard shipbuilding programs. We selected programs active within the last 10 years and analyzed changes in lead ship delivery dates. We excluded Navy submarines and aircraft carriers because we determined that their size and complexity did not make them reasonable comparisons to the HPIB for delivery delays. We included programs that have not yet delivered their lead ships. This resulted in an analysis of construction schedules for 12 shipbuilding programs. For delivered ships, we used the actual delivery date; for ships not yet delivered, such as the Offshore Patrol Cutter and DDG 1000, we used the most recent, planned delivery date in the program baseline. To determine the status of the HPIB program’s contracting efforts and funding considerations, we reviewed the program’s acquisition plan, March 2018 request for proposals and subsequent amendments, certification of funds memorandum, budget justifications, lifecycle cost estimate, and the Coast Guard’s fiscal year 2019 Capital Investment Plan. We also interviewed knowledgeable officials from the Coast Guard’s Office of Budget and Programs, NAVSEA Contracts Directorate, NAVSEA Comptroller Directorate, and the Office of the Assistant Secretary of Navy’s Financial Management and Comptroller. For all objectives, we reviewed relevant DHS and Coast Guard policies and interviewed knowledgeable officials from DHS, the Coast Guard’s and the Navy’s HPIB integrated program office, and ship design team. We conducted this performance audit from August 2017 to September 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Shipbuilding Phases There are four primary phases in shipbuilding: pre-contracting activities and contract award, detail design and planning, construction, and post-delivery activities (see table 6). Appendix III: Summary of Results of Heavy Polar Icebreaker Program’s Cost Estimate Assessed against GAO’s Best Practices The GAO Cost Estimating and Assessment Guide (GAO-09-3SP) was used as criteria in this analysis. Using this guide, GAO cost experts assessed the heavy polar icebreaker (HPIB) program’s lifecycle cost estimate against measures consistently applied by cost-estimating organizations throughout the federal government and industry that are considered best practices for developing reliable cost estimates. For our reporting purposes, we grouped these best practices into four categories—or characteristics—associated with a reliable cost estimate: comprehensive, accurate, well documented, and credible. A cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are substantially or fully met. If any of the characteristics are not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable. After reviewing documentation the Navy submitted for its cost estimate, conducting interviews with the Navy’s cost estimators, and reviewing other relevant HPIB cost documents, we found the HPIB lifecycle cost estimate substantially met three and partially met one characteristic of reliable cost estimates. 
We determined the overall assessment rating by assigning each individual rating a number: Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5. Then, we took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. The resulting average becomes the Overall Assessment as follows: Not Met = 1.0 to 1.4, Minimally Met = 1.5 to 2.4, Partially Met = 2.5 to 3.4, Substantially Met = 3.5 to 4.4, and Met = 4.5 to 5.0. See table 7 for a high level summary of each best practice and the reasons for the overall scoring. Appendix IV: Comments from the Department of Homeland Security Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff members made key contributions to this report: Rick Cederholm (Assistant Director), Claire Li (Analyst-in-Charge), Peter Anderson, Brian Bothwell, Juaná Collymore, Laurier Fish, Kristine Hassinger, Karen Richey, Miranda Riemer, Roxanna Sun, David Wishard, and Samuel Woo.
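As a minimal illustration of the appendix III scoring scheme (map individual ratings to the numbers 1 through 5, average them per characteristic, bucket the average, and require every characteristic to be at least substantially met for the estimate to be reliable), the logic can be sketched as follows. The characteristic names come from the report, but the individual practice ratings shown are hypothetical examples, not GAO's actual assessment data.

```python
# Hypothetical sketch of the appendix III overall-assessment scoring scheme.
RATING_VALUES = {"Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
                 "Substantially Met": 4, "Met": 5}

def overall_assessment(ratings):
    """Average the individual best-practice ratings and bucket the average
    using the ranges stated in appendix III."""
    avg = sum(RATING_VALUES[r] for r in ratings) / len(ratings)
    if avg < 1.5:
        return "Not Met"            # 1.0 to 1.4
    if avg < 2.5:
        return "Minimally Met"      # 1.5 to 2.4
    if avg < 3.5:
        return "Partially Met"      # 2.5 to 3.4
    if avg < 4.5:
        return "Substantially Met"  # 3.5 to 4.4
    return "Met"                    # 4.5 to 5.0

def estimate_is_reliable(characteristic_assessments):
    """An estimate is reliable only if every characteristic is
    substantially or fully met."""
    return all(a in ("Substantially Met", "Met")
               for a in characteristic_assessments.values())

# Hypothetical example: three characteristics substantially met or met,
# one (credible) only partially met, so the estimate is not fully reliable.
example = {
    "comprehensive": overall_assessment(
        ["Met", "Substantially Met", "Substantially Met"]),
    "accurate": overall_assessment(["Met", "Met", "Substantially Met"]),
    "well documented": overall_assessment(
        ["Substantially Met", "Substantially Met", "Partially Met"]),
    "credible": overall_assessment(
        ["Partially Met", "Partially Met", "Minimally Met"]),
}
```

This mirrors the pattern reported for the HPIB estimate, where a single partially met characteristic (credibility) was enough to keep the estimate from being considered fully reliable.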
Why GAO Did This Study To maintain heavy polar icebreaking capability, the Coast Guard and the Navy are collaborating to acquire up to three new heavy polar icebreakers through an integrated program office. The Navy plans to award a contract in 2019. GAO has found that before committing resources, successful acquisition programs begin with sound business cases, which include plans for a stable design, mature technologies, a reliable cost estimate, and a realistic schedule. Section 122 of the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to assess issues related to the acquisition of the icebreaker vessels. In addition, GAO was asked to review the heavy polar icebreaker program's acquisition risks. This report examines, among other objectives, the extent to which the program is facing risks to achieving its goals, particularly in the areas of design maturity, technology readiness, cost, and schedule. GAO reviewed Coast Guard and Navy program documents, analyzed Coast Guard and Navy data, and interviewed knowledgeable officials. What GAO Found The Coast Guard—a component of the Department of Homeland Security (DHS)—did not have a sound business case in March 2018, when it established the cost, schedule, and performance baselines for its heavy polar icebreaker acquisition program, because of risks in four key areas: Design. The Coast Guard set program baselines before conducting a preliminary design review, which puts the program at risk of having an unstable design, thereby increasing the program's cost and schedule risks. While setting baselines without a preliminary design review is consistent with DHS's current acquisition policy, it is inconsistent with acquisition best practices. Based on GAO's prior recommendation, DHS is currently evaluating its policy to better align technical reviews and acquisition decisions. Technology. 
The Coast Guard intends to use proven technologies for the program, but did not conduct a technology readiness assessment to determine the maturity of key technologies prior to setting baselines. Coast Guard officials indicated such an assessment was not necessary because the technologies the program plans to employ have been proven on other icebreaker ships. However, according to best practices, such technologies can still pose risks when applied to a different program or operational environment, as in this case. Without such an assessment, the program's technical risk is underrepresented. Cost. The lifecycle cost estimate that informed the program's $9.8 billion cost baseline substantially met GAO's best practices for being comprehensive, well-documented, and accurate, but only partially met best practices for being credible. The cost estimate did not quantify the range of possible costs over the entire life of the program. As a result, the cost estimate was not fully reliable and may underestimate the total funding needed for the program. Schedule. The Coast Guard's planned delivery dates were not informed by a realistic assessment of shipbuilding activities, but rather driven by the potential gap in icebreaking capabilities once the Coast Guard's only operating heavy polar icebreaker—the Polar Star—reaches the end of its service life (see figure). GAO's analysis of selected lead ships for other shipbuilding programs found the icebreaker program's estimated construction time of 3 years is optimistic. As a result, the Coast Guard is at risk of not delivering the icebreakers when promised and the potential gap in icebreaking capabilities could widen. What GAO Recommends GAO is making six recommendations to the Coast Guard, DHS, and the Navy. 
Among other things, GAO recommends that the program conduct a technology readiness assessment, re-evaluate its cost estimate and develop a schedule according to best practices, and update program baselines following a preliminary design review. DHS concurred with all six of GAO's recommendations.
Background Air Force RPA Aircrews RPA aircrews consist of a pilot and a sensor operator. The Air Force in most cases assigns officers to fly its RPAs. The Air Force relied solely on manned aircraft pilots to fly remotely piloted aircraft until 2010, when it established an RPA pilot career field (designated as Air Force Specialty Code 18X) for officers trained to fly only RPAs. As of December 2013, approximately 42 percent of the RPA pilots were temporarily assigned manned aircraft pilots and manned aircraft pilot training graduates. Both of those groups of RPA pilots are temporarily assigned to fly RPAs with the assumption that after their tour they will return to flying their manned aircraft. By comparison, as of September 2018, manned aircraft pilots and manned aircraft pilot training graduates comprised only 17 percent of the RPA pilots. Further, the proportion of permanent RPA pilots has increased from 58 percent of all RPA pilots in December 2013 to 83 percent as of September 2018, as shown in figure 1. Additionally, Air Force enlisted personnel operate the RPAs’ sensors, which provide intelligence, surveillance, and reconnaissance capabilities. As crewmembers, RPA sensor operators assist the RPA pilot with all aspects of aircraft use, such as tracking and monitoring airborne, maritime, and ground objects and continuously monitoring the aircraft and weapons systems status. Officer Promotion Process The Defense Officer Personnel Management Act, as amended, created a standardized system for managing the promotions for the officer corps of each of the military services. Pursuant to the established promotion system, the secretaries of the military departments must establish the maximum number of officers in each competitive category that may be recommended for promotion by competitive promotion boards. 
Within the Air Force, there are groups of officers with similar education, training, or experience, and these officers compete among themselves for promotion opportunities. There are several competitive categories, including one, called the Line of the Air Force, that contains the bulk of Air Force officers and includes RPA pilots as well as pilots of manned aircraft and officers in other operations-oriented careers. To determine the best-qualified officers for promotion to positions of increased responsibility and authority, the Air Force appoints senior officers to serve as members of a promotion selection board for each competitive category of officer in the Air Force. Promotion selection boards consist of at least five active-duty officers who are senior in grade to the eligible officers and who reflect the eligible population with respect to minorities and women, as well as career field, aviation skills, and command, in an attempt to provide a balanced perspective. Promotion boards convene at the Air Force Personnel Center headquarters to perform a subjective assessment of each officer’s relative potential to serve in the next higher grade by reviewing the officer’s entire selection folder. This “whole-person concept” involves the assessment of such factors as job performance, professional qualities, leadership, job responsibility, depth and breadth of experience, specific achievements, and academic and professional military education. Developmental Education Program Selection Process The Air Force developmental education programs expand officers’ expertise and knowledge and provide a path that helps to ensure that personnel receive the appropriate level of education throughout their careers. Officers have three opportunities to compete for intermediate developmental education programs, which focus on warfighting within the context of operations and leader development, such as at the Air Command and Staff College. 
Officers have four opportunities to compete for senior developmental education programs, such as at the Air War College, which are designed to educate senior officers to lead at the strategic level in support of national security, and in joint interagency, intergovernmental and multinational environments. A subset of developmental education is Professional Military Education, which includes resident and non-resident attendance options open to officers in both the intermediate and senior developmental education programs. Nonresident programs exist to provide individuals who have not completed resident programs an opportunity to complete them via correspondence, seminar, or other approved methods. Prior to 2017, officers who were identified by their promotion board as a developmental education candidate or “selectee” were assured of the opportunity to attend some form of developmental education in-resident program. However, in March 2017, the Air Force announced changes to its nomination process for officer developmental education by separating in-residence school selection status from promotion decisions. Since that time, commanders nominate candidates for in-residence developmental education programs based on individual performance. Various Career Assignments for Officers with Aviation Expertise Officers with aviation expertise, including RPA pilots, at various points in their careers, may rotate through both flying and nonflying positions to broaden their career experiences. Operational positions, whether flying or nonflying, include those positions that exist primarily for conducting a military action or carrying out a strategic, tactical, service, training or administrative military mission. Operational positions include a range of flying positions, such as, for RPA pilots, operating aircraft to gather intelligence or conduct surveillance, reconnaissance or air strikes against a variety of targets. 
Operational positions that are non-flying positions could include assignments as a close-air-support duty officer in an Air Operations Center. Non-operational staff positions are generally non-flying positions and include assignments to headquarters or combatant command positions. Certain non-operational staff positions can be filled only by qualified pilots. Other non-operational positions are more general in nature and are divided among officer communities to help carry out support activities, training functions, and other noncombat related activities in a military service. These positions could include positions such as a recruiter, working as an accident investigator, advisor to foreign militaries, or a policy position at an Air Force major command. The Air Force views nonoperational staff positions as a means to develop leaders with the breadth and depth of experience required at the most senior levels inside and outside the Air Force. Roles and Responsibilities Related to Aircrew Management Various offices within the Air Force have roles and responsibilities for the management of aircrew positions and personnel. The Deputy Chief of Staff for Operations is to establish and oversee policy to organize, train and equip forces for the Department of the Air Force. This specifically includes the responsibility for all matters pertaining to aircrew management. The Directorate of Operations is responsible for developing and overseeing the implementation of policy and guidance governing aircrew training, readiness, and aircrew requirements. The directorate is the approval authority for aircrew distribution plans, rated allocation oversight and any other areas that have significant aircrew management implications. 
The Operational Training Division produces the official Air Force aircrew personnel requirements projections, and in conjunction with the Military Force Policy Division, develops and publishes the Rated Management Directive, formerly known as the Rated Staff Allocation Plan, as approved by the Chief of Staff of the Air Force, which is designed to meet near-term operational as well as long-term leadership development requirements. The Office of the Deputy Chief of Staff for Manpower, Personnel, and Services has responsibilities that include developing personnel policies, guidance, programs, and other initiatives to meet the Air Force’s strategic objectives, to include accessions, assignments, retention, and career development. Within the Directorate of Force Management Policy, the Force Management Division analyzes officer, enlisted and civilian personnel issues. The division also maintains a variety of computer models and databases to analyze promotion, retention, accession, compensation and separation policy alternatives. Additionally, it is responsible for providing official aircrew personnel projections for use in various management analyses. The Air Force Personnel Center, one of three field-operating agencies reporting to the Deputy Chief of Staff of the Air Force, Manpower, Personnel and Services, conducts military and civilian personnel operations such as overseeing performance evaluations, promotions, retirements, separations, awards, decorations and education. The Center also directs the overall management and distribution of both military and civilian personnel. 
Since 2013 RPA Pilots Have Been Promoted and Nominated for Education Opportunities at Rates Generally Similar to Pilots in Other Fields RPA Pilots Have Been Promoted at Rates Generally Similar to Those of Pilots in Other Career Fields Based on our analysis of Air Force promotion data, RPA pilots have been promoted at rates generally similar to the promotion rates of pilots in other career fields since 2013. However, it is important to note that because the population of RPA pilots eligible for promotion was smaller than other pilot populations, the promotion of one or two RPA pilots could have a large effect on their promotion rate. For example, the RPA pilot promotion rates were within 10 percentage points of the promotion rates for the other types of pilots in 8 of the 10 promotion boards to major and to lieutenant colonel held during that time frame. RPA pilot promotion rates from captain to major were generally similar to the promotion rates for other pilots from 2014 through 2017, as shown in figure 2. For example, in 2014, 94 percent of eligible RPA pilots (29 of 31), bomber pilots (47 of 50), and fighter pilots (189 of 201), and 91 percent of eligible mobility pilots (355 of 388), were promoted from captain to major. This is an improvement in promotion rates for RPA pilots compared to 2006 through 2012, when RPA pilot promotion rates fell below those for all other pilots in 5 of the 7 promotion boards held. Additionally, the promotion rates for RPA pilots from major to lieutenant colonel relative to other types of pilots in 2013 through 2017 showed a similar improvement compared to 2006 through 2012, as shown in figure 3. For example, in 2017, 75 percent of eligible RPA pilots (15 of 20) were promoted, which is generally similar to the promotion rates for the other pilots—78 percent for bomber pilots (18 of 23), 83 percent for fighter pilots (75 of 90), and 72 percent for mobility pilots (143 of 199). 
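The promotion rates cited in the figures are simple promoted-to-eligible ratios rounded to whole percentages. As a small illustration using the board counts reported above:

```python
# Promotion rates in this report are promoted / eligible, expressed as a
# whole percentage. Counts below come from the 2014 captain-to-major and
# 2017 major-to-lieutenant-colonel boards cited in the text.

def promotion_rate(promoted, eligible):
    """Percentage of eligible officers promoted, rounded to a whole number."""
    return round(promoted / eligible * 100)

# 2014 captain-to-major board
print(promotion_rate(29, 31))    # RPA pilots: 94
print(promotion_rate(355, 388))  # mobility pilots: 91

# 2017 major-to-lieutenant-colonel board
print(promotion_rate(15, 20))    # RPA pilots: 75
print(promotion_rate(143, 199))  # mobility pilots: 72
```

The small denominators for RPA pilots are why the report cautions that one or two promotions can swing the RPA rate considerably: moving a single promotion in the 15-of-20 case changes the rate by 5 percentage points, versus less than 1 point in the 143-of-199 case.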
However, in 7 of the 8 promotion boards held from 2006 through 2012, RPA pilot promotion rates from major to lieutenant colonel fell below the promotion rates for all other pilots. The one exception to the promotion rates being generally similar was the rate at which RPA pilots were promoted from lieutenant colonel to colonel. In this case, the rates for RPA pilots diverged notably from the promotion rates of bomber, fighter, and mobility pilots from 2013 to 2017. For example, in 2016, 1 of the 5 (20 percent) eligible RPA pilots was promoted to colonel. In contrast, 13 of 21 (62 percent) bomber pilots, 32 of 51 (63 percent) fighter pilots, and 34 of 65 (52 percent) mobility pilots were promoted from lieutenant colonel to colonel. However, the promotion rates of RPA pilots from lieutenant colonel to colonel that we calculated should be considered cautiously, as fewer than 10 RPA pilots were eligible for promotion boards each year through this time period. Because of the small number of eligible RPA pilots, the promotion of one or two officers could have a large effect on the promotion rate. In April 2014, we reported that Air Force officials attributed the low RPA pilot promotion rates from 2006 through 2012 generally to the process that the Air Force used to staff RPA pilot positions at that time. Specifically, they stated that commanders generally transferred less competitive pilots from other pilot career fields to RPA squadrons to address the increased demand. Air Force officials also stated that these officers generally had in their records fewer of the factors that the Air Force Personnel Center identified as positively influencing promotions than their peers. They said that because the bulk of the RPA pilots who competed for promotion during the time of our previous review had been transferred using this process, RPA pilots had been promoted at lower rates than their peers. 
Air Force officials stated that they believed the trend of increased promotion rates for RPA pilots from 2013 through 2017 mostly reflected the change in the population of eligible pilots, who were recruited and specialized as RPA pilots (i.e., the 18X career field). According to Air Force officials, the creation and establishment of this career field resulted in an increase in the number of skilled and more competitive promotion candidates. Specifically, as of September 2018, permanent RPA pilots outnumbered all other types of pilots serving as RPA pilots combined. RPA Pilots Have Been Nominated to Developmental Education Programs at Rates Similar to Pilots in Other Career Fields RPA pilots were nominated to attend developmental education programs, such as professional military education, at rates similar to the rates for other pilots from academic years 2014 through 2018, according to our analysis of Air Force data. An officer’s attendance at developmental education programs can be a factor taken into consideration when the officer is assessed for promotion. Our analysis showed that, for the academic years 2014 through 2018, nomination rates for RPA pilots to Intermediate and Senior Developmental Education programs combined ranged from a low of 25 percent for academic year 2016 to a high of 31 percent for academic year 2015. In comparison, nomination rates across the same time period for pilots in other career fields ranged from a low of 21 percent for mobility pilots for academic year 2016 to a high of 35 percent for fighter pilots for academic year 2014. Table 1 provides the various nomination rates for each of the different types of pilots that we analyzed. RPA Sensor Operators Have Been Promoted at Rates Similar to Other Enlisted Servicemembers The Air Force promoted enlisted RPA sensor operators at a rate similar to the rates of all enlisted servicemembers, according to our analysis of Air Force promotion data. 
Specifically, the Air Force promoted an average of 100 RPA sensor operators (or an average of 26 percent) annually for the period from 2013 through 2017. Similarly, the Air Force annually promoted an average of approximately 27,000 enlisted personnel (or an average of 25 percent) for the same period. Our analysis showed that in 2013 through 2017, promotion rates for RPA sensor operators ranged from a low of 18 percent in 2014 to a high of almost 35 percent in 2017. The promotion rates across the same time period for all other enlisted servicemembers ranged from a low of approximately 19 percent in 2014 to a high of 32 percent in 2017. Table 2 provides the various promotion rates that we analyzed. Air Force enlisted servicemembers in the lowest four levels (grades E1-E4) are selected for promotion based on time in grade and time in service. Selection for promotion to the next two levels, known as the non-commissioned officer levels (grades E5 and E6), is based on the Weighted Airman Promotion System. This system provides weighted points for an individual’s performance record and service decorations received, and the results of tests to assess an individual’s promotion fitness and job skills and knowledge. Selection for promotion to the senior non-commissioned officer level (grades E7-E9) is based on the same Weighted Airman Promotion System plus the results from a central board evaluation. Servicemembers eligible for promotion to the non-commissioned ranks are assessed, listed from highest to lowest score, and offered promotion if they fall above a specific cutoff score established to meet quotas within each career field and for each rank. While enlisted servicemembers must pass knowledge and skills tests to qualify for promotions, officials explained that the resulting promotion rates essentially reflect requirements and are not indicative of competitiveness across career fields, as officer promotion rates are. 
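The quota-and-cutoff selection described above can be sketched as follows. This is a simplified illustration only: the member names, scores, and quota are hypothetical, and the actual Weighted Airman Promotion System point computation (performance records, decorations, test results) is not reproduced here.

```python
# Simplified sketch of quota-based enlisted promotion selection: eligible
# members are ranked by composite score and promoted down the list until
# the career field's quota for the next grade is filled. Scores and quota
# below are hypothetical.

def select_for_promotion(scores_by_member, quota):
    """Return (promoted members, cutoff score) for a given quota."""
    ranked = sorted(scores_by_member.items(),
                    key=lambda kv: kv[1], reverse=True)
    selected = ranked[:quota]  # highest composite scores first
    cutoff = selected[-1][1] if selected else None
    return [member for member, _ in selected], cutoff

eligible = {"A": 312.5, "B": 298.0, "C": 341.2, "D": 275.4, "E": 305.9}
promoted, cutoff = select_for_promotion(eligible, quota=2)
# promoted -> ["C", "A"]; cutoff -> 312.5
```

This is why, as the report notes, enlisted promotion rates reflect requirements rather than competitiveness: the number promoted is fixed by the quota (for example, 128 of 370 eligible RPA sensor operators in 2017), regardless of how strong the rest of the eligible cohort is.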
Officials stated that enlisted servicemember promotions are based on the service's numeric personnel requirements for each enlisted grade. To consider an enlisted servicemember for promotion from among those who are eligible, a vacancy must first exist at the next higher grade within that servicemember's occupational area, known as their Air Force Specialty Code. For example, in 2017, the Air Force required promotions for 128 RPA sensor operators, and officials promoted that many enlisted servicemembers from the cohort of 370 eligible servicemembers.

Air Force Assigned Non-operational Staff Positions Requiring RPA Pilots at High Rates Since 2013

For each year since 2013, the Air Force has assigned over 75 percent of the non-operational staff positions that require an RPA pilot to the organizations that had requested those positions, according to our analysis of service headquarters data. However, the overall number of non-operational staff positions that require an RPA pilot is about one-tenth of the number of those requiring pilots in other career fields. For example, in fiscal year 2018 the Air Force had 83 non-operational staff positions that required an RPA pilot, compared to 330 positions requiring fighter pilots. Air Force officials stated that the number of RPA positions was smaller than for other pilots because the career field is relatively new and still growing. Non-operational staff positions are generally non-flying positions and include assignments to headquarters or combatant command positions. Certain non-operational staff positions can be filled only by qualified pilots. Other non-operational positions are more general in nature and are divided among officer communities in a military service.
Officers with aviation expertise, including RPA pilots, may rotate through both flying and non-flying positions at various points in their careers to broaden their career experiences, and Air Force officials stated that staff assignments are essential to the development of officers who will assume greater leadership responsibilities. Headquarters Air Force prepares allocation or "assignment" plans to provide positions requiring aviator expertise to various Air Force commands and other entities. Under this process, these organizations identify the number of non-operational staff positions requiring aviator expertise (e.g., pilots) they require and indicate the type of aviator expertise needed to fill those positions (e.g., fighter, bomber, RPA). Headquarters Air Force then determines the extent to which the staff position requirements can be met in accordance with senior leadership priorities designed to equitably manage the shortage of officers with aviation expertise. The results of this process are outlined in the Air Force's annual Rated Management Directive, which reinforces each organization's flexibility for using its entitlements in non-operational staff and other positions. In some instances, the Air Force is able to assign enough positions to an organization to meet nearly all of its non-operational staff position requirements. For the purposes of our analyses, the assignment rate is the number of positions assigned compared to the number of positions the organization required. For example, in fiscal year 2018 the Air Force assigned 99 percent of the non-operational staff positions that require an RPA pilot to the requesting entities. In other instances, the Air Force assignment rate of non-operational staff positions may be much lower because of competing management priorities or shortages of personnel in a career field. As a result, the Air Force's assignment of staff positions can vary across the different career fields.
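The assignment-rate calculation defined above is a simple ratio; the short sketch below makes the definition concrete. The fiscal year 2018 RPA figures (83 required positions, a 99 percent assignment rate) come from the report; the count of 82 assigned positions is our inference from those two numbers, not a reported figure.

```python
# Assignment rate as defined in the analysis above:
# positions assigned to an organization / positions the organization required.

def assignment_rate(assigned: int, required: int) -> float:
    """Fraction of required non-operational staff positions that were assigned."""
    if required == 0:
        return 0.0  # an organization with no requirement has no meaningful rate
    return assigned / required

# FY 2018: 83 positions required an RPA pilot; a 99 percent rate implies
# about 82 of them were assigned (82 is inferred, not reported).
print(f"{assignment_rate(82, 83):.0%}")  # prints "99%"
```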
For example, the Air Force fighter pilot career field has had fewer fighter pilots than its authorized number since 2013. Therefore, the Air Force assignment rate for staff positions requiring fighter pilots is significantly lower than the rate for staff positions requiring other types of pilots. For example, in fiscal year 2017, the Air Force assignment rate for staff positions requiring a fighter pilot was 18 percent, less than a quarter of the rate for staff positions requiring an RPA pilot, as shown in table 3.

The Air Force Has Not Reviewed Its Oversight Process to Manage Its Non-operational Staff Positions That Require Aviator Expertise

The Air Force has not reviewed its oversight process to ensure that it is effectively and efficiently managing its review of non-operational staff positions that require aviator expertise, such as RPA pilots. Air Force officials explained that the oversight process for managing these positions consists of a time-consuming, labor-intensive exchange of emails and spreadsheets with 57 organizations, such as various Air Force major commands like the Air Combat Command, the Air Force Special Operations Command, and the National Guard Bureau. According to these officials, this process consists of the maintenance and exchange of spreadsheets and briefing slides with information about every position found throughout the Air Force and in various other entities, all of which are required to be reviewed and validated annually. Additionally, this process is maintained by one official within Headquarters Air Force, who must exchange the spreadsheets via email approximately twice a year with officials from each of the organizations responsible for annually justifying their continued need for non-operational staff positions requiring aviator expertise.
Air Force officials stated that this process does not always produce complete and accurate information in a timely manner; in some instances the information produced is no longer relevant by the time a complete review of the positions is accomplished. Headquarters Air Force officials familiar with its oversight responsibilities stated that using a different system would more efficiently and effectively support their ability to manipulate, analyze, and share information among the applicable organizations and make informed decisions. For example, these officials explained that over the last 10 years, the Air Force drew down the number of squadrons but did not adequately cross-check that reduced number of squadrons against a revised number of staff positions required for support. Therefore, the number of non-operational staff positions was not adjusted and is now artificially high in some career fields, while other career fields may have fewer non-operational staff positions than needed. These officials added that as the new RPA pilot career field has developed, there has been no timely and widely accessible system of checks and balances to establish an accurate number of non-operational staff positions required to support the career field. Further, they said that using a different system that provides timelier, higher-quality information would enhance their ability to manage and make decisions regarding the appropriate mix of expensive pilots and others with aviator expertise between operational line positions and non-operational staff positions. They said this would better ensure that there is a reasonable range of non-operational staff positions required for each career field, such as the growing RPA pilot career field.
An October 2017 memorandum from the Air Force Chief of Staff stated that the number of non-operational staff positions that require aviation expertise must be brought into balance with the Air Force's ability to produce the appropriate number of officers with aviator expertise. The memorandum also stated that organizations were strongly encouraged to change their current requirements to meet available current force levels, including converting chronically unfilled non-operational staff positions requiring aviator expertise to positions specifically designated for RPA pilots. As a result of two separate reviews, Air Force officials identified hundreds of these positions that lacked adequate justification or qualifications to support the positions' requirement to be filled by officers with aviator expertise. For example, in August 2018, out of 2,783 non-operational staff positions, the Air Force found that 513 were evaluated as lacking adequate justification or mission qualifications to support the need for aviator expertise, and 61 positions were eliminated after further review. Prior to 2010, according to officials, Headquarters Air Force maintained a web-based management oversight system to review and approve the justifications for its non-operational staff positions requiring aviator expertise that allowed for wide access to, manipulation of, and timely analyses of information. Additionally, this former system provided multilevel coordination among Headquarters Air Force and its major commands for reviewing the justifications of all of the positions. According to Headquarters Air Force officials, the use of this management oversight system was discontinued in 2010 due to a decision to no longer fund the contractor maintaining the system.
In October 2018, officials from one of the Air Force's major commands confirmed that the current oversight system in use is time-consuming and does not readily support information analysis, and that plans to integrate it with another existing management system had not been carried out. The Headquarters Air Force official in charge of managing this process told us that he had submitted multiple requests over the last 3 years to integrate the information being managed with spreadsheets and emails into an existing personnel management system to improve the efficiency of the process. However, according to this official, higher priorities and funding issues have precluded the information from being integrated into another existing system. In September 2018, another Air Force official told us that the Program Management Office that manages a system into which the information could be integrated was behind schedule in implementing several other system updates. Because of these delays, the official acknowledged that no review has yet been done of what is needed to provide the most efficient management oversight of the information currently being managed via the spreadsheet process. The official said that before any actions could take place, a review of requirements and priorities would be needed to determine what changes could be made. Therefore, he said that there are no decisions or timelines available for reviewing a process that would provide the validation information for non-operational staff positions in a timelier and more widely accessible manner. Air Force instructions state that major commands are required to perform annual aircrew requirements reviews, including review and revalidation of all aircrew positions except those at the rank of colonel or higher, to ensure aviator expertise is required, and to report the results to the Headquarters Air Force Operations Training Division.
Further, the Headquarters Air Force Operations Training Division has the responsibility to ensure a management process is in place to provide efficient and effective oversight of the major commands' annual review and revalidation of the aircrew position requirements process. Additionally, Standards for Internal Control in the Federal Government states that management should identify needed information, obtain the relevant information from reliable sources in a timely manner, and process the information into quality data to make informed decisions and evaluate its performance in achieving key objectives and addressing risks. By reviewing its oversight process, the Air Force may be able to identify a more efficient manner of managing its non-operational staff positions that require aviator expertise. A management oversight process that provides timely and widely accessible position justification information may help ensure that the type of aviator expertise needed in these positions is up to date. In turn, this could result in a more efficient use of the Air Force's short supply of expensive pilot resources, particularly fighter pilots, and could potentially improve its ability to assign and develop effective leaders, such as those within the growing RPA career field.

Conclusions

The Air Force continues to expand the use of RPAs in its varied missions of intelligence gathering, surveillance and reconnaissance, and combat operations. While the overall number of eligible RPA pilots is much smaller than the number of other pilots, over the last 5 years RPA pilots have achieved promotions and nominations to attend developmental education programs at rates generally similar to those of pilots in other career fields.
Additionally, non-operational staff positions requiring RPA pilots have been assigned to entities at high rates since 2013, but the number of positions available to them is smaller than the number that require fighter, bomber, and mobility pilots because the career field is still growing. Air Force officials have noted problems with the current oversight process that may be hindering the service's ability to efficiently and effectively manage these non-operational staff positions as required by Air Force policy. For example, the Air Force recently identified a large number of these positions designated as requiring officers with aviator expertise that lacked adequate justification for that requirement. By reviewing the efficiency and effectiveness of its management oversight process so that it provides information in a timelier and more widely accessible manner, the Air Force could better ensure that it makes informed decisions regarding the need for pilots in certain non-operational staff positions and is in compliance with policy. It also could help ensure that the Air Force more efficiently uses its short supply of expensive pilot resources. Ultimately, this may positively affect its ability to assign and develop effective leaders, such as those within the growing RPA career field.

Recommendation for Executive Action

The Secretary of the Air Force should review the management oversight process that provides information and documents the justifications of the Air Force's non-operational staff positions requiring aviator expertise, including RPA positions, to identify opportunities for increased efficiency and effectiveness and take any necessary actions. (Recommendation 1)

Agency Comments and Our Evaluation

In written comments reproduced in appendix II, DOD concurred with comments to the recommendation and provided separate technical comments, which we incorporated as appropriate.
DOD concurred with the recommendation to review the management oversight process that provides information and documents the justifications of the Air Force's non-operational staff positions requiring aviator expertise, including RPA positions, to identify opportunities for increased efficiency and effectiveness and to take any necessary actions. In its comments, DOD stated that it agrees the current oversight process is time-consuming and could be more efficient. However, it believes this process is effective because the Air Force was able to validate the need for having pilots fill a majority of its non-operational staff positions during a recent congressionally mandated review of these positions. As we reported, this review of all staff positions requiring aviator expertise across the Air Force and other defense entities found that more than 500 of approximately 2,800 positions initially lacked adequate justifications, and 61 positions eventually were eliminated. We believe the Air Force's results from this one-time review are an example of how the current process is not consistently yielding up-to-date validations of positions. Further, DOD also stated that while a move to automating the process again has been considered, current funding shortfalls prevent the Air Force from establishing an automated system to increase the process's efficiency. We continue to believe that the Air Force should review its current process in order to identify any viable means to increase its efficiency and effectiveness. Such a review may provide the Air Force with opportunities to more consistently provide the proper type of aviator expertise needed to fill its staff positions as well as potentially provide more leadership opportunities to those within growing career fields, such as RPA pilots. We provided a draft of this report to DOD for review and comment.
We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, and the Secretary of the Air Force. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Steps Taken by the Department of Defense and the Air Force to Address Prior GAO Report Recommendations

Since 2014, we have issued three reports assessing the Air Force's remotely piloted aircraft (RPA) workforce management. In April 2014, we found that the Air Force had shortages of RPA pilots and faced challenges to recruit, develop, and retain pilots and build their morale. We also found that Air Force RPA pilots experienced potentially challenging working conditions and were promoted at lower rates than other career fields. We made seven recommendations, with which the Air Force generally concurred. It has fully implemented all but one recommendation: to analyze the career field effect of being an RPA pilot to determine whether and how being an RPA pilot is related to promotions. In May 2015, we found that the Air Force faced challenges ensuring that its RPA pilots completed their required training and that the Office of the Deputy Assistant Secretary of Defense for Readiness had not issued a training strategy that addresses if and how the services should coordinate with one another to share information on training pilots who operate unmanned aerial systems. We made one recommendation related to these findings, with which DOD concurred.
However, in September 2018, an official from the Office of the Secretary of Defense for Readiness stated that there are compelling reasons why a training strategy is no longer necessary and that no action is planned to implement the recommendation. In January 2017, we found, among other things, that the Air Force had not fully tailored a strategy to address the UAS pilot shortage and had not evaluated its workforce mix of military, federal civilian, and private-sector contractor personnel to determine the extent to which these personnel sources could be used to fly UAS. We made five recommendations related to these findings, with which the Air Force and DOD generally concurred. As of July 2018, the Air Force had taken some action to address the first three recommendations, and officials from the Office of the Under Secretary of Defense for Personnel and Readiness had fully implemented the other two recommendations. In table 4, we present the recommendations that we made to the Air Force and the Under Secretary of Defense for Personnel and Readiness and summarize the actions taken to address those recommendations as of September 2018.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Lori Atkinson (Assistant Director), Rebecca Beale, Amie Lesser, Felicia Lopez, Grant Mallie, Ricardo Marquez, Richard Powelson, Amber Sinclair, and John Van Schaik made key contributions to this report.
Why GAO Did This Study

An increasing number of Air Force missions use unmanned aerial systems, or RPAs, to provide their specialized capabilities in support of combat operations. The demand for crew members for these systems has grown rapidly. For example, RPA pilot requirements increased by 76 percent since fiscal year 2013, while those for fighter pilots stayed about the same. These requirements include pilots who serve in non-operational staff positions, such as trainers. Senate Report 115-125 included a provision that GAO review career advancement for Air Force RPA pilots compared to other pilots. This report, among other things, describes (1) the rates at which RPA and other pilots were promoted; (2) the rates at which non-operational staff positions requiring RPA pilot expertise were assigned to various organizations; and (3) the extent to which the Air Force has reviewed its oversight process to effectively manage non-operational staff positions requiring aviator expertise. Among other things, GAO analyzed Air Force pilot promotion data from 2006-2017. GAO also analyzed non-operational staff position data from fiscal years 2013-2018 and interviewed officials regarding the management and oversight of these positions.

What GAO Found

The promotion rates for Air Force Remotely Piloted Aircraft (RPA) pilots have been generally similar to those of other pilots since 2013 and have increased over time. See the figure below for promotion rates from major to lieutenant colonel. Air Force officials stated that RPA pilot promotion rates increased because the creation of a dedicated career field resulted in more competitive candidates. Since 2013, over 75 percent of non-operational staff positions requiring RPA pilot expertise were assigned to various organizations within the Air Force, according to GAO's analysis. These positions carry out support and other noncombat-related activities as well as training functions and are essential to the development of officers.
However, the overall number of these positions that require an RPA pilot is about one-tenth of the combined number of those requiring other pilots. For example, in fiscal year 2018, 83 non-operational staff positions required RPA pilots, compared to 330 requiring fighter pilots. Air Force officials stated that the small number of RPA positions is because the career field is new. The Air Force has not reviewed its oversight process to ensure that it is efficiently managing its non-operational staff positions that require aviator expertise. Air Force officials explained that over the last 10 years, the Air Force reduced the number of squadrons but had not reviewed the number of non-operational staff positions. Similarly, the Air Force has had no widely accessible oversight process to monitor whether it had established an accurate number of non-operational staff positions required to support the new RPA career field. In August 2018, the Air Force identified 513 non-operational staff positions (out of 2,783) as needing further review because they lacked adequate justification of the need for aviator expertise. Officials described the process for managing these positions as time and labor intensive, which can cause delays in obtaining reliable information needed to inform decision-making. By reviewing this process, the Air Force may be able to identify opportunities to create efficiencies and more effectively manage its non-operational staff positions requiring aviator expertise.

What GAO Recommends

GAO recommends that the Air Force review its oversight process for managing non-operational staff positions, including those for RPA pilots, to identify opportunities to increase efficiencies. DOD concurred with this recommendation.
Background

Just after World War II, VA developed affiliations with medical schools to improve acute care and physical and mental rehabilitation for veterans. As part of the relationship, VA medical centers have contributed to the education of medical students and residents. Besides medical students and residents, other dual appointees—clinicians and researchers—spend either a full 40-hour week or a fraction of the work week at VA and other time at the affiliated university. On January 23, 1950, Executive Order 10096 established that the government shall obtain the entire right, title, and interest in and to all inventions made by government employees during working hours; with a contribution by the government of facilities, equipment, materials, funds, or information, or time and services of other government employees on official duty; or which bear a direct relation to or are made as a consequence of the employee's duties. Since the early 1980s, the federal government has taken several actions related to technology transfer from federal laboratories. Technology transfer is the process of transferring scientific findings from one organization to another for the purpose of further development and commercialization. In this regard, federal agencies are authorized to issue licenses to outside entities granting rights to make, use, or sell government-owned inventions. One of the first technology transfer laws, the Stevenson-Wydler Technology Innovation Act of 1980, articulated the need for a strong national policy supporting domestic technology transfer. The law requires federal laboratories to establish an office of research and technology applications and devote budget and personnel resources to promoting technology cooperation and the transfer of technologies to private industry and state and local governments.
In addition, the act requires federal agencies that operate or direct federal laboratories to report information on technology transfer performance annually to the Office of Management and Budget as part of their annual budget submission. Copies of those reports should be transmitted to the Secretary of Commerce, who must submit a summary report to Congress and the President. For many years after the Stevenson-Wydler Technology Innovation Act of 1980, VA waived ownership rights to inventions generated by its researchers, leaving the responsibilities for patenting, marketing, and licensing with the inventor and the VA medical center's university partner. As a result of this practice, according to former VA officials, some VA research was not commercialized because VA did not have a technology transfer program or other means to promote commercialization. In 2000, VA created the VA Technology Transfer Program to facilitate the commercialization of VA inventions to benefit veterans and the American public. VA developed technology transfer agreements with universities to help facilitate technology transfer. Under the terms of the agreements, the universities can take the lead on patenting and commercialization, and VA can retain joint ownership of inventions. Among other things, the original agreements gave the universities the right of first refusal to apply for and manage patents, market the technologies, negotiate licenses, and collect royalties to be shared with VA. As of November 2017, the VA Technology Transfer Office, located in Washington, D.C., employed five technology transfer specialists responsible for all technology transfer activities for VA's solely owned inventions. These inventions may come from more than 3,000 VA researchers at over 100 VA medical centers, as well as from VA employees at other VA locations.
In addition, the technology transfer specialists are responsible for coordinating with universities on inventions made by dually appointed researchers. According to VA officials, VA relies on affiliated universities for most of the technology transfer efforts connected with such inventions, since the universities have their own offices with expertise in technology transfer and are usually willing to take the lead. Under a Veterans Health Administration 2002 policy on invention disclosures, which was revised in January 2017, VA employees who create an invention are directed to disclose it to VA using a disclosure form and to complete a certification form indicating whether VA resources were used. VA employees are to disclose inventions to VA even if they were not created with VA resources. Affiliated universities may also require dually appointed researchers to disclose inventions to the university. Under agreements between the universities and VA, universities are required to disclose a dually appointed researcher's invention to VA, as an additional assurance to aid VA in capturing relevant inventions. Similarly, VA is to notify the university when a dually appointed researcher's invention comes to its attention. According to VA policy, researchers' supervisors or research administrators at VA medical centers are to review the disclosure forms and send them to the VA Technology Transfer Office. The office evaluates the information and provides a recommendation to VA's General Counsel on whether VA should assert ownership. If the General Counsel's review finds that VA should assert ownership, the General Counsel notifies the VA researcher and the VA medical center's research and development office of the determination. The Technology Transfer Office then notifies the researcher's university about VA's ownership of the invention.
At this point the department expects the university to include VA as an owner during the patenting process, according to VA officials. Figure 1 shows VA's process for determining ownership of inventions created by dually appointed researchers, according to VA policies. If the university takes the lead on an invention of a dually appointed researcher, original VA agreements require universities to provide annual reports to update VA on commercialization activities, such as progress in licensing inventions or collecting royalties from licensees. While less commonly used, alternative processes for commercialization are shown in appendix I. We and others have identified a number of challenges associated with technology transfer from federal research facilities. For example, we found that technology transfer is often not a priority for laboratory managers; researchers may not understand the potential commercial applicability of their innovations; or the technologies are often not developed enough for use in market-ready products and may require investment of additional time and money to develop. We also have reported that pharmaceutical inventions in particular may take a relatively long time to develop. For example, the entire discovery, development, and review process for a new drug can take up to 15 years.

VA Has Taken Steps to Educate Researchers and Universities about Requirements but Could Enhance Researchers' Training

VA Has Taken Steps to Educate Researchers but Reported that Researchers Have Not Always Disclosed Their Inventions

Although VA has taken steps to educate researchers about disclosure of inventions, VA officials reported that researchers have not consistently disclosed inventions to the department because they did not always fully understand VA's disclosure policy. Officials from VA's technology transfer office told us on multiple occasions that they believed researchers did not consistently disclose inventions.
For example, in December 2016, VA officials said that once the technology transfer office began sending researchers e-mail notices about the need to disclose inventions, the number of disclosures increased, which they said suggested underreporting had been occurring. In March 2017, the officials told us that many of the inventions from more than 50 researchers during a 5-year period at one university had not been disclosed until VA checked with the university and discovered the error. By November 2017, VA technology transfer officials thought disclosure had improved throughout VA, but they were still not able to describe the extent of the problem. The researchers we interviewed at the six medical centers in our sample generally believed that they had properly disclosed inventions. However, according to VA officials, a university official, and two VA researchers, several reasons could have contributed to researchers not consistently disclosing their inventions to VA, including the following:

- Researchers may have disclosed inventions to their university, assuming the university would disclose them to VA on their behalf.
- Researchers may have disclosed their inventions to the university because it was more convenient than disclosing to VA, as the university's technology transfer officials were more accessible to answer questions.
- Researchers were not familiar with VA's invention disclosure process because the process was not routine to them.
- Researchers may have believed they did not use VA resources and did not realize they were still required to disclose to VA.
- VA research administrators may not always have reminded researchers of the need to disclose inventions, as they did not consider this requirement a priority.

VA made efforts since fiscal year 2016 to inform researchers about its disclosure policy. For example, according to VA officials, the department has increased its in-person communication with VA researchers.
In the first 8 months of fiscal year 2017, VA staff made 26 visits to universities and VA medical centers to meet with researchers to encourage the disclosure of inventions. However, VA officials said participation rates among researchers at these voluntary meetings were low in some cases. At one medical center, only the research administrator and one other researcher attended the meeting, according to the administrator. In addition, VA established an online training program in 2017 covering the invention disclosure process, but the training is not mandatory. VA provided us with a report from October 2017 indicating that out of over 3,000 eligible researchers, 130 had taken the training (about 4 percent). One VA research administrator said that mandatory training would be helpful. Under federal internal control standards, management is to internally communicate the necessary quality information to achieve the entity's objectives, such as by communicating that information down and across reporting lines to enable personnel to perform key roles in achieving those objectives. Given that VA has not made the meetings or online training on disclosure policy mandatory, the policy's importance may not be clear to all researchers. Also, because researchers do not make discoveries every year, and the process is not routine, taking such training once may not be sufficient to educate researchers. Without requiring researchers to take online training on the invention disclosure process annually, researchers may not be fully informed about the requirement to disclose inventions, which can result in lost technology transfer opportunities and lost royalties for VA if the inventions are not disclosed.
VA Has Taken Steps to Make Universities Aware of VA Researchers and Disclosure Requirements Based on our interviews with VA and university officials in our sample, the department has taken steps since fiscal year 2016 to make universities aware of VA researchers and their disclosure requirements in an effort to improve university disclosures to VA. We reviewed 16 agreements between VA and affiliated universities, including the five universities with agreements in our sample, and all of the universities agreed to disclose joint inventions to the department. However, VA officials we interviewed said that universities may not always disclose all inventions to VA. Although they said they could not identify the extent of the problem, the officials highlighted one university in our sample that had not disclosed inventions to VA for at least 5 years. This university did not disclose inventions to VA, as agreed, until prompted by VA's technology transfer office late in fiscal year 2016. Responsible university officials said they had assumed the dually appointed researchers were disclosing the inventions to VA. According to VA officials, when the VA technology transfer office received a report from the university in fiscal year 2017 that covered 5 years of disclosures, VA learned it had not received 80 percent of the disclosures from that university for that period. VA officials said they had not contacted the university sooner because their technology transfer office had been understaffed until early in calendar year 2016. VA officials from the technology transfer office had not identified a similar problem of this magnitude with the other universities, including those in our sample.
According to our interviews with VA and university officials, some of VA's university partners may not have been aware of which researchers were also VA employees because the universities' lists of VA researchers were not current and universities generally relied on the researchers to disclose whether they were VA employees. Furthermore, in some cases, the university disclosure forms did not specifically ask whether the researcher also worked at VA. For example, two of the six forms used by universities in our sample did not specifically ask the researcher to indicate whether they were VA employees. Upon recognizing some shortcomings in universities' disclosures to VA, the department provided current lists of VA researchers at affiliated VA medical centers to their respective universities in fiscal year 2017, and VA technology transfer officials said they intend to provide such updated lists to the universities semi-annually. VA officials said that universities may not be using these lists, but they will not know whether the lists are being used until more time has elapsed. VA technology transfer officials said their site visits to VA medical centers—they conducted 26 visits in fiscal year 2017—along with other communications with their counterparts at the universities should help the disclosure process. VA Increased Communication with Universities about Reporting Commercialization Activities but Has Not Ensured that Such Activities Are Consistently Reported VA has increased communication with universities since 2016 to help ensure that universities report information about commercialization activities for joint inventions, but universities' reporting remained inconsistent as of January 2018, according to VA. Under the original agreements, such as the ones in our sample of eight agreements, universities have the exclusive right to license and commercialize joint inventions.
VA's awareness of the commercialization of such inventions depends on universities providing this information through annual reports, as required by the agreements. However, according to VA officials, prior to 2011, only about 20 percent of the 79 universities with which VA has agreements submitted annual reports. According to VA officials, VA made an effort to increase annual reporting, and by 2013 the rate had increased to 80 percent. The officials said, however, that the percentage of universities submitting annual reports dropped again after the technology transfer office lost staff—the office retained only three staff in subsequent years until fiscal year 2017, when there were 11 staff, including five technology transfer specialists. In addition, VA officials we interviewed said that there was some confusion among universities regarding when they needed to submit annual reports. For example, they said that some universities may not have understood whether they needed to provide annual reports during years when there was no new patenting or licensing activity. The officials said that this was at least part of the reason some universities did not submit annual reports. VA officials told us that they expect universities to provide annual reports even when there is no new patenting or licensing activity, and in fiscal year 2016 technology transfer officials e-mailed universities to clarify this expectation. The officials also said that in October 2016 they sent a letter to each of the 79 universities with which the department has agreements to remind universities to submit the required annual reports. Further, as stated earlier, VA staff made 26 visits to VA medical centers and universities in the first 8 months of fiscal year 2017 to discuss reporting and disclosure requirements. However, VA reported that only 24 percent of the 79 affiliated universities provided annual reports in fiscal year 2017, even after VA's outreach.
Because they did not always receive annual reports, VA officials said they were often not aware of a joint license until the university sent VA the first royalty check for a joint invention. VA officials said they plan to conduct audits to check the accuracy of university information. In fiscal year 2015, VA began creating new agreements with universities to give VA enhanced responsibility in licensing and commercialization of joint inventions. By the end of fiscal year 2017, VA had new agreements in place with 11 of the 79 universities. Based on our review of 8 of the new agreements, VA will now, for the first time, have the option to take the lead in licensing joint inventions. For inventions for which VA does not take the lead role, under the new agreements, it will have the right to review and provide input on all joint licenses. This new provision will improve VA's awareness of any joint licenses created in the future. However, because original agreements did not include this provision, VA will still need to rely on accurate and updated annual reports from universities for information on licenses negotiated under those agreements. In addition, the new agreements do not improve or clarify language from the original agreements about what details need to be included in the annual reports. According to our analysis, these eight new agreements, similar to the original eight agreements we reviewed, do not contain details on the specific information and format in which to present the annual report. For example, both the original and four of the new agreements we reviewed require universities to provide an annual report, but four other new agreements state that the universities will provide annual reports upon request.
The original agreements as well as all eight of the new agreements indicate that reports should include the status of all patent prosecution, commercial development, and licensing activity on joint inventions but do not explain whether an annual report is needed if there has been no commercialization activity. As noted above, VA officials said universities were confused about whether they were required to report to VA if they had no new activity in a given period; however, VA officials told us they still need reports in these situations. Furthermore, based on our analysis of 12 annual reports from eight universities, the format and content of the reports have been inconsistent. Four universities submitted reports in a spreadsheet format; two universities submitted reports in portable document format (PDF); one university submitted a report in a Word format; and one submitted five different documents, including both PDF and spreadsheet formats. In addition to differences in format, the annual reports differed considerably in the content they provided—the more detailed annual reports included patent application numbers, patent expenses, the status of patent applications, and information about whether the patent had been licensed. In contrast, the less detailed annual reports did not provide any of this information on patents for the joint inventions. Moreover, one university only included active license agreements in its annual report, while other universities also included license agreements that were terminated. VA officials we interviewed agreed that the reports are not very detailed or standardized but said they would like to eventually standardize the annual report format and content so they can use the reports to track and audit joint inventions. The differences in annual reports exist because VA has not provided the universities with a standardized method for reporting, including the format that should be used for the annual reports and the content to include in them.
Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks. Such control activities include providing a standardized method that guides universities in fulfilling VA's reporting requirements to ensure the objectives of the program are being achieved. Without providing a standardized method that clearly guides universities in fulfilling VA's reporting requirements for these annual reports, including their format and content, the department will not be able to ensure detailed and standardized annual reports that include details about licenses and royalties. VA officials said that they were working on a template for universities to use in reporting on commercialization activities for joint inventions. However, it is not clear whether the template would inform universities of VA's requirements to submit an annual report even if they had no new commercialization activity in a given period. Conclusions VA manages a research program unique within the federal government in that most of its researchers are dually appointed to universities, and their inventions are jointly owned by VA and the universities, which typically take the lead on commercialization activities. While VA has taken steps to educate researchers about requirements for researchers and universities to disclose inventions to VA, VA officials reported that researchers have not consistently done so, because they did not always fully understand the policy. Given that VA has not made its online training on disclosure policy mandatory, the policy's importance may not be clear to researchers. Also, because researchers do not make discoveries every year, and the process of disclosure is not routine, taking such training once may not be sufficient.
Without requiring researchers to take online training on the invention disclosure process annually, researchers may not be fully informed about the requirement to disclose inventions, which can result in lost technology transfer opportunities as well as lost royalties for VA if the inventions are not disclosed. VA has also taken steps to improve communication with universities to increase reporting of commercialization activities, but said that such reporting by universities is inconsistent, and VA may not have adequate information to account for all of its licenses and royalties. Without providing a standardized method that clearly guides universities in fulfilling VA's reporting requirements for these annual reports, including their format and content, the department will not be able to ensure detailed and standardized annual reports. Recommendations for Executive Action We are making the following two recommendations to VA: The Under Secretary of Health should make VA's online training on invention disclosure mandatory for researchers and require that it be completed annually. (Recommendation 1) The Under Secretary of Health should provide a standardized method that guides universities in fulfilling VA's reporting requirements for these annual reports, including their format and content. (Recommendation 2) Agency Comments We provided a draft of this report to the Department of Veterans Affairs for review and comment. In written comments reproduced in appendix II, VA agreed with our recommendations. Specifically, for our first recommendation, VA said it will develop a plan to ensure its researchers complete online technology transfer training on invention disclosure annually. Furthermore, the plan will contain contingencies for those who do not meet the requirements. The department expects to issue a training requirement, train staff, and demonstrate that training has been completed by September 2019.
In addition, for our second recommendation, VA said it will develop a standardized method that guides universities in fulfilling VA's reporting requirements for the university technology transfer annual reports. VA has a target completion date of December 2018. VA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix III. Appendix I: Department of Veterans Affairs Commercialization Process, Revenue, and Total Joint Patents with Universities The process for commercializing a Department of Veterans Affairs (VA) invention can take several avenues. Generally, a university takes the lead on inventions of dual appointees who work for VA and a university, and VA researchers who are not dually appointed rely on VA to patent and license their inventions. Also, VA can take the lead on joint inventions, for example, if the university is not interested in ownership (see fig. 2). VA received about $316,000 in royalties from 45 licenses for its inventions in fiscal year 2016 (see table 1). VA has U.S. and foreign patents. From calendar year 2000 through November 2017, the U.S. Patent and Trademark Office granted VA 82 patents for which VA is the sole assignee, according to VA officials.
Also, table 2 shows by university the breakdown of the 206 patents for which VA shares ownership with an affiliate. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact John Neumann, (202) 512-3841 or [email protected]. Staff Acknowledgments In addition to the contact named above, Rob Marek (Assistant Director), Daniel Semick (Analyst in Charge), Ivelisse Aviles, Navaiyoti Barkakati, Kevin Bray, Ellen Fried, Matthew Hunter, Cynthia Saunders, Dan C. Royer, Ardith Spence, and Kiki Theodoropoulos made key contributions to this report.
Why GAO Did This Study VA manages a $1.9 billion research program that has produced numerous healthcare inventions, such as the pacemaker. In 2000, VA created a program to help transfer VA inventions to the private sector so that they can be commercialized and used by veterans and the public, while VA retains ownership and collects royalties. Many of VA's 3,000 researchers also hold positions at universities, which take the lead in commercializing inventions developed by these researchers. Researchers and universities are required to disclose such inventions to VA, and universities are to report on commercialization activities according to their agreements with VA. GAO was asked to examine VA's ability to ensure its ownership of inventions made with VA resources. This report examines, among other things, the extent to which VA has taken steps to ensure that (1) researchers disclose inventions and (2) universities report on commercialization activities for joint inventions. GAO reviewed laws; policies; a nongeneralizable sample of university agreements based on backlogs of disclosures, among other factors; and interviews with officials and researchers from VA medical centers and their affiliated universities. What GAO Found The Department of Veterans Affairs (VA) has taken steps to educate agency researchers about its requirements to disclose inventions to VA, but officials reported that researchers have not consistently done so. VA policy requires researchers to disclose inventions to both VA and the university they work for even when they do not use VA resources. 
GAO found, through discussions with VA officials and researchers, that several factors contribute to researchers not consistently disclosing their inventions, including that VA researchers may have: disclosed inventions to their university, assuming the university would then disclose them to VA; not been familiar with VA's invention disclosure process, because they may not have frequently developed inventions; or thought that invention disclosure was unnecessary when they did not use VA resources to develop their invention. In 2017, VA staff visited universities and VA medical centers 26 times to meet with researchers about invention disclosure. Also, VA created an online training course to educate researchers on the need to disclose inventions, but the training is not mandatory, and about 4 percent of researchers took it. Without mandatory training to communicate invention disclosure requirements—consistent with federal internal control standards for internally communicating quality information—VA researchers may not be fully informed about those requirements, which can result in lost technology transfer opportunities and royalties for VA. VA has improved communication with universities but has not ensured that they are consistently reporting information on commercialization activities for joint inventions. VA reported that about three-quarters of VA's 79 university partners did not submit the annual reports required by VA in 2017. GAO reviewed a nongeneralizable sample of agreements VA has with universities and found that reporting requirements about timing and content of reports were unclear. Without providing a standardized method that clearly guides universities in fulfilling VA's reporting requirements, consistent with federal standards for internal control, VA cannot ensure that it has adequate information to account for its licenses and royalties. 
What GAO Recommends GAO recommends that VA (1) make training about invention disclosure mandatory and (2) provide universities with a standardized method for annual reporting. VA concurred with GAO’s recommendations.
Background The design and development of information systems can be a complex undertaking involving a multitude of equipment, software products, and service providers. Each of the components of an information system may rely on one or more supply chains—that is, the set of organizations, people, activities, information, and resources that create and move a product or service from suppliers to an organization's customers. Obtaining a full understanding of the sources of a given information system can also be extremely complex. According to the Software Engineering Institute, the identity of each product or service provider may not be visible to others in the supply chain. Typically, an acquirer, such as a federal agency, may only know about the participants to which it is directly connected in the supply chain. Further, the complexity of corporate structures, in which a parent company (or its subsidiaries) may own or control companies that conduct business under different names in multiple countries, presents additional challenges to fully understanding the sources of an information system. As a result, the acquirer may have little visibility into the supply chains of its suppliers. Federal procurement law and policies promote the acquisition of commercial products when they meet the government's needs. Commercial providers of IT use a global supply chain to design, develop, manufacture, and distribute hardware and software products throughout the world. Consequently, the federal government relies heavily on IT equipment manufactured in foreign nations. Federal information and communications systems can include a multitude of IT equipment, products, and services, each of which may rely on one or more supply chains. These supply chains can be long, complex, and globally distributed and can consist of multiple tiers of outsourcing.
As a result, agencies may have little visibility into, understanding of, or control over how the technology that they acquire is developed, integrated, and deployed, as well as the processes, procedures, and practices used to ensure the integrity, security, resilience, and quality of the products and services. Table 1 highlights possible manufacturing locations of typical components of a computer or information systems network. Moreover, many of the manufacturing inputs required for these components—whether physical materials or knowledge—are acquired from various sources around the globe. Figure 1 depicts the potential countries of origin of common suppliers of various components in a commercially available laptop computer. Federal Laws and Guidelines Require the Establishment of Information Security Programs and Provide for Managing Supply Chain Risk The Federal Information Security Modernization Act (FISMA) of 2014 requires federal agencies to develop, document, and implement an agency-wide information security program to provide information security for the information systems and information that support the operations and assets of the agency. The act also requires that agencies ensure that information security is addressed throughout the life cycle of each agency information system. FISMA assigns NIST the responsibility for providing standards and guidelines on information security to agencies. In addition, the act authorizes DHS to develop and issue binding operational directives to agencies, including directives that specify requirements for the mitigation of exigent risks to information systems. NIST has issued several special publications (SP) that provide guidelines to federal agencies on controls and activities relevant to managing supply chain risk. 
For example, NIST SP 800-39 provides an approach to organization-wide management of information security risk, which states that organizations should monitor risk on an ongoing basis as part of a comprehensive risk management program. NIST SP 800-53 (Revision 4) provides a catalogue of controls from which agencies are to select controls for their information systems. It also specifies several control activities that organizations could use to provide additional supply chain protections, such as conducting due diligence reviews of suppliers, developing acquisition policies, and implementing procedures that help protect against supply chain threats throughout the system development life cycle. NIST SP 800-161 provides guidance to federal agencies on identifying, assessing, selecting, and implementing risk management processes and mitigating controls throughout their organizations to help manage information and communications technology supply chain risks. In addition, as of June 2018, DHS had issued one binding operational directive related to an IT supply chain-related threat. Specifically, in September 2017, DHS issued a directive to all federal executive branch departments and agencies to remove and discontinue present and future use of Kaspersky-branded products on all federal information systems. In consultation with interagency partners, DHS determined that the risks presented by these products justified their removal. Beyond these guidelines and requirements, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 also included provisions related to supply chain security. Specifically, Section 806 authorizes the Secretaries of Defense, the Army, the Navy, and the Air Force to exclude a contractor from specific types of procurements on the basis of a determination of significant supply chain risk to a covered system. Section 806 also establishes requirements for limiting disclosure of the basis of such procurement action.
IT Supply Chains Introduce Numerous Information Security Risks to Federal Agencies In several reports issued since 2012, we have pointed out that the reliance on complex, global IT supply chains introduces multiple risks to federal information and telecommunications systems. This includes the risk of these systems being manipulated or damaged by leading foreign cyber-threat nations such as Russia, China, Iran, and North Korea. Threats and vulnerabilities created by these cyber-threat nations, vendors or suppliers closely linked to cyber-threat nations, and other malicious actors can be sophisticated and difficult to detect and, thus, pose a significant risk to organizations and federal agencies. As we reported in March 2012, supply chain threats are present at various phases of a system’s development life cycle. Key threats that could create an unacceptable risk to federal agencies include the following. Installation of hardware or software containing malicious logic, which is hardware, firmware, or software that is intentionally included or inserted in a system for a harmful purpose. Malicious logic can cause significant damage by allowing attackers to take control of entire systems and, thereby, read, modify, or delete sensitive information; disrupt operations; launch attacks against other organizations’ systems; or destroy systems. Installation of counterfeit hardware or software, which is hardware or software containing non-genuine component parts or code. 
According to the Defense Department's Information Assurance Technology Analysis Center, counterfeit IT threatens the integrity, trustworthiness, and reliability of information systems for several reasons, including that (1) counterfeits are usually less reliable and, therefore, may fail more often and more quickly than genuine parts; and (2) counterfeiting presents an opportunity for the counterfeiter to insert malicious logic or backdoors into replicas or copies, which would be far more difficult to do in more secure manufacturing facilities. Failure or disruption in the production or distribution of critical products. Both man-made (e.g., disruptions caused by labor, trade, or political disputes) and natural (e.g., earthquakes, fires, floods, or hurricanes) causes could decrease the availability of material needed to develop systems or disrupt the supply of IT products critical to the operations of federal agencies. Reliance on a malicious or unqualified service provider for the performance of technical services. By virtue of their position, contractors and other service providers may have access to federal data and systems. Service providers could attempt to use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Installation of hardware or software that contains unintentional vulnerabilities, such as defects in code that can be exploited. Cyber attackers may focus their efforts on, among other things, finding and exploiting existing defects in software code. Such defects are usually the result of unintentional coding errors or misconfigurations, and can facilitate attempts by attackers to gain unauthorized access to an agency's information systems and data, or disrupt service.
We noted in the March 2012 report that threat actors can introduce these threats into federal information systems by exploiting vulnerabilities that could exist at multiple points in the global supply chain. In addition, supply chain vulnerabilities can include weaknesses in agency acquisition or security procedures, controls, or implementation related to an information system. Examples of the types of vulnerabilities that could be exploited include acquisitions of IT products or parts from sources other than the original manufacturer or authorized reseller, such as independent distributors, brokers, or on the gray market; lack of adequate testing for software updates and patches; and incomplete information on IT suppliers. If a threat actor exploits an existing vulnerability, it could lead to the loss of the confidentiality, integrity, or availability of the system and associated information. This, in turn, can adversely affect an agency’s ability to carry out its mission. Four National Security-Related Agencies Have Acted to Better Address IT Supply Chain Risks for Their Information Systems In March 2012, we reported that the four national security-related agencies (i.e., Defense, Justice, Energy, and DHS) had acknowledged the risks presented by supply chain vulnerabilities. However, the agencies varied in the extent to which they had addressed these risks by (1) defining supply chain protection measures for department information systems, (2) developing implementing procedures for these measures, and (3) establishing capabilities for monitoring compliance with, and the effectiveness of, such measures. Of the four agencies, the Department of Defense had made the most progress addressing the risks. 
Specifically, the department’s supply chain risk management efforts began in 2003 and included: a policy requiring supply chain risk to be addressed early and across a system’s entire life cycle and calling for an incremental implementation of supply chain risk management through a series of pilot projects; a requirement that every acquisition program submit and update a “program protection plan” that was to, among other things, help manage risks from supply chain exploits or design vulnerabilities; procedures for implementing supply chain protection measures, such as an implementation guide describing 32 specific measures for enhancing supply chain protection and procedures for program protection plans identifying ways in which programs should manage supply chain risk; and a monitoring mechanism to determine the status and effectiveness of supply chain protection pilot projects, as well as monitoring compliance with and effectiveness of program protection policies and procedures for several acquisition programs. Conversely, our report noted that the other three agencies had made limited progress in addressing supply chain risks for their information systems. For example: The Department of Justice had defined specific security measures for protecting against supply chain threats through the use of provisions in vendor contracts and agreements. Officials identified (1) a citizenship and residency requirement and (2) a national security risk questionnaire as two provisions that addressed supply chain risk. However, Justice had not developed procedures for ensuring the effective implementation of these protection measures or a mechanism for verifying compliance with, and the effectiveness of these measures. We stressed that, without such procedures, Justice would have limited assurance that its departmental information systems were being adequately protected against supply chain threats. 
In May 2011, the Department of Energy revised its information security program, which required Energy components to implement provisions based on NIST and Committee on National Security Systems guidance. However, the department was unable to provide details on implementation progress, milestones for completion, or how supply chain protection measures would be defined. Because it had not defined these measures or associated implementing procedures, we reported that the department was not in a position to monitor compliance or effectiveness.

Although its information security guidance mentioned the NIST control related to supply chain protection, DHS had not defined the supply chain protection control activities that system owners should employ. The department's information security policy manager stated that DHS was in the process of developing policy that would address supply chain protection, but did not provide details on when it would be completed. In the absence of such a policy, DHS was not in a position to develop implementation procedures or to monitor compliance or effectiveness.

To assist Justice, Energy, and DHS in better addressing IT supply chain-related security risks for their departmental information systems, we made eight recommendations to these three agencies in our 2012 report. Specifically, we recommended that Energy and DHS:

- develop and document departmental policy that defines which security measures should be employed to protect against supply chain threats.

We also recommended that Justice, Energy, and DHS:

- develop, document, and disseminate procedures to implement the supply chain protection security measures defined in departmental policy, and
- develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protection measures.

The three agencies generally agreed with our recommendations and subsequently implemented seven of the eight recommendations.
Specifically, we verified that Justice and Energy had implemented each of the recommendations we made to them by 2016. We also confirmed that DHS had implemented two of the three recommendations we made to that agency by 2015. However, as of fiscal year 2016, DHS had not fully implemented our recommendation to develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protections. Although the department had developed a policy and approach for monitoring supply chain risk management activities, it could not provide evidence that its components had actually implemented the policy. Thus, we were not able to close the recommendation as implemented. Nevertheless, the implementation of the seven recommendations and partial implementation of the eighth recommendation better positioned the three agencies to monitor and mitigate their IT supply chain risks.

In addition, we reported in March 2012 that the four national security-related agencies had participated in interagency efforts to address supply chain security, including participation in the Comprehensive National Cybersecurity Initiative, development of technical and policy tools, and collaboration with the intelligence community. In support of the cybersecurity initiative, Defense and DHS jointly led an interagency initiative on supply chain risk management to address issues of globalization affecting the federal government's IT. Also, DHS had developed a comprehensive portfolio of technical and policy-based product offerings for federal civilian departments and agencies, including technical assessment capabilities, acquisition support, and incident response capabilities. The efforts of the four agencies could benefit all federal agencies in addressing their IT supply chain risks.
In summary, the global IT supply chain introduces a myriad of security risks to federal information systems that, if realized, could jeopardize the confidentiality, integrity, and availability of those systems. Thus, the potential exists for serious adverse impact on an agency's operations, assets, and employees. These factors highlight the importance and urgency of federal agencies appropriately assessing, managing, and monitoring IT supply chain risk as part of their agencywide information security programs. Chairmen King and Perry, Ranking Members Rice and Correa, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to answer your questions.

Contact and Acknowledgments

If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other key contributors to this statement include Jeffrey Knott (assistant director), Christopher Businsky, Nancy Glover, and Rosanna Guerrero.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

IT systems are essential to the operations of the federal government. The supply chain—the set of organizations, people, activities, and resources that create and move a product from suppliers to end users—for IT systems is complex and global in scope. The exploitation of vulnerabilities in the IT supply chain is a continuing threat. Federal security guidelines provide for managing the risks to the supply chain. This testimony statement highlights information security risks associated with the supply chains used by federal agencies to procure IT systems. The statement also summarizes GAO's 2012 report that assessed the extent to which four national security-related agencies had addressed such risks. To develop this statement, GAO relied on its previous reports, as well as information provided by the national security-related agencies on their actions in response to GAO's previous recommendations. GAO also reviewed federal information security guidelines and directives.

What GAO Found

Reliance on a global supply chain introduces multiple risks to federal information systems. Supply chain threats are present during the various phases of an information system's development life cycle and could create an unacceptable risk to federal agencies. Information technology (IT) supply chain-related threats are varied and can include:

- installation of intentionally harmful hardware or software (i.e., containing "malicious logic");
- installation of counterfeit hardware or software;
- failure or disruption in the production or distribution of critical products;
- reliance on malicious or unqualified service providers for the performance of technical services; and
- installation of hardware or software containing unintentional vulnerabilities, such as defective code.

These threats can have a range of impacts, including allowing adversaries to take control of systems or decreasing the availability of materials needed to develop systems.
These threats can be introduced by exploiting vulnerabilities that could exist at multiple points in the supply chain. Examples of such vulnerabilities include the acquisition of products or parts from unauthorized distributors; inadequate testing of software updates and patches; and incomplete information on IT suppliers. Malicious actors could exploit these vulnerabilities, leading to the loss of the confidentiality, integrity, or availability of federal systems and the information they contain.

GAO reported in 2012 that the four national security-related agencies in its review—the Departments of Defense, Justice, Energy, and Homeland Security (DHS)—varied in the extent to which they had addressed supply chain risks. Of the four agencies, Defense had made the most progress in addressing the risks. It had defined and implemented supply chain protection controls and initiated efforts to monitor the effectiveness of the controls. Conversely, Energy and DHS had not developed or documented policies and procedures that defined security measures for protecting against IT supply chain threats and had not developed capabilities for monitoring the implementation and effectiveness of the measures. Although Justice had defined supply chain protection measures, it also had not developed or documented procedures for implementing or monitoring the measures.

Energy and Justice fully implemented the recommendations that GAO made in its 2012 report and resolved the deficiencies that GAO had identified with their supply chain risk management efforts by 2016. DHS also fully implemented two recommendations to document policies and procedures for defining and implementing security measures to protect against supply chain threats by 2015, but could not demonstrate that it had fully implemented the recommendation to develop and implement a monitoring capability to assess the effectiveness of the security measures.
What GAO Recommends

In its 2012 report, GAO recommended that Justice, Energy, and DHS take eight actions, as needed, to develop and document policies, procedures, and monitoring capabilities that address IT supply chain risk. The departments generally concurred with the recommendations and subsequently implemented seven recommendations and partially implemented the eighth recommendation.
Background

This section discusses the purpose, types, and locations of natural gas storage sites; leaks from such sites; safety enforcement prior to 2017; and the PIPES Act.

Purpose, Types, and Locations of Natural Gas Storage Sites

Natural gas storage sites—geologic formations where natural gas is stored deep underground and retrieved for later use—are key parts of our energy system. Natural gas provides about 30 percent of U.S. energy needs, is used to generate a third of the nation's electricity, is widely used for heating homes and businesses, and is used in a variety of industrial processes, according to Energy Information Administration (EIA) information. Natural gas storage sites provide a way to meet peak energy needs—such as during a cold spell in the winter or during periods of high electricity demand in the summer—more quickly than would be possible if relying solely on pipelines that transport natural gas from distant production fields. Natural gas storage sites are privately owned and operated by a variety of companies in the energy industry, including local utilities, independent companies that store gas for sale at peak times to other companies, and interstate pipeline companies.

There are three major types of underground geologic formations where natural gas storage sites are found: (1) underground salt caverns, (2) aquifers, and (3) depleted oil and gas reservoirs. The wells that inject or withdraw natural gas from the underground formations can extend thousands of feet underground. The 415 natural gas storage sites in the United States contain about 17,000 wells, ranging from a few wells per site to over a hundred wells at some larger sites. Figure 1 illustrates the types of geologic formations where natural gas storage sites are constructed and operated. Natural gas storage sites are found in 31 states across the country, according to EIA data.
Over 300 cities, towns, and other populated areas are located near a natural gas storage site, according to a DOE analysis. Operators often locate natural gas storage sites near major population centers or large gas pipelines to improve their ability to deliver natural gas when needed. Figure 2 shows the approximate location of natural gas storage sites located within counties populated by 100,000 or more people.

Leaks from Natural Gas Storage Sites

Leaks from natural gas storage sites can be caused by a variety of factors—such as underground fissures or inadequately designed or damaged wells—and have the potential to affect human health, cause economic disruption, and harm the environment. For example, natural gas poses the risk of explosion and asphyxiation within enclosed spaces. In addition, other components of natural gas can cause short-term neurological, gastrointestinal, and respiratory symptoms, according to the Los Angeles County Department of Public Health. Moreover, if a large gas storage facility unexpectedly goes offline due to a major leak, it can disrupt the natural gas supply system, which in turn may affect the flow of gas to heat homes and businesses or may cause electrical blackouts due to the loss of fuel for gas-fired electrical generators.

According to a DOE report, the natural gas stored in geologic formations is under high pressure and may find its way to the surface if underground fissures or unplugged oil and gas wells allow the geologic formation to be breached. Leaks can also occur if the wells used to inject and withdraw natural gas from geologic formations lose integrity due to cracking of the cement used to seal the well or other factors. Older wells used for natural gas storage were often drilled for other reasons, such as oil and gas production, and are more likely to have age-related degradation, according to DOE.
About half of the approximately 17,000 wells that inject and withdraw natural gas from storage sites are more than 50 years old, and many wells are more than 100 years old, according to DOE. In addition, DOE reported that other factors may contribute to leaks, such as earthquake activity, nearby drilling activity, or other mechanical stresses and undetected corrosion that may not be known to the natural gas storage site operators. Further, DOE has reported that operators can sustain safety by regularly maintaining site equipment, monitoring and repairing leaks, keeping records about the site, and planning for possible emergencies, among other things.

Leaks from natural gas storage sites can result in significant and harmful effects on public health and safety, the environment, and the energy system. DOE, PHMSA, and others have identified three major leaks from natural gas storage sites since 2000 that illustrate these potential negative effects.

The Aliso Canyon leak, which was detected in October 2015 and continued for nearly 4 months, focused national attention on natural gas storage safety. As of August 2017, the cause of the leak had not been conclusively determined. However, the leak occurred in a well that, at the time, was about 60 years old, according to DOE. The operator of the Aliso Canyon site unsuccessfully attempted to stop the leak several times over the 4-month event and eventually was able to do so in February 2016 by permanently sealing the well. According to the private operator, it temporarily relocated about 8,000 neighboring families until the leak was abated. Also, the leak disrupted the Aliso Canyon site's ability to supply natural gas to electricity generating plants. Because the Aliso Canyon site supplies gas for nearly 10 gigawatts of electricity in the Los Angeles basin, the leak led to concerns that there may not be enough gas to serve the electricity needs of the surrounding region during peak times.
In July 2017, California state regulators announced that the operator had conducted a comprehensive safety review and that the regulators would allow Aliso Canyon to reopen at a greatly reduced capacity in order to prevent energy shortages.

In August 2004, the Moss Bluff natural gas storage site in Liberty County, Texas, experienced a major leak due to a damaged well. The leaking gas caught fire and burned for over 6 days, according to DOE and PHMSA documents. As a result, the gas was released into the atmosphere as carbon dioxide, which, according to an EPA analysis, is a less potent greenhouse gas than the unburned natural gas released by the Aliso Canyon leak.

In January 2001, the Yaggy natural gas storage site leaked through underground fissures from the site's salt caverns into the nearby city of Hutchinson, Kansas, eventually causing an explosion in the city's downtown business district, DOE reported. Two people were killed, and several businesses were damaged or destroyed by the explosion.

Safety Enforcement for Natural Gas Storage Sites Prior to 2017

Before 2017, many natural gas storage sites were subject to varied, state-by-state safety enforcement. States were responsible for regulating and enforcing safety at sites that were located solely within their boundaries and only linked to pipelines within the state. Agencies representing 26 state governments licensed 211 such sites, which amounted to about half of the 415 active sites in the United States. Prior to 2017, these state governments applied various safety standards that addressed underground conditions, such as the integrity of the geologic formations that store natural gas, or the construction and maintenance of wells that inject and withdraw gas. For example, according to a DOE report, some states' standards specified how site operators should safely construct the wells.
Other states’ standards specified how wells were to be maintained during their useful life, or how they were to be safely plugged and abandoned after their useful life ended. Prior to 2017, the remaining 204 interstate natural gas storage sites were subject solely to federal oversight. However, the federal government had not issued safety standards for them. The Federal Energy Regulatory Commission (FERC) licenses storage sites that serve the interstate natural gas market—a market regulated by FERC. However, according to FERC, its licensing process focuses on whether a proposed site serves an economic need, and it does not review the safety conditions of a site when reviewing whether to grant a license. In this role, FERC has licensed 204 sites in 24 states. As part of its mission to ensure the safety of the interstate natural gas pipeline system—of which natural gas storage sites are a part—PHMSA had the regulatory authority to issue and enforce safety standards for interstate natural gas storage sites. However, PHMSA’s interstate pipeline safety regulations did not extend to underground natural gas storage facilities, even when connected to interstate pipelines. Moreover, because interstate sites were under federal jurisdiction, state safety standards could not be applied to such sites. Other federal agencies had responsibilities that addressed limited aspects of safety at natural gas storage sites. DOE provided technical assistance to California during the Aliso Canyon incident, and has researched the effects of natural gas storage leaks on the reliability of the electricity grid. The Bureau of Land Management (BLM), within the Department of the Interior, manages public lands that overlap, either partially or fully, with 33 natural gas storage sites. EPA provides funding and oversight to help states and local pollution control agencies meet their responsibility to monitor air quality within their jurisdictions, according to EPA officials. 
EPA can also provide its expertise and support to states and local communities in the event of natural gas storage leaks, as it did during the leak at Aliso Canyon. However, EPA does not regulate underground conditions at gas storage sites.

The PIPES Act

In June 2016, Congress passed and the President signed the PIPES Act, which, among other things, directed DOT to establish minimum safety standards for all natural gas storage sites by June 2018 after considering recommendations from a federal task force and industry standards. PHMSA sets and enforces these standards. The PIPES Act also directed DOE to establish and lead the task force, which was charged with analyzing the Aliso Canyon incident and making recommendations to reduce the occurrence of similar incidents in the future. The task force published its report in October 2016. The report included findings in three areas—well integrity, environmental and health protection, and energy reliability. The report also made 44 recommendations to enhance natural gas storage safety, including 3 key recommendations:

- Operators of natural gas storage sites should make advance preparations with appropriate federal, state, and local governments to mitigate potential future leaks.
- Electrical grid operators should prepare for the risks that potential gas storage disruptions create for the electric system.
- Operators of natural gas storage sites should begin a rigorous program to evaluate the status of the wells, establish risk management planning, and, in most cases, phase out old wells with single-point-of-failure designs.

The PIPES Act directed DOT to consider industry consensus standards to the extent practicable in establishing its minimum safety standards. Consensus standards for the oil and gas industry—including those for natural gas storage—are issued by various entities, including the American Petroleum Institute (API).
API consensus standards describe how to safely perform technical procedures, such as drilling wells for oil and gas production, refining produced natural gas into usable gas for heating and electricity generation, and conducting "workover" operations to refurbish existing wells. API develops its consensus standards with input from industry, manufacturers, engineering firms, the public, academia, and government, and API's recommended practices are frequently adopted by a majority of the industry, according to API and PHMSA. Following several years of study and discussion by industry experts and government officials, including participation by PHMSA, API issued two documents outlining recommended practices for the development and operation of natural gas storage sites. These recommended practices describe the procedures for designing, locating, constructing, and operating natural gas storage sites, and include such activities as inspecting and testing the wells used to inject and withdraw gas from natural gas storage sites and monitoring the integrity of the underground formations where natural gas is stored. The API documents also recommend that operators prepare for emergencies and train the personnel who operate the sites.

Under the PIPES Act, state governments also have a continuing role in enforcing natural gas storage safety for the sites in their states. The act allows states to certify with PHMSA that they have adopted state standards that meet or exceed the federal standards and can enforce these standards. Once a state certifies that it has met these conditions, the state is responsible for enforcing safety standards on state-regulated intrastate natural gas underground storage sites through inspections conducted by state employees, according to PHMSA officials. In addition, PHMSA officials told us that they would periodically assess whether states are meeting these conditions.
PHMSA officials told us that PHMSA will have direct responsibility for inspecting federally licensed interstate facilities for the next few years because federal safety standards are still being established, but officials noted that state inspectors could eventually seek permission from PHMSA to assume the role of inspecting interstate natural gas storage sites on PHMSA's behalf. PHMSA officials also noted that PHMSA does not force states to participate in its pipeline safety program; in cases where a state chooses not to certify its safety enforcement program, PHMSA has stated that it will assign its own inspectors and staff to enforce federal natural gas storage safety standards in that state. The PIPES Act also requires PHMSA to set and charge user fees to operators that it can use for activities related to underground natural gas storage facility safety, subject to the expenditure of these fees being provided in advance in an appropriations act.

PHMSA Has Issued Interim Safety Standards and Plans to Finalize Them by January 2018

Citing an urgent need to improve safety at natural gas storage sites, PHMSA issued an interim final rule that includes minimum safety standards based largely on API recommended practices in December 2016. The rule took effect in January 2017 and provided that existing facilities (and those constructed by July 18, 2017) must meet the standards by January 18, 2018. PHMSA is now considering public comments on its interim standards, and it plans to finalize them by issuing a final rule by January 2018. PHMSA also has stated that it will delay enforcement of certain standards in the interim final rule until 1 year after issuance of the final rule.

PHMSA Has Issued Minimum Standards in an Interim Final Rule

To meet the requirement under the PIPES Act, PHMSA issued minimum safety standards for natural gas storage through an interim final rule in December 2016, which took effect in January 2017.
PHMSA issued the interim final rule—which allowed the safety standards to take effect more quickly than under the conventional regulatory process—and stated that any delay in adopting the standards would jeopardize the public interest through risks to public safety and the environment. As a result, all 415 natural gas storage sites are for the first time subject to federal regulation, including minimum safety standards as set forth in the interim final rule, and subject to revision in a final rule.

To develop the minimum safety standards, PHMSA considered industry consensus standards, as required by the PIPES Act. PHMSA had already advised operators to follow industry-recommended practices published by API, which develops consensus standards for the oil and gas industry. Specifically, in February 2016, before the passage of the PIPES Act, PHMSA issued a bulletin encouraging operators to follow the API recommended practices to update their safety programs. The API recommended practices contain many provisions that are mandatory, and other provisions that are nonmandatory. The interim final rule provides that the nonmandatory provisions of the recommended practices that are incorporated by reference in the rule are adopted as mandatory.

PHMSA's interim final rule requires operators of existing natural gas sites, and those constructed by July 18, 2017, to meet the requirements of certain sections of the API recommended practices identified in the rule by January 18, 2018. The API recommended practices address, among other things, general operations, monitoring the sites for potential leaks, and emergency response and preparedness. For new storage sites starting construction after July 18, 2017, the rule requires operators to meet all sections of the applicable API recommended practices. According to PHMSA officials, PHMSA considered the recommendations of the task force in developing its minimum safety standards, as required by the PIPES Act, and continues to do so.
PHMSA’s minimum safety standards addressed certain recommendations made by the task force, according to an analysis performed by PHMSA. However, PHMSA did not require operators to implement one key recommendation of the task force report with its minimum standards, according to PHMSA officials. In particular, the October 2016 task force report recommended that operators phase out most storage wells with single-point-of-failure designs—where the failure of a single component, such as a well casing, could lead to a large release of gas—by installing multiple points of control at each well. According to an API official, its recommended practices do not direct operators to phase out such wells because this practice may not significantly improve safety in all cases; for example, this practice may not have prevented the leak at Aliso Canyon. The API official and PHMSA officials noted that API recommended practices direct operators to assess the risks at their sites and to take steps to address these risks. According to PHMSA officials, assessing the risks of a site could include identifying wells with a single point of failure and developing steps to mitigate this risk. Mitigating the risk could include installing multiple points of control for certain wells, among other possible mitigation steps. Neither PHMSA nor API officials could tell us how many of the approximately 17,000 wells at the nation’s 415 natural gas storage sites have single-point-of-failure designs, because this information has not been centrally gathered to date. However, PHMSA plans to gather information about how many storage wells have single-point-of-failure designs by asking operators to provide this information as part of a required annual report. To fund its enforcement of its minimum safety standards, PHMSA also issued a notice to set the user fees that PHMSA charges operators, as required by the PIPES Act. 
In November 2016, PHMSA published a notice of agency action and request for comment, describing its user fee structure. PHMSA collected public comments, evaluated them, and finalized its user fee structure in April 2017. As set forth in this notice, PHMSA will charge each operator based on the size of the operator's storage sites as measured by working gas capacity range. The notice stated that PHMSA plans to collect a total of up to $8 million annually in fees from all operators combined; however, PHMSA may seek authority to increase or decrease the amount it charges operators if it finds that the cost of inspection and enforcement is more or less than it initially estimated, according to PHMSA officials. Following enactment of an appropriations act provision, PHMSA is authorized to use the fees it collects to fund its enforcement activities and plans to use a portion of the fees to reimburse states for enforcing its minimum safety standards, according to PHMSA officials. Table 1 provides a timeline of key events in the development of PHMSA's minimum safety standards.

PHMSA Is Considering Comments on Its Interim Final Rule and Plans to Issue Final Safety Standards in January 2018

Since issuing its interim final rule, PHMSA has been collecting public comments and plans to adjust some aspects of the rule in response to comments from the public, industry representatives, and others. PHMSA plans to finalize its minimum safety standards by replacing its interim final rule with a final rule in January 2018, and has delayed some dates for when it expects operators to comply with some aspects of its standards. PHMSA's interim final rule states that, with respect to incorporation by reference of the standards, the nonmandatory provisions it adopted are adopted as mandatory provisions.
API and two other organizations representing natural gas utilities and transmission companies submitted comments asking PHMSA to reconsider how it used the API recommended practices in its minimum safety standards. While API and the other industry representatives agreed that it was appropriate for PHMSA to use API recommended practices for its minimum safety standards, they stated that making all portions mandatory would make the standards burdensome.

In June 2017, PHMSA published a notice in the Federal Register stating that it would consider these comments as it finalized its minimum safety standards, which it stated it expects to issue by January 2018. The notice stated further that PHMSA will not issue any enforcement citations to operators for failure to meet any standards that were nonmandatory but that were converted to mandatory by provisions of the interim final rule until 1 year after it issues the final rule. PHMSA also provided additional guidance and clarifications to operators about scheduling and its plans for enforcement.

During the development of its interim final rule, PHMSA noted that some of the provisions in the minimum safety standards may take operators several years to fully implement. According to PHMSA officials, these provisions recommend that operators carefully inspect their natural gas storage sites, identify any conditions that do not meet industry-recommended practices, and then improve conditions at the sites by prioritizing the greatest risks and implementing preventative measures to mitigate and remediate these risks over a number of years. As a result, PHMSA published guidance on its website stating that it expects operators to make and implement plans to inspect and remediate risks found at their sites within 3 to 8 years following the effective date of the interim final rule.
PHMSA Has Taken Steps to Establish an Enforcement Program but Has Not Yet Followed Certain Leading Practices of Strategic Planning

To enforce PHMSA’s safety standards, agency officials have taken a variety of steps to establish a safety enforcement program for natural gas storage sites, but they have not yet followed certain leading practices of strategic planning in starting the program. Specifically, PHMSA officials have started developing a training program for natural gas storage inspectors. They also have established a strategic goal and begun developing a training performance goal for their natural gas safety enforcement program. However, they have not yet followed certain leading practices for strategic planning—the systematic process for defining desired outcomes and translating this vision into goals and steps to achieve them. For example, PHMSA’s training performance goal does not define the level of performance officials hope to achieve or address all core program activities, such as conducting effective inspections. In addition, PHMSA has not used baseline data or budgetary information to inform the development of performance goals. PHMSA officials explained that they are still developing performance goals for their new program and collecting relevant data.

PHMSA Has Taken Steps to Establish a Natural Gas Storage Safety Enforcement Program

To enforce the agency’s safety standards, PHMSA officials have taken a variety of steps to establish a safety enforcement program for natural gas storage sites by January 2018. For example, PHMSA officials have started developing a training program for natural gas storage inspectors. They have identified learning objectives for the program and have begun developing learning materials.
According to PHMSA officials, developing a training program for inspectors is central to safety enforcement efforts, in part because PHMSA has a limited number of staff members with expertise in natural gas storage. For example, PHMSA had 10 employees with natural gas storage experience as of August 2017, according to PHMSA officials. In addition, PHMSA officials have completed eight safety assessments of selected natural gas storage operators to document the initial condition of gas storage sites and safety practices. According to PHMSA officials, their methodology for conducting these assessments involved visiting a cross section of operators, including operators of interstate and intrastate sites and multiple types of facilities. PHMSA officials also have developed workload and budget estimates for their new program, according to PHMSA documentation. In recent years, the Office of Pipeline Safety, which will be responsible for natural gas storage inspections in addition to pipeline inspections and other activities, has initiated about 1,100 inspections annually, according to PHMSA data. When natural gas storage site inspections begin, PHMSA officials estimate that the Office of Pipeline Safety’s inspection workload could increase 14 percent due to their new responsibilities. They reached this estimate by dividing the 203 new natural gas storage units they anticipate needing to inspect by the total number of inspection units they currently inspect. To meet the demands of this increased workload, officials estimate that PHMSA will need $2 million annually to fund 6 new inspector positions, training, travel, and other expenses associated with managing the natural gas storage safety enforcement program. With this number of inspectors, PHMSA officials believe that they can inspect all 203 natural gas storage units within about 4 years. 
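The workload estimate described above is simple enough to sketch as a quick arithmetic check. This report does not state the current total of inspection units, so the `existing_units` figure below is a hypothetical value chosen only to be consistent with the reported 14 percent increase:

```python
# Quick check of PHMSA's reported workload estimate.
new_units = 203          # new natural gas storage inspection units (from this report)
existing_units = 1450    # HYPOTHETICAL current inspection-unit total (not in this report)

increase_pct = 100 * new_units / existing_units
print(f"estimated workload increase: {increase_pct:.0f}%")   # about 14%

# Officials estimate 6 new inspectors and $2 million annually, covering
# all 203 units in about 4 years.
units_per_year = new_units / 4
print(f"implied inspection pace: {units_per_year:.0f} units per year")
```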
Because PHMSA officials expect that many states that have previously conducted similar inspections will help PHMSA conduct inspections, officials also estimate that PHMSA will need to provide $6 million annually to states. However, PHMSA officials noted that their estimates may change as they gain additional information about the program. Specifically, after PHMSA begins initial inspections in early 2018, officials will have more information about the time it takes to inspect natural gas storage sites. By the end of fiscal year 2018, they will have even more information with which to develop more precise workload and budget estimates for the program, according to these officials. To ensure that the states assisting PHMSA are fully qualified to enforce the federal government’s minimum safety standards, PHMSA officials have begun developing a state certification program. This has involved drafting certification documents and contacting potential state partners. As of June 2017, PHMSA officials expected all states with intrastate natural gas storage sites to pursue certification. However, officials explained that they may not know until the end of fiscal year 2017 exactly how many states will pursue certification. If some states choose not to pursue certification or are not approved by PHMSA, PHMSA will be responsible for inspecting natural gas storage sites in those states, which could increase its inspection workload beyond the level it has estimated. For states that choose certification and are approved, PHMSA plans to use grants to fund up to 80 percent of state inspection costs. However, PHMSA officials told us that PHMSA may not be able to fund states to this level, depending on the approved costs requested by all states and levels of funding PHMSA receives through the appropriations process. 
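The 80 percent federal cost-share described above implies a simple split between grant funding and state matching funds. The sketch below uses a hypothetical $500,000 state program cost, a figure not taken from this report:

```python
# Sketch of the grant cost-share for certified state partners.
FEDERAL_SHARE_CAP = 0.80   # PHMSA plans to fund up to 80 percent of state costs

def grant_split(state_program_cost, federal_share=FEDERAL_SHARE_CAP):
    """Return (federal grant, state match) for a state's approved program cost."""
    federal = state_program_cost * federal_share
    return federal, state_program_cost - federal

# HYPOTHETICAL example: a $500,000 state inspection program.
federal, match = grant_split(500_000)
print(federal, match)   # 400000.0 100000.0
```

Because actual grant levels depend on the approved costs requested by all states and on appropriations, the federal share in practice may fall below the 80 percent cap.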
In either circumstance, PHMSA’s grant program for certified state partners leverages state dollars, since it requires states to fund the portions of their programs not covered by grant funding.

PHMSA Has Established a Strategic Goal but Has Not Yet Followed Certain Leading Practices of Strategic Planning

PHMSA also has established a strategic goal for its natural gas safety enforcement program, but it has not yet followed other leading practices for strategic planning. Specifically, PHMSA officials told us that their new enforcement program will be guided by one of PHMSA’s existing strategic goals—to promote continuous improvement in safety performance. PHMSA officials also told us that they are developing a performance goal for their training program and that other performance goals are still being identified and developed. The Government Performance and Results Act of 1993 (GPRA), as amended—which seeks to improve the effectiveness of federal programs by establishing a system for agencies to set goals for program performance and measure results—defines a performance goal as the target level of performance expressed as a tangible, measurable objective against which actual achievement is to be compared. For example, in the area of weather forecasting, we have previously reported that such a goal could be to increase the lead time for predicting tornadoes from 7 to 9 minutes. PHMSA has not yet followed certain leading practices for strategic planning, as it has not: (1) defined the level of performance or fully addressed core program activities with its existing performance goal; or (2) used baseline data and other data or budget information to inform and refine performance goals.

Defining Level of Performance and Addressing All Core Program Activities

Our prior work has identified several leading practices for strategic planning that PHMSA has not yet followed, such as setting goals that define a certain level of performance and address all core program activities.
Some of this prior work has examined requirements under GPRA and the GPRA Modernization Act of 2010. GPRA, which was significantly enhanced by the GPRA Modernization Act of 2010, requires agencies to develop annual performance plans that, among other things, establish performance goals to define the level of performance to be achieved. We have previously reported that requirements under these acts can serve as leading practices for planning at lower levels of the agency. As one of several operating administrations within DOT, PHMSA would be considered a lower level of the agency. In addition, we have found that a key attribute of successful performance measures is that they reflect the full range of core program activities. Moreover, we have found that a key practice for helping federal agencies enhance and sustain collaborative efforts with other agencies is to define and articulate a common outcome or purpose they are seeking to achieve. While PHMSA has taken some steps to plan strategically for its new program, it has not followed certain leading practices of strategic planning. For example, PHMSA has developed a performance goal for its training program, and agency officials told us that they plan to review the number of students who pass their gas storage training course as a measure of the agency’s training performance goal. However, with this measure PHMSA has not defined the level of performance to be achieved. An example of a measure of the agency’s training performance goal that defines the level of performance could be one that specifies that a certain percentage of students will pass the course on their first attempt. In addition, PHMSA has not yet developed performance goals for other core program activities, such as conducting effective inspections. 
According to PHMSA subject-matter experts, one of the critical tasks associated with inspecting a gas storage site will be determining whether the operator has met all well monitoring requirements specified in API’s Recommended Practice 1171, which addresses the functional integrity of gas storage in depleted hydrocarbon reservoirs and aquifers. An example of a performance goal that could indicate whether PHMSA’s inspections are effective could be to annually reduce, by a certain percentage, the number of operators that do not meet the well monitoring requirements of Recommended Practice 1171. Another critical task identified by PHMSA’s subject-matter experts will be to determine whether the operator has followed its own risk management plan for gas storage sites—another area where PHMSA has not developed a performance goal. An example of a performance goal in this area could be to annually reduce, by a certain percentage, the number of gas storage operators that have not followed their own risk management plans. PHMSA officials acknowledged that their performance goals are not yet complete and said that they would strive to refine them as they continue developing the program, but they have not yet done so. As they refine these goals, ensuring that the goals define the level of performance to be achieved and address core program activities could help PHMSA effectively track progress toward its strategic goal and make adjustments to activities and resources, if needed, to better meet the goal. In addition, because PHMSA plans to leverage state resources to oversee gas storage sites, the success of its gas storage program will depend, in part, on collaboration with state partners. Establishing performance goals for the program could help PHMSA coordinate efforts and resources with the states that are expected to assist PHMSA with inspections.
Using Baseline Data to Inform Performance Goals

Another leading practice of strategic planning involves using baseline and trend data to inform performance goals, according to our prior work. Baseline data—data collected about operations before oversight begins—can serve as a basis for comparison with subsequently collected trend data. We have previously reported that baseline and trend data can provide a context for drawing conclusions about whether performance goals are reasonable and appropriate. For example, we found in 1999 that the Department of Education was able to use such information to gauge the appropriateness of its goals for reducing the default rate on student loans provided through the Federal Family Education Loan program. The program’s annual plan provided baseline and trend data for the default rate, which indicated that the rate declined from 22.4 percent to 10.4 percent from fiscal years 1990 to 1995. According to Education’s analysis of the data, future declines were likely to be steady but smaller because of the large number of high-default schools that had already been eliminated from the program. For fiscal year 1999, Education set a goal of reducing the default rate to 10.1 percent of borrowers. For PHMSA’s natural gas storage program, PHMSA will have access to baseline data—and eventually trend data—over time that could inform the development of performance goals and subsequent refinement of them. PHMSA officials told us that they have not yet used such data to inform the development of their performance goal because they are still in the process of collecting relevant data. For example, officials told us that, over time, they will have access to data about operators’ facilities, functional integrity work, and operations and maintenance procedures starting in early 2018. These data will likely include the number of wells that have leaked and been repaired during the last calendar year.
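As a rough illustration of how baseline and trend data can inform a performance goal, the Education default-rate figures cited above can be compared with the fiscal year 1999 target:

```python
# Default-rate figures from the Education example cited above.
baseline_1990 = 22.4   # percent, fiscal year 1990
rate_1995 = 10.4       # percent, fiscal year 1995
goal_1999 = 10.1       # percent, fiscal year 1999 target

historical_decline = (baseline_1990 - rate_1995) / (1995 - 1990)
implied_decline = (rate_1995 - goal_1999) / (1999 - 1995)

print(f"historical decline: {historical_decline:.2f} points per year")
print(f"decline implied by the goal: {implied_decline:.2f} points per year")
# The goal implies a much smaller annual decline than the historical trend,
# consistent with Education's analysis that future declines would be smaller.
```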
As specified in PHMSA’s minimum safety standards, PHMSA also plans to collect safety and incident reports to track gas releases, deaths, and injuries resulting in hospitalizations. In addition, in August of 2017, PHMSA officials completed eight industry safety assessments, which involved visiting natural gas storage sites and studying sites’ safety procedures. As previously mentioned, these assessments aimed, in part, to document the initial condition of gas storage sites and safety practices. Agency officials told us that they had planned to use the data they collect from these assessments to inform the agency’s state certification and inspection programs. They did not specify whether or how they intend to use these data to inform their performance goals. As PHMSA continues developing performance goals for its natural gas storage program, using available data to inform and refine these goals could help the agency ensure that its goals are reasonable and appropriate.

Using Budgetary Information to Inform Performance Goals

We also have reported that comparing information about budgetary resources with information about performance goals can help decisionmakers determine whether their performance goals are achievable. Specifically, we have reported that decisionmakers can better compare planned levels of accomplishment with the resources requested if they have information about how funding levels are expected to achieve a discrete set of performance goals. For example, we reported in a best practices report about strategic planning that the Internal Revenue Service (IRS) included in its performance plan for 1999 the budget amounts that corresponded with past performance levels. Table 2 illustrates how IRS used this information to inform proposed performance levels for the upcoming year.
Moreover, GPRA requires agencies to prepare an annual performance plan covering each program activity set forth in the budget and, among other things, describe the resources required to meet performance goals. As previously mentioned, we have found that GPRA requirements can serve as leading practices for planning at lower levels of the agency. Assessing whether the new program’s performance goals are achievable given budgetary resources is important at a time when PHMSA officials are managing other new resources and responsibilities. For example, in addition to requiring DOT to establish minimum safety standards for natural gas storage sites, the PIPES Act of 2016 also requires DOT to update minimum safety standards for small-scale liquefied natural gas pipeline facilities. To carry out its responsibilities, PHMSA has received additional resources in recent years. As shown in figure 3, PHMSA’s Pipeline Safety Program has seen its total budgetary resources available increase from about $95 million in fiscal year 2007 to about $175 million in fiscal year 2016. In addition, the Consolidated Appropriations Act for fiscal year 2017 included a provision allowing for the obligation of up to $8 million from fees collected in fiscal year 2017 from operators for PHMSA’s natural gas storage program. These fees will be deposited in an Underground Natural Gas Storage Facility Safety account within PHMSA’s Pipeline Safety Fund and will be added to the Pipeline Safety Program’s total budgetary resources available for fiscal year 2017. PHMSA is not yet in a position to use budget information to inform or refine performance goals for its natural gas storage program because PHMSA officials are still developing these goals and PHMSA lacks key data, such as data on the time it takes—and therefore the budgetary resources required—to inspect natural gas storage sites. 
As previously mentioned, PHMSA will begin inspections in early 2018, and officials will have a better understanding of how long it takes to inspect natural gas storage sites by the end of fiscal year 2018. As PHMSA officials continue developing performance goals and finish collecting relevant data, using information about budgetary resources to inform and refine these goals may help PHMSA ensure that its goals are achievable.

Conclusions

Natural gas storage sites are key elements of our nation’s energy system, helping ensure that natural gas is available when demand peaks. As evidenced by the large-scale leak of natural gas outside Los Angeles that started in 2015 and extended into 2016, leaks from these sites can cause economic disruptions and environmental damage. These sites recently became subject to national safety standards, which remain subject to further revision. PHMSA has taken a variety of steps to meet its new responsibilities for overseeing natural gas storage sites, such as developing a training program for inspectors and a performance goal for training. However, PHMSA has not yet followed certain leading practices of strategic planning in starting its new safety enforcement program. For example, PHMSA’s only current performance goal does not define the level of performance officials are working to achieve, and PHMSA does not currently have goals that address other core program activities, such as conducting effective inspections. PHMSA also has not yet used the baseline data it is collecting to develop its performance goals. PHMSA officials explained that they are still developing performance goals for their new program and collecting data. As the agency continues to develop these goals, ensuring that performance goals define the level of performance and address all core program activities could help the agency better track progress toward its strategic goal and adjust activities and resources, if needed, to better meet the goal.
Using baseline data to develop these goals could help PHMSA ensure that its goals are reasonable and appropriate. Finally, once PHMSA finalizes performance goals for the program and collects relevant data over time as well as budgetary information, using these data and information when available to inform and refine performance goals may help PHMSA ensure that its goals are achievable.

Recommendations for Executive Action

We are making the following two recommendations to PHMSA.

The Administrator of PHMSA should ensure that PHMSA defines levels of performance, addresses core program activities, and uses baseline data as it continues developing performance goals for its natural gas storage program. (Recommendation 1)

The Administrator of PHMSA should ensure that PHMSA uses other data and information about budgetary resources as they become available to inform and refine its performance goals. (Recommendation 2)

Agency Comments

We provided a draft of this report to DOT for review and comment. In written comments, DOT concurred with the report’s recommendations and provided additional information on steps it is taking or plans to take as part of its oversight of natural gas storage sites. In addition, DOT stated that it would provide a detailed response to each recommendation within 60 days of our final report’s issuance. The complete comment letter is reproduced in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact us at (202) 512-3841, [email protected], or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

In this report, we examine (1) the status of the Pipeline and Hazardous Materials Safety Administration’s (PHMSA) efforts to implement the requirement under the Protecting Our Infrastructure of Pipelines and Enhancing Safety (PIPES) Act of 2016 to issue minimum safety standards for natural gas storage sites, and (2) the extent to which PHMSA has planned strategically to enforce its safety standards for natural gas storage sites. To examine the status of PHMSA’s efforts to implement the requirement to issue minimum safety standards for natural gas storage sites, we examined laws, regulations, and agency documents that describe the authority, time frames, and enforcement goals for implementing new federal rules under the PIPES Act. Specifically, we reviewed the PIPES Act to identify requirements that the act directed to the Department of Transportation (DOT), or PHMSA. To understand PHMSA’s implementation of DOT’s requirements under the act, we reviewed PHMSA notices and regulations as presented in the Federal Register and discussed the information in these documents with agency officials. We also reviewed guidance documents on the PHMSA website intended to provide natural gas storage operators with more detailed guidance and discussed the documents with agency officials. We reviewed an October 2016 report, mandated by the act, which was issued by a task force led by the Department of Energy (DOE). We also obtained and reviewed copies of recommended practices issued by the American Petroleum Institute (API), which issues industry consensus standards for the oil and gas industry, and interviewed API officials to better understand these recommended practices. We also interviewed agency officials.
Specifically, we interviewed officials with PHMSA, the Federal Energy Regulatory Commission, the Bureau of Land Management within the Department of the Interior, and the Environmental Protection Agency to understand how they participated in the task force and to what degree they have responsibilities related to natural gas storage safety enforcement. In addition, we obtained data from PHMSA and DOE’s Energy Information Administration about natural gas storage sites to gain an estimate of the number and regulatory status of various natural gas storage sites, their locations, and other details. We assessed the reliability of these data by (1) corroborating these data with other sources, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report. We also interviewed agency officials at DOT and PHMSA, including discussing agency requirements under the PIPES Act and how PHMSA planned to implement its responsibilities. To better understand the operation and control of natural gas storage sites, we conducted a site visit to the Aliso Canyon Gas Storage Facility in California and spoke to officials representing the operator of the site and state government officials responsible for safety enforcement at the site. To examine the extent to which PHMSA has planned strategically to enforce safety standards for natural gas storage sites, we compared information we gathered from PHMSA officials and documents with leading practices for strategic planning identified in our prior work, including work examining requirements under the Government Performance and Results Act of 1993 (GPRA). We have previously reported that requirements under GPRA and the GPRA Modernization Act of 2010 can serve as leading practices for planning at lower levels of the agency.
We also interviewed PHMSA officials—including budgetary, policy, and programmatic officials—about their planning efforts for the natural gas storage program. In addition, we reviewed regulations and documents that reflect agency planning efforts, including: PHMSA’s interim final rule on the safety of underground natural gas storage facilities; agency guidance, such as frequently asked questions for operators of natural gas storage sites; and agency planning documents, such as the Training Implementation Plan for Natural Gas Underground Storage Regulation Training, PHMSA 2021 Business Plan - 2017, and workload and budget estimates for the program. Using information obtained from these sources about PHMSA’s efforts to plan for its natural gas storage program, we compared PHMSA’s planning efforts with leading practices for strategic planning identified in our prior reports. We conducted this performance audit from November 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Number of Active Natural Gas Storage Sites as of January 2016 by State and Jurisdiction

Table 3 identifies the 415 natural gas storage sites active as of January 2016, by state and jurisdiction. The number of natural gas storage sites that fall under federal or state jurisdiction in each state is presented, along with the total storage capacity of the sites. A natural gas storage site is considered to be under federal jurisdiction—also known as “interstate”—if the site is linked to a federally-regulated interstate pipeline permitted by the Federal Energy Regulatory Commission. Otherwise, sites are under state jurisdiction.
The sites represented in this table were compiled by the Department of Energy’s Energy Information Administration (EIA) in 2016, and provided by the Department of Transportation’s Pipeline and Hazardous Materials Safety Administration (PHMSA). EIA collects these data using a survey of natural gas storage site operators. According to a PHMSA document, PHMSA used these data to, among other things, identify natural gas storage sites and calculate the amount of user fees that it charged operators in 2017 (the first year PHMSA collected these user fees) to fund its inspection and enforcement programs. PHMSA plans to update its information about natural gas storage sites using data submitted by operators, as required by its interim final rule. This rule requires natural gas storage site operators to submit these data on or before July 18, 2017. PHMSA plans to require operators to annually submit this information using a form. According to PHMSA officials, the Office of Management and Budget recently approved this form. As a result, PHMSA will begin collecting data that reflect calendar year 2017 by its due date of March 15, 2018. PHMSA officials told us that it will take about 5 to 6 months to develop a website that will allow PHMSA to efficiently collect these data from operators for all sites this year and in future years.

Appendix III: Comments from the Department of Transportation

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the individuals named above, Mike Hix and Jon Ludwigson (Assistant Directors), Richard Burkard, Lee Carroll, Nirmal Chaudhary, Ellen Fried, Cindy Gilbert, Carol Henn, Mary Koenen, Jessica Lemke, Ben Licht, Greg Marchand, John Mingus, Katrina Pekar-Carpenter, Sara Sullivan, and Kiki Theodoropoulos made important contributions to this report.
Why GAO Did This Study

Natural gas storage is important for ensuring that natural gas is available when demand increases. There are 415 storage sites—including underground caverns and depleted aquifers and oil and gas reservoirs—located in 31 states, often near population centers (see fig.). Leaks from these sites, such as one near Los Angeles that led to the temporary relocation of about 8,000 families in 2015, can result in environmental and economic damage. Until 2016, states set standards for 211 sites, but there were no standards for 204 sites connected to interstate pipelines subject to federal jurisdiction. With passage of the PIPES Act of 2016, PHMSA, an agency within DOT that sets and enforces standards for energy pipelines, among other things, was tasked with issuing minimum standards for all gas storage sites. GAO was asked to review natural gas storage safety standards. This report examines (1) PHMSA's efforts to implement the requirement to issue minimum safety standards for natural gas storage sites and (2) the extent to which PHMSA has planned strategically to enforce its safety standards for these sites. GAO reviewed PHMSA documents and plans, compared them to leading planning practices, and interviewed PHMSA officials.

What GAO Found

To meet its requirement under the Protecting Our Infrastructure of Pipelines and Enhancing Safety (PIPES) Act of 2016, the Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA) issued minimum safety standards in an interim rule and plans to finalize them by January 2018. Under the interim standards, site operators are to follow industry-developed best practices to detect and prevent leaks and plan for emergencies, among other things. Since the interim rule went into effect in January 2017, the minimum safety standards apply to all 415 natural gas storage sites, and the rule will be subject to further revision before it is final.
To enforce its safety standards, PHMSA has taken steps to establish a natural gas storage safety enforcement program. For example, PHMSA has started developing a training program for its inspectors. PHMSA also has identified a strategic goal for its program—to promote continuous improvement in safety performance—and is developing a performance goal for its training program. However, PHMSA has not yet followed certain leading strategic planning practices. For example, PHMSA has not yet defined the level of performance to be achieved, fully addressed all core program activities, or used baseline data to develop its performance goal. GAO has previously reported that requirements under the Government Performance and Results Act (GPRA) and GPRA Modernization Act of 2010—which include establishing performance goals to define the level of performance—can serve as leading practices for lower levels of an agency, such as PHMSA. GAO also has found that successful performance goals address all core program activities. PHMSA's goal focuses on training and does not address other core program activities, such as conducting effective inspections. For example, a goal to evaluate whether PHMSA's inspections are effective could be to annually reduce, by a certain percentage, the number of sites not meeting minimum standards. PHMSA officials told GAO that they will strive to add and refine performance goals as the program evolves. As they do so, ensuring that these goals define the level of performance, address all core program activities, and use baseline data could help PHMSA better track progress toward its strategic goal.

What GAO Recommends

GAO is making two recommendations, which are that PHMSA (1) define levels of performance and address all core program activities and (2) use budget data to refine performance goals for its gas storage program. DOT concurred with GAO's recommendations.
gao_GAO-18-561
Background
Columbia River Basin
The Columbia River Basin is the fourth largest river basin in the United States and covers parts of seven states and British Columbia, Canada. It provides drainage for hundreds of rivers, creeks, and streams. More than 6 million acres of the Basin are irrigated agricultural land, and the Columbia River and its tributaries produce more hydroelectric power than any other North American river. The Columbia has 12 major tributaries, with the longest being the Snake River. The Columbia River itself flows more than 1,200 miles from its source in the Canadian Rockies to the Pacific Ocean, with the last 300 miles forming the border between the states of Oregon and Washington. The Basin has myriad dams and reservoirs—more than 250 reservoirs and approximately 150 other hydroelectric projects, including more than 35 major federal and nonfederal dams on the Columbia River and its major tributaries in the United States. For more details, see figure 1. The Basin provides environmental, economic, and social benefits to many public and private interests and is vital to many industries in the Pacific Northwest, including sport and commercial fisheries, agriculture, forestry, transportation, recreation, and electrical power generation. However, activities from these industries have affected the environment in the Basin and, among other impacts, impaired water quality in some areas to the point where human health is at risk and historic salmon and steelhead stocks are at risk or extinct. Under the Clean Water Act, states have identified many Columbia River tributaries, the Columbia River itself, and its estuary as impaired.
Major sources of impairment to water quality include pollutant run-off from agricultural activities and storm-water run-off from impermeable surfaces (e.g., paved parking lots and roads); habitat modification due to the hydroelectric dams and their associated reservoirs; legacy toxic contaminants, such as mercury and PCBs; and contaminants of emerging concern, such as discarded pharmaceuticals. In addition, EPA Superfund sites are located throughout the Basin and may have negatively affected water quality in locations such as Portland Harbor in Oregon, the Hanford Site in Washington, and the Upper Columbia River at Lake Roosevelt in Washington. Figure 2 shows some sources that may lead to impairment of the Basin, including point and nonpoint sources of pollution. In the early to mid-1990s, the states of Washington and Oregon sponsored monitoring studies that identified dozens of sites in the lower reaches of the Columbia River where contaminants exceeded water quality standards for the presence of pesticides, toxic metals, and cyanide, among other findings. Further, in 1992, an EPA survey of contaminants in fish reported a potential health threat to tribal members and other people who eat fish from the Basin. More recently, a 2009 EPA report summarized findings contained in studies by USGS and NMFS (in conjunction with the University of California-Davis). The report noted that significant levels of toxic chemicals were found in fish and the waters they inhabit, including toxics banned from use since the 1970s, such as dichlorodiphenyltrichloroethane (commonly known as DDT) and PCBs, as well as emerging contaminants, such as chemicals used for flame retardants. These findings have led states to periodically issue fish, and in some cases shellfish, advisories throughout the Basin warning the public not to consume more than specified quantities of contaminated aquatic species or, in some cases, to avoid consuming them altogether.
In addition to potential human health impacts, other studies have found that some contaminants have negative impacts on fish and wildlife populations in the Basin. Since the 1990s, fewer sites in the Basin have been monitored for water quality, and investment in such monitoring has decreased, according to an EPA official. For example, according to staff from the Lower Columbia Estuary Partnership, monitoring sites on the mainstem lower Columbia River have decreased over time, and currently only one site is being monitored for toxics.
Selected Legislation Related to Water Quality in the Columbia River Basin
The Clean Water Act and Endangered Species Act are the primary federal statutes driving many of the restoration efforts in the Columbia River Basin. A range of other laws, treaties, court decisions, and authorities also serve to create requirements for entities to implement restoration efforts in the Basin. Clean Water Act: The Clean Water Act was enacted in 1972 to “restore and maintain the chemical, physical, and biological integrity of the nation’s waters.” It establishes the basic structure for setting surface water quality standards, as well as for regulating discharges of pollutants into the waters of the United States, and provides various regulatory and non-regulatory tools for doing so. Under the Clean Water Act, EPA may allow states under certain circumstances to implement their own clean water programs and to enforce their requirements. EPA establishes by regulation the requirements for state enforcement authority, such as the authority to seek injunctive relief and civil and criminal penalties. National Estuary Program: In 1987, amendments to the Clean Water Act added Section 320, which established the National Estuary Program to promote comprehensive planning for, and conservation and management of, nationally significant estuaries, among other things.
EPA oversees the program and has designated 28 estuaries as being of national significance, including the Lower Columbia Estuary. Based on this designation, in 1995 EPA and the governors of Washington and Oregon established the Lower Columbia Estuary Partnership. The Partnership works with federal, state, tribal, local, and nongovernmental entities to improve the lower Columbia River and its estuary by protecting and restoring ecosystems and enhancing clean water for current and future generations of fish, wildlife, and people. Under Clean Water Act Section 320, as the management conference for the estuary, the Lower Columbia Estuary Partnership is required to develop and implement a comprehensive conservation and management plan (CCMP) to restore and maintain the chemical, physical, and biological integrity of the estuary, including water quality. The CCMP for the lower Columbia River estuary covers the lower 146 miles of the Columbia River and its associated tributaries, or about 7 percent of the Basin overall, and is intended to reflect a scientific characterization of, and stakeholder concerns about, the estuary, including its water quality, habitats for animal and plant life, and other resource challenges. Figure 3 shows the area covered by the Lower Columbia Estuary Partnership’s CCMP. Clean Water Act Section 123 on Columbia River Basin Restoration: The Water Infrastructure Improvements for the Nation Act of 2016 amended the Clean Water Act by adding Section 123 on Columbia River Basin Restoration. The law requires EPA to establish the Columbia River Basin Restoration Program, which is to be a collaborative stakeholder-based program for environmental protection and restoration activities throughout the Basin. Legislation calling for establishment of a Columbia River Basin restoration program within EPA was introduced in 2010.
According to a Congressional committee report accompanying the bill, a main finding was that while EPA in 2006 recognized the Columbia River Basin as one of the nation’s large aquatic ecosystems and had in place an organizational structure to manage restoration efforts being implemented in the lower Columbia River estuary, there was no congressionally authorized program or dedicated appropriations to support the water quality restoration and toxic reduction efforts throughout the Basin. Section 123 directs EPA to assess trends in water quality in the Basin, collect and assess data on potential causes of water quality problems, develop a program to provide grants to various entities, and establish a voluntary interagency Columbia River Basin Restoration Working Group (Working Group). The law also requires the President’s annual budget submission to include an interagency crosscut budget prepared by OMB that displays, for each federal agency involved in the protection and restoration of the Columbia River Basin, funding amounts obligated for those purposes in the preceding fiscal year, the estimated budget for the current fiscal year, and the proposed budget for the next fiscal year for related activities at each agency. Figure 4 shows the requirements of Clean Water Act Section 123. Endangered Species Act: Enacted in 1973, the purpose of the Endangered Species Act is to protect and recover imperiled species and the ecosystems upon which they depend. It is jointly administered by the U.S. Fish and Wildlife Service (FWS) and NMFS. Generally, the FWS manages land and freshwater species, and NMFS manages marine species and anadromous fish, such as salmon. Under the Endangered Species Act, species may be listed as either endangered or threatened. In the Basin, numerous species have been listed, including 13 species of salmon and steelhead. 
Under Section 7 of the act, federal agencies are to ensure that any actions they authorize, fund, or carry out, whether on federal or private lands, do not jeopardize listed species. To fulfill this responsibility, the agencies often must formally consult with FWS or NMFS, which issues a biological opinion assessing whether the agency action is likely to jeopardize the continued existence of the species or result in destruction or adverse modification of critical habitat. For example, three federal agencies—the Corps, BPA, and Bureau of Reclamation—operate and manage federal dams and other hydroelectric facilities that comprise the Federal Columbia River Power System under a biological opinion NMFS issued in 2008. The biological opinion includes, among other measures, performance standards for the survival rate of fish migrating upstream or downstream past the associated dams and reservoirs. Additional required actions include those related to habitat restoration, predation management, and hatchery management to mitigate the adverse effects of the system, as well as numerous research, monitoring, and evaluation actions to support and inform adaptive management decisions. Large Aquatic Ecosystems: EPA has designated specific areas around the country as “large aquatic ecosystems.” Such ecosystems comprise multiple small watersheds and water resources within a large geographic area. Over the years, EPA has worked with other federal agencies, state and local governments, tribes, and others to develop specific geographic-based programs to protect and restore these areas, including the Chesapeake Bay and the Great Lakes. In 2006, EPA recognized the Columbia River Basin as a large aquatic ecosystem to help promote the development of new cooperative initiatives and efforts to improve water quality, remove contaminated sediments, restore native fish species, and preserve and restore aquatic habitat and ecosystems throughout the Basin.
In 2008, EPA’s Office of Water established a national Council of Large Aquatic Ecosystems to work within the agency and better support and promote efforts being implemented by the geographic-based programs to protect these large aquatic ecosystems. EPA incorporated strategic goals and objectives for most large aquatic ecosystems into its strategic plan for fiscal years 2006 through 2011 and into its national water program guidance. Over time, for the majority of these large aquatic ecosystems—such as the Chesapeake Bay, Great Lakes, Long Island Sound, and Puget Sound—EPA formally established dedicated program offices and received congressional appropriations specifically for restoration efforts in each large aquatic ecosystem’s geographic area. See figure 5 for the large aquatic ecosystems designated by EPA throughout the United States.
Entities Involved in Water Quality-Related Restoration Efforts in the Basin
Multiple entities conduct activities related to restoration efforts in the Basin, including federal agencies, state agencies, federally and non-federally recognized tribes, tribal organizations, and nongovernmental entities. Along with their primary water, power, resource, and other management and regulatory responsibilities, federal, state, and tribal entities are responsible under various laws, treaties, executive orders, and court decisions for protecting, mitigating, and enhancing fish and wildlife resources in the Basin, among other things. Eleven federal agencies, within six departments, are involved with water quality-related restoration efforts in the Basin. The departments and agencies, and their respective roles, include:
U.S. Department of Agriculture
Forest Service: Manages national forests and grasslands under the principles of multiple use and sustained yield.
Natural Resources Conservation Service (NRCS): Assists farmers, ranchers, and other landowners in developing and carrying out voluntary efforts to protect the nation’s natural resources.
U.S. Department of Commerce
NMFS: Conserves, protects, and manages living marine resources to ensure their continuation as functioning components of marine ecosystems and to afford economic opportunities; implements the Endangered Species Act for marine and anadromous species; and supports on-the-ground habitat restoration projects with funding and technical assistance.
U.S. Department of Defense
Corps: Designs, builds, and operates hydroelectric civil works projects in the Basin to provide electric power, navigation, flood control, and environmental protection.
U.S. Department of Energy: Addresses U.S. energy, environmental, and nuclear challenges through science and technology solutions, including clean-up of the former Hanford plutonium production site for nuclear weapons in Washington.
Bonneville Power Administration (BPA): Provides power and transmission services and markets the electricity generated by the Corps and Reclamation dams comprising the Federal Columbia River Power System.
U.S. Department of the Interior
Bureau of Land Management: Administers public lands and subsurface mineral resources under the principle of multiple use and sustained yield.
FWS: Manages wildlife refuges; conserves, protects, and enhances fish, wildlife, and plants; and implements the Endangered Species Act for terrestrial species, migratory birds, certain marine mammals, and certain fish.
Reclamation: Designs, constructs, and operates water projects for multiple purposes, including irrigation, hydropower production, municipal and industrial water supply, flood control, recreation, and fish and wildlife.
USGS: Conducts objective scientific studies and provides information to address problems dealing with natural resources, geologic hazards, and the effects of environmental conditions on human and wildlife health.
EPA: Protects human health and safeguards the natural environment by protecting the air, water, and land, including administration of the Clean Water Act.
Various Entities Implemented a Range of Restoration Efforts for Improving Water Quality in the Columbia River Basin from Fiscal Years 2010 through 2016
In response to our survey, various entities—federal and state agencies, tribes and tribal organizations, and nongovernmental entities—identified a range of restoration efforts they implemented related to improving water quality in the Columbia River Basin from fiscal years 2010 through 2016. Although there have been some plans to guide certain restoration efforts for parts of the Basin, there is no overall plan to guide water quality-related restoration efforts throughout the Columbia River Basin or a requirement for a federal agency or others to develop such a plan. We found that entities implemented their restoration efforts under a range of authorities and programmatic missions. At the federal and state levels, many of the restoration efforts were implemented as part of programs with a broader geographic scope than the Basin. For example, many of EPA’s efforts are part of programs that have a nationwide focus, such as the Clean Water Act Section 106 Water Pollution Control Grant Program, which provides grants to states, territories, interstate agencies, and eligible tribes to establish and administer water pollution control programs for the prevention, reduction, and elimination of pollution. Conversely, other restoration efforts have been implemented exclusively in the Columbia River Basin. For example, the Shoshone-Bannock Tribe’s Yankee Fork Restoration Program works to improve the floodplain and riparian zones along dredged sections of the Yankee Fork Salmon River. Appendix II provides a list of the restoration efforts implemented in the Columbia River Basin from fiscal years 2010 through 2016, based on entities’ responses to our survey. See table 1 for examples of a range of restoration efforts implemented by various entities in the Basin from fiscal years 2010 through 2016.
Based on responses to our survey, we found that entities implemented restoration efforts in the Columbia River Basin for a variety of purposes, such as improving surface water quality or reducing toxic pollutants. Specifically, our survey listed five purposes and asked entities to identify whether each was a primary purpose, secondary purpose, or not a purpose of the respective restoration effort. Overall, the most common primary purposes identified were improving surface water quality and restoring and protecting habitat. For example, the Forest Service identified monitoring surface water quality as the sole purpose for its Pacific Northwest Region Aquatic Inventory and Monitoring effort, which inventories and monitors watershed and stream habitat conditions to provide information and feedback to improve resource protection and restoration programs. Similarly, FWS identified restoring and protecting habitat as the primary purpose of its National Fish Habitat Partnership Pacific Region effort. This effort—part of a nationwide program—focuses on restoring aquatic habitat important to fish species of regional significance in the Columbia River Basin. See table 2 for the purposes identified in our survey and examples of associated restoration efforts. In addition, we found that restoration efforts implemented in the Columbia River Basin can directly or indirectly support improving water quality. For example, some restoration efforts directly support improving water quality, such as efforts whose primary purpose included monitoring surface water quality. Other restoration efforts indirectly support improving water quality. For example, NRCS’ Conservation Stewardship Program’s primary purpose is helping agricultural producers, ranchers, and forest landowners expand their conservation activities to enhance natural resources while simultaneously improving their operations. 
These efforts do not directly focus on improving water quality, but activities implemented through these efforts may indirectly improve water quality in the Columbia River Basin.
Entities Used Various Collaborative Approaches for Selected Restoration Efforts
We found that entities’ approaches to collaboration for selected water quality-related restoration efforts in the Basin from fiscal years 2010 through 2016 varied based on the specific circumstances of the given effort. This was in part because there is no overall coordinating body to guide water quality-related restoration efforts throughout the Columbia River Basin, nor was there a requirement, prior to the enactment of Section 123, for federal agencies or others to develop such a body. For example, certain efforts are required by law or regulation to use specific types of collaborative approaches (e.g., stakeholder review of proposed program plans), and other efforts that are voluntary in nature may use different approaches to engaging and maintaining collaborative efforts among relevant entities. For instance, the Washington State Department of Ecology and others developed the dissolved oxygen total maximum daily load (TMDL) for the Spokane River and Lake Spokane through a regulatory process that included public review and comment. In contrast, entities such as the Lower Columbia Estuary Partnership and the Columbia River Toxics Reduction Working Group sought the voluntary involvement of other entities through their mutual interest in a common outcome, in this case restoring the lower Columbia River estuary and reducing toxics in the Basin, respectively. In addition, based on responses to our survey, the majority of restoration efforts in the Basin involved multiple entities.
Specifically, for restoration efforts implemented in the Basin from fiscal years 2010 through 2016, respondents reported that approximately 71 percent of the efforts involved more than one entity and that approximately 29 percent were implemented solely by a lead entity. To highlight examples of collaborative approaches entities used for water quality-related restoration efforts, we selected five efforts for review. While these efforts are not generalizable to all restoration efforts in the Basin, they highlight specific collaborative approaches entities used for individual restoration efforts, as follows: Effort 1: The Corps Northwestern Division Reservoir Control Center Water Quality Program (2008-present) is a federally led effort designed to implement the 2008 Federal Columbia River Power System biological opinion, and collaboration is enabled through coordination meetings, facilitated by a neutral third party, to manage Corps project operations affecting water quality. For example, according to Corps guidelines, day-to-day coordination of Corps operations (e.g., voluntary water spill over dams) to meet the biological opinion’s requirements and comply with water quality standards occurs through biweekly or more frequent meetings of its operational-level interagency Technical Management Team. The team operates under institutionalized collaboration procedures that provide guidance for, among other things, membership, member roles and responsibilities, and procedures for meetings and decision making. According to agency documentation, meetings of the Technical Management Team are facilitated by an impartial contracted facilitator whose position is designed to enable team members the opportunity to fully participate in discussions and help members resolve conflicts as they arise. 
Effort 2: Washington State’s Spokane River & Lake Spokane Dissolved Oxygen TMDL (2004-present) is a state-led effort, regulatory in nature, and collaboration is enabled through an associated Foundational Concepts guiding document. Under the Clean Water Act, Washington State was required to develop a TMDL and associated water quality improvement plan for the Spokane River and Lake Spokane because the state identified several segments of these water bodies as having impaired water quality. In a 2004 draft TMDL, the state proposed phosphorus discharge requirements necessary for the river to meet the state’s water quality standards. However, not all entities responsible for point source pollution discharges believed that well-established technology existed that could achieve these requirements, according to the Foundational Concepts document. The state developed the document specifically to enhance and further enable a collaborative approach among the regulatory agencies and the pollution dischargers involved in revising and finalizing the TMDL, according to Washington State officials. The final TMDL document, issued in 2010, noted that technology was available that could bring current discharges much closer to the levels required by water quality standards, and that Washington State could develop a plan, approved by EPA, that would provide reasonable assurance that the standards could be achieved within 10 years. Effort 3: The Columbia River Toxics Reduction Working Group (2005-present) is an EPA-led effort, voluntary in nature, and collaboration is enabled by a joint executive statement signed in 2011. EPA developed the group—in conjunction with other relevant federal, state, tribal, local, and nonprofit partners—to better coordinate toxics reduction efforts in the Basin and to share related information within the context of each organization’s own roles and responsibilities.
Executives from the partner agencies, tribes, and organizations demonstrated their leadership commitment to the Columbia River Toxics Reduction Working Group’s efforts by signing the joint statement, which was designed to publicly highlight their commitment to partner in the collaborative efforts necessary to reduce toxics in the Basin. Effort 4: The Lower Columbia Estuary Partnership (1995-present) is an effort led by a nongovernmental organization, voluntary in nature, and collaboration is enabled through a management plan. The Partnership’s organizational purpose is to facilitate restoration efforts in the lower Columbia River estuary portion of the Basin by building on existing efforts, providing a regional framework for action, and filling gaps in understanding and planning, among other things. The Partnership’s CCMP guides the collaborative efforts of the Partnership and its associated stakeholders and identifies what the Partnership should be doing concerning regional coordination activities, as well as how such coordination should be pursued. Effort 5: The Confederated Tribes of the Umatilla Indian Reservation Fisheries Habitat Sub-Program (1987-present) is a tribal effort, sovereign in nature, and collaboration is enabled through the sub-program’s Umatilla River Vision guiding document. This fisheries habitat effort is designed to provide for sustainable harvest opportunities of aquatic species traditionally consumed by the Umatilla through protection, conservation, and restoration of related aquatic habitats, according to Umatilla tribal officials. The vision articulated by the tribe’s Fisheries Program is that the Umatilla Basin includes a healthy Umatilla River capable of providing sufficient quantities of the First Foods (i.e., water, salmon, deer, cous, and huckleberry) necessary to sustain the continuity of the tribe’s culture.
The Umatilla tribes developed the Umatilla River Vision to help identify existing gaps in knowledge and the work that must be accomplished to reestablish a healthy watershed and restore fisheries habitat on the Umatilla Reservation. Umatilla tribal officials we interviewed stated that the document is applicable to all Umatilla aboriginal lands and guides all their restoration efforts and coordination with other entities, including federal and state officials and funding partners. In addition, we obtained the views of officials from 11 federal agencies on factors that may enable and hinder collaboration in the Basin. In identifying factors that enabled collaboration in their implementation of specific restoration efforts, officials from the 11 federal agencies most often identified the following: (1) having pre-existing relationships with partners, such as through participation in interagency bodies; (2) having clearly defined roles and responsibilities and common outcomes for restoration efforts across partners; and (3) identifying resource needs and the sources of resources to be used for such efforts. The officials also identified potential actions that could enhance basin-wide collaboration for restoration efforts beyond their individual efforts. For example, one official responded that collaboration could be improved by involving senior-level officials in discussing and establishing priorities for basin-wide restoration, so that each entity could then implement efforts across the Basin in a manner consistent with the priorities agreed to by the senior leaders. Other officials noted that implementing this action would require individual agencies and entities to provide staff time and needed resources to enable collaboration on broader basin-wide priorities, consistent with each agency’s individual missions and goals.
To enhance collaboration on basin-wide restoration, an official also suggested proactively involving relevant entities through presentations and document reviews, allowing the entities to offer suggestions and identify any objections they may have to a given effort. In addition, a different official suggested implementing basin-wide restoration monitoring and evaluation to determine which efforts are working well, which are not, and how any given effort may need to change to more efficiently or effectively restore the Basin. The officials from the 11 federal agencies most often identified the following factors that hindered collaboration in their implementation of specific restoration efforts: (1) lack of sufficient resources, (2) incompatibility of policies and procedures across agencies, and (3) lack of clearly defined common outcomes for restoration efforts across partners. The officials also identified challenges to collaboration for basin-wide restoration beyond their individual efforts. Among other things, one federal official identified as a challenge the variability of missions, authorities, and priorities among various agencies and entities pursuing restoration efforts in the Basin. According to officials, these factors make it difficult to establish mutually agreeable end-goals and means for restoration because various entities have potentially competing interests based on each organization’s primary mission. Specifically, prioritizing certain restoration efforts over others—as may occur through adoption of a basin-wide restoration strategy or plan—may lead some entities to not participate in basin-wide restoration activities. According to other officials, this is because an entity is most likely to prioritize its own efforts, not the efforts of other entities.
Other challenges to basin-wide collaboration that officials cited included the litigation surrounding restoration efforts in the Basin (e.g., lawsuits regarding salmon and steelhead recovery under the Endangered Species Act) and the associated potentially adversarial relationships among entities, as well as limited staff time and resources for collaborating with other entities.
Entities Reported Using a Mix of Federal and Nonfederal Sources of Funding to Implement Restoration Efforts, but Total Federal Expenditures Could Not Be Determined
Entities responding to our survey reported that most of the restoration efforts they implemented in the Basin were supported through a mix of federal and nonfederal funding sources. For several reasons, we could not determine total federal expenditures to implement the restoration efforts identified through our survey. Instead, we collected data from five federal agencies (BPA, Corps, EPA, Forest Service, and USGS) to provide illustrative examples of federal water quality-related restoration expenditures in the Basin.
Entities Reported Most of Their Restoration Efforts in the Basin Were Implemented with a Mix of Federal and Nonfederal Funding Sources
Entities responding to our survey reported that most of their restoration efforts in the Basin were supported through a mix of federal and nonfederal funding sources. With respect to federal funding, responses to our survey indicated that nearly all of the restoration efforts identified through our survey received some level of federal funding. This includes funding appropriated to federal agencies for mission-driven activities that may have a primary purpose other than improving water quality and restoring the Basin. For example, according to agency officials, while improving water quality is not a primary mission of the Corps’ and Reclamation’s hydropower projects, maintaining compliance with water quality standards is a component of the operation and maintenance of these projects.
Similarly, multiple federal agencies are involved in efforts to recover species protected under the Endangered Species Act and restore habitats that have been affected by operations of the Federal Columbia River Power System, particularly eliminating barriers to fish passage, operating fish hatcheries, and monitoring water temperatures to promote fish survival rates; those efforts indirectly benefit water quality. Several of the federal efforts we identified in our review do not directly implement restoration activities but provide financial and technical assistance to support other entities’ implementation of restoration efforts. These efforts include: EPA’s Clean Water Act Section 319 Nonpoint Source Implementation Grants Program, under which EPA provides grants to states to implement and fund programs that address nonpoint source pollution; NRCS’s Regional Conservation Partnership Program, which provides financial incentives and technical assistance for eligible partners, such as agricultural producers, to implement voluntary conservation measures that address a range of natural resource management concerns, including water quality degradation and loss of fish and wildlife habitat; NMFS’s Community-Based Restoration Program, which awards funds and provides technical assistance to national and regional partners and local grassroots organizations to restore habitat; and FWS’s Partners for Fish and Wildlife Program, which provides financial and technical assistance to private landowners to protect or restore wetlands, uplands, and riparian and instream habitats. For example, in fiscal year 2016, NMFS’s Community-Based Restoration Program awarded about $900,000 in grant funds to The Nature Conservancy to support its restoration of 330 acres of floodplain habitat at the confluence of two forks of the Willamette River. This effort provides a range of benefits, including improved water quality, improved fish passage, and increased hydrologic connectivity. 
In addition, more than half of the restoration efforts identified through our survey were implemented with a mix of federal and nonfederal funding sources, including most of the state efforts. These sources include support through direct financial awards or indirect support through in-kind services. For example, Reclamation’s Pacific Northwest Water Quality Program provided cost-reimbursable services and technical support to stakeholders, such as state agencies and watershed councils, in the design and implementation of water quality improvement plans. Similarly, the Lower Columbia Estuary Partnership’s 2017 annual report noted that for each $1 in federal funding the partnership received from EPA, the partnership raised an additional $9 in funding solicited from other federal, state, and private sources. In 2017, the partnership brought in $7.6 million in direct funding, most of which supported projects implemented by local organizations and businesses to restore habitat, monitor restoration work, and support outdoor education initiatives. The partnership also estimated that in 2017, it received in-kind services from a range of contributors, such as scientists, technical experts, and community members who volunteered more than 18,000 hours of their time to implement various partnership activities. The partnership valued these in-kind services at nearly $430,000. Some programs, such as the Corps’ Aquatic Ecosystem Restoration program, do not provide funding to other entities but include specific cost-sharing requirements for project sponsors to secure contributions of nonfederal funding. For example, nonfederal project sponsors are required to provide 35 percent of the construction costs for projects implemented through the Corps’ program, which can include land easements, rights-of-way, and necessary relocations. 
Other programs, such as NRCS’s Regional Conservation Partnership Program, do not include matching requirements for nonfederal funding but work with partners to identify other funding sources to supplement federal funding awards. Total Federal Expenditures for Basin Restoration Efforts Could Not Be Determined While we were able to collect information about the general sources of funding that supported implementation of the restoration efforts in the Basin respondents identified in our survey, we could not determine the total amounts of federal expenditures for these efforts for several reasons. First, unlike efforts to restore other large aquatic ecosystems, there was no congressionally authorized program to protect and restore the Basin prior to 2016 or federal funding dedicated specifically for this purpose, according to EPA officials. In the absence of dedicated federal funding or a congressionally authorized program focused on restoring the Basin, agency data on water quality-related restoration expenditures in the Basin are not readily available. Second, because some of the efforts are supported with funding from national and statewide programs that have a broader geographic scope than the Basin, it can be difficult to identify the portion of program expenditures that were for activities located within the Basin. This includes national-level programs, such as the Forest Service’s National Best Management Practices Program and EPA’s Clean Water Act grant programs, as well as statewide water quality permit programs. For instance, officials we interviewed from the Washington State Department of Ecology explained that, because the state typically does not track expenditures by region or location and uses various methodologies to compile the information, it would be difficult to provide consistent and comparable estimates of expenditures for its statewide programs. 
Third, it can be difficult to determine how much of a program’s expenditures were for water quality-related restoration when the effort was implemented primarily for a different purpose or for multiple purposes that may indirectly contribute to improving water quality. Several entities that responded to our survey indicated that they do not track expenditures by activity and that it would be difficult to estimate the portion of spending on restoration-related efforts. For example, Forest Service officials told us that for its Integrated Resource Restoration program, it is difficult to track expenditures for specific restoration activities when the funding goes toward multiple objectives, such as vegetation management and wildlife species, in addition to water quality and aquatic resources. While data on total federal expenditures for restoring the Basin could not be determined, we collected expenditure data from five federal agencies to provide illustrative examples of their spending on the restoration efforts they conducted across the Basin. Using responses to our initial survey, we selected efforts that respondents identified as being implemented for a variety of restoration purposes and for which information on expenditures would be available. As shown in table 3, we collected data on expenditures for fiscal years 2014 through 2016 for specific efforts implemented by the Corps, BPA, EPA, Forest Service, and USGS. The following examples provide more detailed information about each effort for which we collected information on federal expenditures: Corps’ Ecosystem Restoration Programs. The Corps implements several ecosystem restoration programs under various authorities for the purposes of restoring and protecting aquatic habitats and environmental quality throughout the Basin. 
Through the Aquatic Ecosystem Restoration Program and the Project Modifications for Improvement of the Environment program, the Corps is authorized to carry out cost-effective restoration projects at facilities it operates throughout the Basin. Under the Lower Columbia River Basin Restoration Program, the Corps conducts studies and ecosystem restoration projects to protect, monitor, and restore fish and wildlife habitat in the Lower Columbia River Estuary. Collectively, for fiscal years 2014 through 2016, the Corps reported expending approximately $15.6 million in federal funding to conduct 25 aquatic ecosystem restoration projects across the Basin; this amount included costs for program coordination. For example, the Corps partnered with the City of Portland on the Westmoreland Park Ecosystem Restoration project to remove barriers to fish passage for endangered salmon swimming in Crystal Springs Creek on their way to the Willamette River (see figure 6). For fiscal years 2014 through 2016, the Corps reported about $1.4 million in total expenditures for the project, which included activities such as restoring a stream channel and surrounding wetland vegetative zone along with replacing three small culverts with wider, natural bottom fish-friendly culverts to improve water quality and restore fish passage upstream. BPA’s Columbia River Basin Fish and Wildlife Program. According to BPA, this is one of the largest fish and wildlife protection programs in the country, annually funding hundreds of projects implemented in the Columbia River Basin by a wide range of federal, state, local, tribal, academic, and nongovernmental entities across four states. The program is implemented in partnership with the Northwest Power and Conservation Council, which makes recommendations on projects that should be funded and reviews the program at least every 5 years to develop updates as needed. 
BPA reported that from fiscal years 2014 through 2016, it provided an average of about $90 million per year in funding for projects that directly or indirectly benefitted water quality-related restoration efforts in the Basin, including projects to restore damaged fish habitat, improve hatchery practices, conduct research, monitoring, and evaluation, and acquire water rights. For example, in 2015, the program awarded $180,000 to fund habitat restoration actions to improve ecological functions, including water quality, as part of the Buckmire Slough Phase #1 project located near Vancouver Lake in southwest Washington (see figure 7). This restoration project reconnected about 65 acres of shallow water salmon habitat by removing two earthen berms and collapsed culverts; a channel-spanning pedestrian bridge was also installed to maintain trail access. According to BPA officials, the removal of the barriers helped improve fish passage and water flow through Buckmire Slough to the larger watershed that includes Vancouver Lake, the Lake River, and the Columbia River. EPA’s Lower Columbia Estuary Partnership. EPA reported that the Lower Columbia Estuary Partnership had total expenditures of about $37 million in federal funding from fiscal years 2014 through 2016. The funding supported a range of efforts and restoration objectives for the lower portion of the Columbia River Basin, including habitat restoration; a long-term monitoring strategy for sediment, fish tissue, and water quality; outdoor education programs; and citizen and professional involvement. According to EPA officials, the Lower Columbia Estuary Partnership has received about $600,000 annually in funding through Clean Water Act Section 320, which primarily supports the administrative and management functions of the partnership, including work to solicit funding from other federal and nonfederal sources to implement restoration projects throughout the estuary. 
Additionally, from fiscal years 2014 through 2016, the Lower Columbia Estuary Partnership received approximately $3.4 million in funding from BPA and other federal partners to support implementation of a long-term monitoring strategy for sediment, fish tissue, and water quality in the lower Columbia River and estuary. The funding helped support the Partnership’s scientific and coordination staff as well as sub-awards to outside experts in project design, data acquisition, and data analysis. The Partnership also received about $10 million in funding from BPA and other federal entities to fund multi-year projects, implemented by the Partnership and other local governments and nonprofit organizations, that contributed to the goal of restoring and protecting 25,000 acres of habitat to help the recovery of threatened and endangered salmon in the lower Columbia River and estuary. Forest Service’s Region 6 (Pacific Northwest) Watershed and Aquatic Restoration Program. According to Forest Service officials, this program includes all required inventory, assessment, planning and design, and permitting needed to implement watershed protection and restoration projects in the agency’s Pacific Northwest Region. Examples of the types of projects implemented through this program include: restoring fish passage and hydrologic connectivity at road-stream crossings; upgrading roads that are needed and decommissioning roads that are no longer needed; and protecting and restoring riparian areas to protect and restore stream temperatures. Forest Service reported expenditures of about $92 million in fiscal years 2014 through 2016 for these types of aquatic restoration projects implemented in national forests that contribute water flow to the Columbia River Basin. This includes about $4.6 million in funding received from other federal agencies, such as BPA, the Corps, Reclamation, FWS, Bureau of Land Management, and the Federal Highway Administration. 
It also includes approximately $19 million in funding provided to other federal, state, tribal, nongovernmental, and local entities to support implementation of their restoration-related projects in the Basin. USGS’s National Water Quality Programs. USGS reported total expenditures of about $40 million from fiscal years 2014 through 2016 for Columbia River Basin water quality-related restoration efforts. This includes funding through appropriations, matching funds, and cost-reimbursable activities for projects and studies implemented through its national programs and the Idaho, Oregon, Washington, and Wyoming-Montana regional Water Science Centers. This includes around $12 million in expenditures for National Water Quality Program activities, which provide an understanding of whether water quality conditions are improving or worsening over time, and how natural features and human activities affect those conditions. One of the efforts implemented through this program during this time frame was a regional study, the Pacific Northwest Stream Quality Assessment; USGS expenditures for this effort were about $3.3 million. The objectives of the regional study included determining the status of stream quality across the region by assessing various water quality factors that are stressors on aquatic life—such as contaminants, toxicity, and streamflow—and evaluating their relative influence on biological communities. EPA and OMB Have Not Yet Implemented Clean Water Act Section 123 EPA and OMB have not yet implemented actions required under Clean Water Act Section 123, which was enacted in 2016. Specifically, EPA has not yet established the Columbia River Basin Restoration Program, including its associated Working Group. In addition, OMB has not yet prepared and submitted as part of the President’s annual budget request an interagency crosscut budget on federal agencies’ budgets for and spending on environmental protection and restoration efforts in the Basin. 
EPA Has Not Yet Established the Columbia River Basin Restoration Program According to EPA officials we interviewed, the agency has not yet taken steps to establish the Columbia River Basin Restoration Program, including the Columbia River Basin Restoration Working Group, as directed by Clean Water Act Section 123. In addition, agency officials told us that they were not currently planning to do so, as the agency has not received dedicated funding appropriated for this purpose. These officials acknowledged, however, that the agency has not yet requested funding to implement the program or initiated any studies or assessments to identify what resources it may need to establish the program. We have previously reported that the Project Management Institute’s The Standard for Program Management provides generally recognized leading practices for program management. It provides an overview of a program’s three life cycle phases and the actions associated with each phase. The primary purpose of the first phase—program definition—is to progressively elaborate the goals and objectives to be addressed by the program, define the expected program outcomes and benefits, and seek approval for the program. This phase has two distinct but overlapping sub-phases: Program formulation: involves development of the business case for the program, including initiating studies and estimates of scope, resources, and cost. Program planning: commences upon formal approval of the program and leads to the formation of a program team to develop the program management plan. Upon completion of this first phase, an entity is to prepare a program management plan and, with final approval, the program commences. Consistent with the practices established in The Standard for Program Management, a program management plan would include, among other components, a schedule of the actions an entity is to take, as well as the resources and funding needed to establish a program. 
By developing a program management plan that includes a schedule of the actions the entity will take and the resources and funding needed to establish and implement the program and submitting this plan to the appropriate congressional authorizing committees as part of the fiscal year 2020 budget process, EPA will have more reasonable assurance that it can establish the program in a timely manner. Further, in establishing the program under Section 123, EPA will need to also establish the Working Group, which is to recommend and prioritize projects and actions and review the progress and effectiveness of restoration projects and actions implemented throughout the Basin. OMB Has Not Yet Submitted an Interagency Crosscut Budget on Federal Agencies’ Spending for Environmental Protection and Restoration Efforts in the Columbia River Basin According to OMB officials we interviewed, the agency has not yet submitted an interagency crosscut budget or requested that federal agencies provide information on their budgets and spending for Columbia River Basin environmental protection and restoration efforts as directed by Clean Water Act Section 123. Specifically, the President’s budget is to include an interagency crosscut budget displaying amounts budgeted and obligated by each federal agency involved with environmental protection and restoration projects, programs, and studies relating to the Basin. While OMB officials acknowledged the agency is responsible for preparing the interagency crosscut budget for the Basin, they told us that the agency has only had preliminary internal discussions about the best approach for implementing the requirement, including whether to develop guidance that would define key terms and the processes agencies should follow in compiling the requested information. 
The officials, however, could not identify a time frame for when the agency anticipated finalizing any guidance or when it would begin requesting that federal agencies provide OMB the information it needs to include in the interagency crosscut budget submission to Congress. Federal standards for internal control call for an agency to design control activities to achieve objectives and respond to risks, such as by clearly documenting internal controls in a manner that allows the documentation to be readily available for examination (e.g., the documentation may appear in management directives, administrative policies, or operating manuals). By developing and providing guidance on the types of projects and activities that agencies should include in their reports, as well as the processes they should follow in compiling the related budget and spending information, OMB would have more reasonable assurance that the agencies provide comparable information about their restoration efforts. According to a 2011 Congressional Research Service report, an interagency crosscut budget is often used to present budget information from two or more agencies whose activities are targeted at a common policy goal or related policy goals. As outlined in a 2015 federal report, an interagency crosscut budget can help facilitate federal agency coordination and collaboration for restoration activities that can benefit from an integrated approach, and it can help increase cost effectiveness. That report also noted that collecting budget information from the agencies involved can help identify high-level trends in restoration-related funding over time. We recognize that agencies will differ in their budget and account management practices as well as in the complexities of the federal budget process. 
However, as the 2011 Congressional Research Service report concluded, by providing agencies guidance and criteria that they can use to determine which projects and programs will be tracked across agencies, the process for developing an interagency crosscut budget can account for the differences in how agencies fund and implement their restoration-related efforts. The report also noted that crosscut budgets can help make data from multiple agencies more understandable and could be used to inform congressional oversight committees, participating agencies, and other entities implementing an ecosystem initiative. By directing each federal agency involved in the protection and restoration of the Basin to collect the information needed for the interagency crosscut budget and to submit this information to OMB for inclusion in the President’s budget request for fiscal year 2020, OMB can better inform Congress as it considers funding for restoration efforts in the Basin as part of the annual budget process. Conclusions Federal agencies and other entities have undertaken a wide range of water quality-related restoration efforts in the Columbia River Basin for many years. The Water Infrastructure Improvements for the Nation Act of 2016 amended the Clean Water Act by adding Section 123 on Columbia River Basin Restoration, which requires the EPA Administrator to establish the Columbia River Basin Restoration Program, including its associated Working Group. This collaborative stakeholder-based program is to oversee and help coordinate environmental protection and restoration activities implemented throughout the Columbia River Basin. However, because EPA has not yet established the Program and Working Group, entities do not currently use a basin-wide collaborative approach to coordinate water quality-related restoration efforts being implemented throughout the Basin. Furthermore, EPA does not have a program management plan for this effort. 
By developing a program management plan for the effort, consistent with The Standard for Program Management, EPA will have more reasonable assurance that it can implement Clean Water Act Section 123 in a timely and effective manner. Furthermore, by establishing the Columbia River Basin Restoration Program, including the associated Working Group, EPA will be better positioned to carry out its responsibilities, which include prioritizing and evaluating the progress and effectiveness of environmental protection and restoration projects and actions implemented throughout the Columbia River Basin as required by law. In addition, Clean Water Act Section 123 requires the President’s budget to include an interagency crosscut budget displaying amounts budgeted and obligated by each federal agency involved with environmental protection and restoration projects, programs, and studies relating to the Columbia River Basin. Such a crosscut budget would include amounts obligated for the preceding fiscal year; an estimated budget for the current fiscal year; and a proposed budget for the next fiscal year for the Basin. Given the difficulties we identified in determining federal expenditures for water quality-related restoration efforts implemented in the Columbia River Basin, by developing definitions and guidance on the types of projects, programs, and studies federal agencies should include in their reports and processes to follow in compiling their budgets, OMB could help ensure that they provide consistent and comparable information that OMB needs for the crosscut budget submission to Congress. Having consistent and comparable information on federal agency expenditures and budgets is critical to helping ensure that Congress and the relevant appropriating committees can make informed decisions about funding Columbia River Basin restoration efforts in their annual budget deliberations. 
Recommendations for Executive Action We are making a total of three recommendations, one to EPA and two to OMB. Specifically: The Administrator of the EPA should develop a program management plan that includes a schedule of the actions EPA will take and the resources and funding it needs to establish and implement the Columbia River Basin Restoration Program, including formation of the associated Columbia River Basin Restoration Working Group, and submit this plan to the appropriate congressional authorizing committees as part of the fiscal year 2020 budget process. (Recommendation 1). The Director of OMB should develop and provide guidance on the types of projects and activities that agencies involved in the protection and restoration of the Columbia River Basin should include in their reports, as well as the processes they should follow in compiling the related budget and spending information. (Recommendation 2). The Director of OMB should direct each federal agency involved in the protection and restoration of the Columbia River Basin to collect the information OMB needs for the interagency crosscut budget and to submit this information to OMB for inclusion in the interagency crosscut as part of the President’s budget request for fiscal year 2020. (Recommendation 3). Agency Comments We provided a draft of this report for review and comment to EPA, OMB, and the departments of Agriculture, Commerce, Defense, Energy, and the Interior. We also provided a draft of the report to the Idaho Department of Environmental Quality, Montana Department of Environmental Quality, Oregon Department of Environmental Quality, and Washington State Department of Ecology. EPA provided written comments, which are reproduced in appendix IV, and stated that it agreed with the conclusions and recommendation in our report. The Department of Agriculture also provided written comments, which are reproduced in appendix V. 
The departments of Defense and the Interior and the Washington State Department of Ecology responded by email that they did not have comments on the draft report. The departments of Commerce and Energy and the Idaho Department of Environmental Quality provided technical comments, which we incorporated as appropriate. OMB, the Montana Department of Environmental Quality, and the Oregon Department of Environmental Quality did not provide any comments. In its written comments, EPA stated that it agrees with our recommendation to develop a program management plan that includes a schedule of the actions it will take and the resources and funding needed to establish and implement the Columbia River Basin Restoration Program and associated Working Group as required under Clean Water Act Section 123. EPA stated that it will work with its partners within the existing governance structures to begin discussions on the development of a program management plan. As an initial step, the agency will reconvene the Columbia River Toxics Reduction Working Group to initiate discussion of how to approach implementation of Section 123. Further, EPA stated it stands ready to work with OMB on an interagency crosscut budget after OMB provides guidance on the types of projects and activities necessary to develop the budget. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Energy, and the Interior; the Administrator of EPA; the Director of OMB; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. 
Appendix I: Objectives, Scope, and Methodology This report examines (1) restoration efforts to improve water quality in the Columbia River Basin from fiscal years 2010 through 2016; (2) approaches to collaboration that entities have used for selected efforts, including factors they identified that enabled or hindered collaboration in the Basin; (3) the sources of funding and federal expenditures for these restoration efforts; and (4) the extent to which the Environmental Protection Agency (EPA) and the Office of Management and Budget (OMB) have implemented Clean Water Act Section 123. For all four objectives, we reviewed relevant laws, including the Clean Water Act. We also conducted interviews and reviewed documentation from entities around the Basin, including federal agencies, state agencies responsible for managing water quality in their state, federally and non-federally recognized tribes and tribal organizations, and nongovernmental organizations. We also conducted a site visit to Portland, Oregon, to meet with officials from federal agencies, a tribal organization, and a nongovernmental entity regarding their activities related to restoration efforts in the Columbia River Basin. We limited the scope of our review to the United States, specifically to the four states with the largest square mileage in the Columbia River Basin: Idaho, Oregon, Montana, and Washington. To examine restoration efforts to improve water quality in the Columbia River Basin implemented from fiscal years 2010 through 2016, we administered a survey to entities that implement restoration efforts in the Basin (see app. III for a blank copy of the survey). The survey asked each entity to individually list any water quality-related programs they implemented in the Basin from fiscal years 2010 through 2016. The survey included maps of the Columbia River Basin to provide respondents a common point of reference. 
For each program, we asked respondents to identify: the program’s primary and secondary purposes; one or two key examples of the activities conducted as part of the program; whether the entity was the only entity responsible for implementing the program; whether the entity was the lead entity responsible for implementing the program; what other entities, if any, were involved with implementing the program; the primary authorities under which the entity implemented the program; the state(s) and area(s) within the Basin in which the program was implemented; a website containing primary source documents and other relevant information on the program; whether the entity received any federal funding to support implementation of the program; the sources of the federal funding, if any; whether the entity tracks expenditures of federal funding specifically for the program; which fiscal years, if any, from fiscal years 2010 through 2016 the entity would be able to provide information on the annual amount of federal funding expended for this program; whether the entity would be able to provide actual expenditures, estimated expenditures, or neither for the annual amount of federal funding the entity expended on the program; how the entity collected expenditure data; any nonfederal sources of funding that supported the entity’s implementation of the program; and a primary point of contact for any follow-up questions on the program. We conducted telephone pretests of the survey with 4 entities and revised it in response to their comments. During this process, we sought to ensure that (1) the questions were clear and unambiguous, (2) we used terminology correctly, (3) the survey did not place an undue burden on respondents, and (4) respondents had sufficient information to answer the questions. 
We identified and sent the survey to 41 entities based on the following criteria: federal agencies whose missions relate to restoration efforts in the Basin, state agencies responsible for water quality issues for the four states within our scope, federally and non-federally recognized tribes, tribal organizations, and nongovernmental entities involved with restoration efforts within the Basin. We emailed the survey as an attached PDF form that respondents could return electronically after marking checkboxes or entering responses into open-answer boxes. We sent the survey with a cover letter on May 31, 2017. After 2 weeks, we sent a reminder email, attaching an additional copy of the survey, to entities that had not responded. After 4 weeks, we telephoned all respondents who had not returned the survey and asked them to participate. We received responses from the entities listed in Table 4. In total, we received 32 completed surveys: responses from all 16 of the federal and state agencies that we contacted and from 16 of the 25 federally and non-federally recognized tribes, tribal organizations, and nongovernmental entities that we contacted. Because we did not survey every entity implementing restoration efforts in the Basin, the results from our analysis may not include all restoration efforts implemented in the Columbia River Basin from fiscal years 2010 through 2016. To assess the accuracy and completeness of the responses, we reviewed and analyzed each completed survey. In particular, we contacted each respondent at least once to follow up on their responses and allowed respondents to review, correct, and edit their responses if necessary. During this follow-up, we asked questions to ensure that the responses to each survey were complete, comparable, and accurate and to clarify ambiguous responses. After we completed this follow-up, we analyzed the list of compiled restoration efforts to assess whether each listed restoration effort met general criteria.
For example, we assessed the responses to make sure the efforts represented a similar level of aggregation, specifically at a program level. As part of our assessment, we reviewed prior interviews and the agencies' or entities' documents and websites. For example, in some instances the name of a restoration effort listed in the survey did not match the name of the effort on the agency's website. We recognize that despite implementation of our criteria, some ambiguity may remain about the programs included in the catalog. Based on our assessment, we further refined the list of restoration efforts and developed the final list as presented in Appendix II.

To examine approaches to collaboration that entities—including federal agencies, states, tribes, and nongovernmental entities—have used for selected efforts, we selected five case examples for in-depth review. We used selection criteria to yield a limited number of efforts in the Columbia River Basin that were among the broadest in scope with regard to their geographic coverage and/or the number and type of entities involved (e.g., interstate vs. intrastate programs, entities from multiple levels of government) based on the survey responses we received. In addition, we selected these efforts, in part, to highlight collaborative practices for efforts implemented by a variety of entity types and with different primary purposes (i.e., improving or monitoring surface water quality, reducing toxic pollutants, recovering threatened or endangered species, or restoring and protecting habitat). We conducted interviews with officials from these five case example efforts on the collaborative practices they used to plan and implement their programs and requested related documentation for review. We derived the questions we used for the case interviews from our prior reports on practices that may enable collaboration.
For example, we asked interviewees about mechanisms they used for their given effort to define intended outcomes and roles and responsibilities, identify resource needs (e.g., funding, staff) and their sources, and ensure the compatibility of policies and procedures across entities. Our prior reporting served as the conceptual framework for understanding the collaborative practices used by officials leading these case example efforts. We highlight in our report a single illustrative collaborative practice used for each effort.

In addition, we separately emailed four questions to each of the 11 federal agencies that had water quality-related restoration efforts and responded to our survey, to solicit agency officials' opinions on practices that may have enabled or hindered collaboration for efforts planned and implemented by their respective agency. We sent these emails to the same agency points of contact to whom we sent the first survey designed to identify restoration efforts in the Basin, or to other officials the agency identified as the relevant point of contact. We derived the questions we emailed from our prior reporting on factors that may enable collaboration. We asked interviewees to consider efforts for which their agency had their most and least successful experiences in collaborating with other organizations on water quality-related restoration activities and to systematically rank factors, from a list we provided, that enabled or hindered their collaboration with the other organizations. We received written responses from all 11 agencies. Our prior reports served as the conceptual framework for developing the list of factors that we provided to the respondents and from which they selected those that applied to their agency's experience. We highlight the most commonly identified collaboration enablers and hindrances.
We systematically asked officials from the five case efforts and the 11 federal agencies that received the four questions we emailed for their perspectives on the most significant challenges, if any, to enhancing collaboration among entities involved in restoration efforts to improve water quality in the Basin. We also systematically asked the same officials for their suggestions, if any, for steps that could be taken to enhance collaboration among these entities. We highlight some of the challenges and suggestions respondents offered. Last, to determine whether a mechanism exists for basin-wide collaboration on water quality-related restoration programs, we reviewed existing legislation and interviewed agency officials.

To examine the sources of funding and federal funding expenditures in the Columbia River Basin, we interviewed agency officials, reviewed budget documents, analyzed responses to funding questions included in our initial survey, and analyzed expenditure data for selected federal efforts for fiscal years 2014 through 2016. Initially, we intended to use a second survey to collect comprehensive data on expenditures for each restoration effort that entities identified in response to our initial survey. However, in pretests with agency officials, we identified significant concerns with respect to the accuracy and completeness of information that we would gather through this approach, which would limit our ability to compare expenditure data across agencies and efforts. Given the degree of variability, uncertainty, and lack of detail in the information agencies could provide, we concluded that the data would not be reliable for the purposes of estimating expenditures of federal funding for water quality-related restoration efforts throughout the Columbia River Basin.
To provide some information on expenditures, we decided to modify our comprehensive approach by shortening the time frame to fiscal years 2014 through 2016 and limiting the request to one restoration effort for each of the 11 federal agencies. We selected the 11 restoration efforts based on our review of the agencies' responses to questions in our initial survey relating to the primary purpose(s) of the program and the availability of expenditure data. We then conducted interviews with agency officials to learn more about the selected efforts and the availability and reliability of expenditure data. Based on these interviews, we determined that for 6 of the 11 programs, the efforts had limited activities in the Basin during this time frame, or the agencies would only be able to provide limited information or would not be able to provide sufficiently reliable expenditure data for the selected effort. We then distributed a second survey to 5 agencies—Bonneville Power Administration (BPA), U.S. Army Corps of Engineers, EPA, U.S. Forest Service, and U.S. Geological Survey. In this survey, we requested expenditure information for a specified restoration effort and asked about the sources and processes the agencies followed in compiling the information. Based on our review of these responses, we determined that the expenditure information for these specific restoration efforts was sufficiently reliable for purposes of our reporting objective.

To examine the extent to which EPA and OMB have implemented Clean Water Act Section 123, we reviewed the law and legislative history. We also requested documentation from and conducted interviews with knowledgeable officials at EPA and OMB. We also identified program management leading practices reported in the Project Management Institute's The Standard for Program Management and discussed in our prior reports.
For example, we considered the applicable leading practices for schedule and cost estimates, as well as other practices such as the development of program management plans.

Appendix II: Catalog of Columbia River Basin Water Restoration Efforts, Fiscal Years 2010 through 2016

Table 5 provides a list of 188 Columbia River Basin water quality-related restoration efforts identified by 11 federal agencies, 4 state agencies, 4 nongovernmental organizations, and 11 tribes and tribal entities in their responses to our May 2017 survey, along with a brief description of each effort and the restoration purpose(s) it supported. This list is primarily based on the survey responses. The survey included definitions of key terms, including program, implement, and purposes of the programs. After we received survey responses, we conducted multiple reviews of the information, including asking the entities to review and edit the information they provided. In some cases we supplemented their responses with additional information available through other sources, such as interviews with officials and reviews of agency documents, as appropriate. Given the size of the Basin and the number of entities involved, for our survey we specifically requested that respondents report the restoration efforts at a programmatic level. In some instances, we decided to consolidate certain efforts that appeared to be part of the same overall program and exclude other efforts that appeared to be project-level efforts. Although we made every attempt to gather a comprehensive list of restoration efforts implemented by the entities listed below, including verifying the information with the respective entities, this list may not capture all of the relevant restoration efforts they implemented in the timeframe covered by our review. Further, entities may not have listed all of their relevant efforts.
We also acknowledge that the list does not reflect restoration efforts in the Columbia River Basin that were implemented by other entities not included within the scope of our review.

Appendix III: Survey Distributed to Entities in the Columbia River Basin

Appendix IV: Comments from the Environmental Protection Agency

Appendix V: Comments from the Department of Agriculture

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact
J. Alfredo Gómez, (202) 512-3841 or [email protected].

Staff Acknowledgments
In addition to the individual named above, Barbara Patterson (Assistant Director), Heather Dowey (Analyst in Charge), Stephen Betsock, Mark Braza, John Delicath, Carol Henn, Karen Howard, Vondalee Hunt, David Lysy, Jeff Malcolm, Michael Meleady, Dan C. Royer, Kiki Theodoropoulos, and Sarah Veale made key contributions to this report.

Related GAO Products

Puget Sound Restoration: Additional Actions Could Improve Assessments of Progress. GAO-18-453. Washington, D.C.: July 19, 2018.

Long Island Sound Restoration: Improved Reporting and Cost Estimates Could Help Guide Future Efforts. GAO-18-410. Washington, D.C.: July 12, 2018.

Great Lakes Restoration Initiative: Improved Data Collection and Reporting Would Enhance Oversight. GAO-15-526. Washington, D.C.: July 21, 2015.

Great Lakes Restoration Initiative: Further Actions Would Result in More Useful Assessments and Help Address Factors That Limit Progress. GAO-13-797. Washington, D.C.: September 27, 2013.

Chesapeake Bay: Restoration Effort Needs Common Federal and State Goals and Assessment Approach. GAO-11-802. Washington, D.C.: September 15, 2011.

Recent Actions by the Chesapeake Bay Program Are Positive Steps Toward More Effectively Guiding the Restoration Effort, but Additional Steps Are Needed. GAO-08-1131R. Washington, D.C.: August 28, 2008.

South Florida Ecosystem: Restoration Is Moving Forward but Is Facing Significant Delays, Implementation Challenges, and Rising Costs. GAO-07-520.
Washington, D.C.: June 4, 2007.

Chesapeake Bay Program: Improved Strategies Are Needed to Better Assess, Report, and Manage Restoration Progress. GAO-06-96. Washington, D.C.: October 28, 2005.

Great Lakes: Organizational Leadership and Restoration Goals Need to Be Better Defined for Monitoring Restoration Progress. GAO-04-1024. Washington, D.C.: September 28, 2004.

Columbia River Basin: A Multilayered Collection of Directives and Plans Guides Federal Fish and Wildlife Activities. GAO-04-602. Washington, D.C.: June 4, 2004.

Great Lakes: An Overall Strategy and Indicators for Measuring Progress Are Needed to Better Achieve Restoration Goals. GAO-03-515. Washington, D.C.: May 21, 2003.

Columbia River Basin Salmon and Steelhead: Federal Agencies' Recovery Responsibilities, Expenditures and Actions. GAO-02-612. Washington, D.C.: July 26, 2002.

South Florida Ecosystem Restoration: Substantial Progress Made in Developing a Strategic Plan, but Actions Still Needed. GAO-01-361. Washington, D.C.: May 27, 2001.

Comprehensive Everglades Restoration Plan: Additional Water Quality Projects May Be Needed and Could Increase Costs. GAO/RCED-00-235. Washington, D.C.: September 14, 2000.
Why GAO Did This Study

The Basin is one of the nation's largest watersheds and extends mainly through four Western states and into Canada. Activities such as power generation and agricultural practices have impaired water quality in some areas, so that human health is at risk and certain species, such as salmon, are threatened or extinct. In December 2016, Congress amended the Clean Water Act by adding Section 123, which requires EPA and OMB to take actions related to restoration efforts in the Basin. GAO was asked to review restoration efforts in the Basin. This report examines (1) efforts to improve water quality in the Basin from fiscal years 2010 through 2016; (2) approaches to collaboration that entities have used for selected efforts; (3) sources of funding and federal funding expenditures; and (4) the extent to which EPA and OMB have implemented Clean Water Act Section 123. GAO reviewed documentation, including laws, policies, and budget information; surveyed federal, state, tribal, and nongovernmental entities that GAO determined had participated in restoration efforts; and conducted interviews with officials from most of these entities.

What GAO Found

Various entities, including federal and state agencies and tribes, implemented restoration efforts to improve water quality in the Columbia River Basin from fiscal years 2010 through 2016, according to GAO survey results. Entities implemented a range of restoration efforts. Efforts included activities to improve surface water quality and restore and protect habitat. For example, the Kootenai Tribe of Idaho implemented projects on the Kootenai River to restore and maintain conditions that support all life stages of native fish.

Entities used various collaborative approaches. Entities' approaches to collaboration for selected water quality-related efforts in the Basin varied.
For example, the Environmental Protection Agency (EPA) sought various entities' voluntary involvement to coordinate toxics reduction efforts in the Basin.

Total federal expenditures could not be determined. Entities reported using a mix of federal and nonfederal funding sources for restoration efforts in the Basin, but total federal expenditures could not be determined, in part because there is no federal funding dedicated to restoring the Basin.

EPA and the Office of Management and Budget (OMB) have not yet implemented Section 123. According to EPA officials, the agency has not yet taken steps to establish the Columbia River Basin Restoration Program, as required by Clean Water Act Section 123. EPA officials told GAO they have not received dedicated funding appropriated for this purpose; however, EPA has not yet requested funding to implement the program or identified needed resources. By developing a program management plan that identifies actions and resources needed, EPA would have more reasonable assurance that it can establish the program in a timely manner. Also, an interagency crosscut budget has not been submitted. According to OMB officials, they have had internal conversations on the approach to develop the budget but have not requested information from agencies. A crosscut budget would help ensure Congress is better informed as it considers funding for Basin restoration efforts.

What GAO Recommends

GAO is making three recommendations, including that EPA develop a program management plan for implementing the Columbia River Basin Restoration Program and that OMB compile and submit an interagency crosscut budget. EPA agreed with its recommendation. OMB did not comment, and GAO maintains its recommendations are valid.
Background

Medicaid is jointly financed by the federal government and the states, with the federal government reimbursing states for a share of their expenditures for Medicaid covered services provided to eligible beneficiaries. The federal share of spending is based on a statutory formula that determines a federal matching rate for each state.

Medicaid Service Delivery Models

States may provide Medicaid services under either or both a fee-for-service model and a managed care model. Under a fee-for-service delivery model, states make payments directly to providers for services provided, and the federal government reimburses the state its share of spending based on these payments. Under a managed care service delivery model, states pay managed care organizations (MCOs) a capitation payment, which is a fixed periodic payment per beneficiary enrolled in an MCO—typically, per member per month. The federal government reimburses its share of spending based on the capitation payments states made to the MCO. In return for the capitated payment, each MCO is responsible for arranging for and paying providers' claims for all covered services provided to Medicaid beneficiaries. For example, MCOs may pay providers on a fee-for-service basis or with a monthly capitation payment per beneficiary, or through some other payment approach in which the provider assumes some risk for providing covered services. In either case, MCOs are required to report to the states information on services utilized by Medicaid beneficiaries—information typically referred to as encounter data. Figure 1 illustrates these models.

State and MCO Program Integrity Responsibilities

Program integrity refers to the proper management and function of the Medicaid program to ensure that quality and efficient care is being provided, while Medicaid payments are used appropriately and with minimal waste. Program integrity efforts encompass a variety of administrative, review, and law enforcement strategies.
State stakeholders—Medicaid managed care offices, state Medicaid program integrity units, Medicaid Fraud Control Units (MFCUs), and in many cases state auditors—and MCO stakeholders—MCOs that contract with states to deliver Medicaid services—play important roles in the oversight of managed care payment risks and have a variety of program integrity responsibilities. A stakeholder's program integrity responsibilities can be specialized—such as for MFCUs, which focus on fraudulent behavior—or varied—such as for state Medicaid managed care offices and MCOs, which are responsible for monitoring fraud and other issues, such as compliance with quality standards or ensuring MCOs meet contract requirements. (See table 1.)

Two of the stakeholders—state Medicaid managed care offices and MCOs—have responsibilities for program operation in addition to program integrity oversight responsibilities. For example, state Medicaid managed care offices' program operations responsibilities include enrolling beneficiaries, negotiating contracts with MCOs, developing capitation rates, and making monthly capitation payments to MCOs. MCOs' program operation responsibilities include establishing contracts with providers, creating provider networks, ensuring that enrollees have an ongoing source of primary care and timely access to needed services, and processing and paying provider claims.

In a previous report, we found that state Medicaid program integrity efforts focus primarily on payments and services delivered under fee-for-service, and do not closely examine program integrity in managed care. For example, officials from five of seven states that we spoke to for that report said that they primarily focused their program integrity efforts on fee-for-service claims. They also noted that program integrity in Medicaid managed care was more complex than for fee-for-service.

CMS's Program Integrity Responsibilities

CMS's program integrity responsibilities take a variety of forms.
CMS issues program requirements for states through regulations and guidance; for example, regulations requiring states to establish actuarially sound capitation rates and to ensure that MCOs have an adequate network of providers, as well as to ensure that all covered services are available and accessible to beneficiaries in a timely manner. CMS also requires states to submit MCO contracts and capitation rates to CMS for review and approval, and report key information such as encounter data collected from MCOs. The agency provides technical assistance and educational support to states, including having staff available to help states with specific issues or questions, and providing courses on program integrity issues. The agency also conducts periodic reviews to assess state program integrity policies, processes, and capabilities. In addition, CMS has engaged audit contractors to help states audit providers receiving Medicaid payments, including payments made by MCOs to providers.

Six Types of Payment Risks Exist for Managed Care, with Stakeholders Viewing Some Risks as Greater than Others

We identified six types of payment risks through our review of Medicaid audit reports and other sources. Most of the stakeholders we spoke to agreed that these payment risks exist in Medicaid managed care. Four of these risks relate to the payments state Medicaid agencies make to MCOs, and two relate to payments that MCOs make to providers. (See figs. 2 and 3.) In terms of the relative importance of these payment risks, two payment risks were more frequently cited by stakeholders as having a higher level of risk than other types—incorrect MCO fee-for-service payments to providers and inaccurate state capitation rates. The remaining four payment risks were more frequently cited as having lower or unknown levels of risk: improper state capitation payments, state payments to noncompliant MCOs, incorrect MCO capitation payments, and duplicate state payments. (See fig. 4.)
When we asked stakeholders to designate a level of risk, stakeholders whose primary responsibility is program integrity—state auditors, MFCU officials, and state Medicaid program integrity staff—were more likely to assign a higher level of risk for certain types of payment risks than state Medicaid managed care officials and MCO officials. (See app. I for additional information on risk level designation by stakeholder group.) Stakeholders provided the following examples of payment risks that they rated as having “some” or “high” risk in the state. (See table 2.) See appendix II for further examples of payment risks identified as part of our review of audits and other reports.

Multiple Challenges Exist for Effective Program Integrity Oversight and Stakeholders Identified Strategies to Address Them

Key Challenges to Oversight Included Resource Allocation, the Quality of Data and Technology, and the Adequacy of State Policies and Practices

We identified six challenges to effective program integrity oversight in Medicaid managed care based on our review of Medicaid audit reports and other sources. Among these six challenges, stakeholders most frequently cited allocation of resources, quality of data and technology, and adequacy of state policies and practices as key challenges. Some stakeholders also described strategies to address these challenges. Through our research on examples of payment risks in Medicaid managed care, we identified six areas that can present challenges to program integrity oversight, including (1) availability and allocation of resources; (2) access to and quality of data and technology; (3) state policies and practices; (4) provider compliance with program requirements; (5) MCO management of program integrity; and (6) federal regulations, guidance, and review. Allocation of resources, quality of data and technology, and state policies and practices were the three most commonly cited challenges to program integrity oversight by stakeholders.
(See fig. 5.) Stakeholders described the following examples of challenges to program integrity oversight they had observed. See appendix III for more information on the particular challenges for each of the payment risks. Availability and allocation of resources. Stakeholders who cited resource allocation as an oversight challenge to managed care cited several key issues, such as the number of staff allocated to an activity, the expertise needed, and the ability to retain and replace staff. (See table 3.) Some stakeholders identified resource issues within their own organizations, while some identified resource issues they said existed in other organizations. Access to and quality of data and technology. Stakeholders who cited the quality of data and technology as oversight challenges to managed care provided examples related to timely access to data, inaccurate and unreliable data, and problems with information systems and interfaces. (See table 4.) State policies and practices. Stakeholders who cited state policies and practices as an oversight challenge to managed care described insufficient contract requirements, lack of state monitoring, and problems with state oversight. (See table 5.) Stakeholders from the state program integrity office, the MFCU, and the state auditor’s office more frequently identified state policies and practices as a challenge than stakeholders from the state Medicaid managed care agency. MCO management of program integrity. Stakeholders who cited MCO management as an oversight challenge to managed care described how inadequate MCO oversight and monitoring—as well as incomplete MCO reporting to the state agency—can increase the risk of different types of payment risks. (See table 6.) Stakeholders from the state Medicaid managed care agency, the state program integrity office, and the MFCU were more likely than MCO stakeholders to cite these issues as challenges. 
In particular, a few state officials noted that there was variation in size and resources among the MCOs in their respective states. Provider compliance with program requirements. Stakeholders who cited provider compliance as a challenge to oversight indicated that providers are the primary source of inaccurate payments, because of improper billing, which may include fraudulent billing. These stakeholders also stated that some types of providers presented a higher risk than others in their state. Several stakeholders pointed out that certain providers intentionally commit fraud, while others may be unaware of changes in policies or procedures and therefore unintentionally submit inaccurate claims. Several stakeholders noted that it is the responsibility of providers to bill correctly, while a few others pointed out that because the payment process is complicated, MCOs and state agencies may not identify inaccurate payments. Stakeholders also selected from a list of 19 types of providers the 3 or 4 that in their view represented the highest payment risks in the state. The two most frequently mentioned health care providers or services were (1) durable medical equipment, and (2) psychiatric and behavioral health care providers. (See table 7.) Federal regulations, guidance, and review. Over half of the stakeholders who identified federal regulations, guidance, and review as oversight challenges to managed care cited the complexity of federal regulations and the lack of federal guidance as key issues. For example, one stakeholder said that there needed to be more clarity about the new regulations for setting capitation rates for MCOs, while another said that there was a lack of clarity about the respective roles of states and MCOs in program integrity oversight. One stakeholder noted that most of the responsibility for operating the Medicaid program lies with the state, not with the federal government. 
Strategies Identified by Stakeholders to Address Managed Care Oversight Challenges Included Ensuring High Quality Data and Collaboration among State Agencies and MCOs

Some stakeholders we interviewed identified strategies, controls, or best practices to address the challenges to oversight of Medicaid managed care payment risks. As shown in table 8, they identified a variety of strategies such as ensuring high quality data, collaboration among state agencies and MCOs, imposing sanctions on noncompliant MCOs, enhancing contract requirements, and conducting regular monitoring.

CMS Has Assisted States in Addressing Payment Risks, but Some Efforts Have Been Delayed and There Are Gaps in Oversight

CMS has taken important steps to address payment risks in Medicaid managed care, issuing a final rule, increasing guidance, and conducting oversight activities. However, some efforts are incomplete, and there are gaps in key oversight activities.

CMS Issued a Final Rule, Provided Additional Guidance, and Updated Certain Oversight Activities Related to Managed Care Program Integrity

In May 2016, CMS issued a final rule on Medicaid managed care. According to CMS, the rule is intended to enhance regulatory provisions related to program integrity and payment risks, among other things. These regulatory provisions varied in terms of when the requirements were applicable. For example, for contracts beginning on or after July 1, 2017, the rule requires:

- state contracts with MCOs to require MCOs to promptly report all overpayments made to providers, and to specify the overpayments due to potential fraud;
- states to account for overpayments when setting capitation payment amounts; and
- states to establish procedures and quality assurance protocols to ensure that MCOs submit encounter data that is complete and accurate.
These requirements have the potential to enhance MCO and state oversight of managed care, and to address payment risks involving incorrect MCO payments to providers and inaccurate state capitation rates for MCOs. CMS is currently reviewing the rule for possible revision of its requirements, and an announcement on the results of the review is expected in 2018.

Most stakeholders we spoke to identified ways in which the managed care rule could have a positive impact on managed care program integrity oversight. Of the 49 stakeholders we spoke to, 28 made positive statements about the rule's potential impact on program integrity oversight of payment risks in managed care, 9 stakeholders said they were not familiar enough with the managed care rule to comment on it, and the remaining 12 stakeholders provided a range of comments about the rule. The 28 stakeholders with positive comments identified a variety of ways in which they said the managed care rule would help, including:

- establishing transparency in setting state capitation rates;
- providing clear guidelines for MCO reporting, and clear authority for states to require reporting;
- obtaining information on overpayments identified and collected by MCOs;
- holding MCO leadership accountable for meeting program requirements; and
- reducing medical costs, despite additional short-term administrative costs.

Comments by the other 12 stakeholders who were familiar with the rule included statements that the rule:

- should have been more aggressive in requiring MCOs to implement efforts related to program integrity;
- would have limited impact for them, because many of its requirements were already in place in their state; and
- set time frames for implementation that were hard to meet.

In addition to issuing the rule, CMS has sought to increase guidance available to states through training, technical assistance, and other educational resources. (See table 9.)
Lastly, CMS efforts have included updating the requirements used in capitation rate setting reviews, contract oversight, and other types of audits and reviews, as described below. Review of state capitation rates for Medicaid MCOs. CMS reviews states’ capitation rates at least once every year, and in 2017 made revisions to its rate review guidance to states, incorporating new requirements from the managed care rule. According to CMS officials, the agency typically conducts between 250 and 300 rate reviews annually to determine whether states’ rate development methodologies meet generally accepted actuarial principles, as well as federal laws and requirements. Review of state Medicaid MCO contracts. CMS regularly reviews state contracts with MCOs to ensure that contract provisions meet federal requirements. In 2017, CMS updated its criteria for Medicaid managed care contract review and approval, and revised the guide that it provides to states to help them develop effective MCO contracts. CMS contracted audits. In 2016, CMS began to transition and consolidate audits of providers to a type of contractor called Unified Program Integrity Contractors (UPIC). This transition is intended to integrate contracted audit activities across CMS health care programs, such as Medicaid and Medicare, according to CMS. Additionally, UPIC audits can include health care providers who participate in multiple federal programs. Within the Medicaid program, UPICs may conduct audits with states interested in pursuing what are called “collaborative audits.” CMS’s contract with UPICs allows for audits of providers in MCO networks. Focused program integrity reviews. CMS officials said that in 2016, the agency updated the review guide used to conduct focused program integrity reviews of state Medicaid managed care programs. 
CMS program integrity reviews have identified some common issues, such as a low number of investigations of overpayments conducted by managed care plans and a low amount of recoveries by plans. However, CMS officials stated these reviews are not focused primarily on assessing specific payment risks. For example, these reviews do not involve an actual review or audit of MCO payments to providers to assess the extent that inaccurate payments were made. Instead, they review program integrity policies and processes, such as whether and how the state monitors overpayments, and whether MCOs comply with state requirements.

CMS Efforts to Address Payment Risks Have Been Delayed and Gaps Exist in Key Oversight Activities

Despite CMS’s efforts to improve oversight of program integrity in Medicaid managed care, there have been delays in issuing guidance, and gaps in key auditing and monitoring activities. These delays and gaps are inconsistent with the agency’s current program integrity plan, which established goals for improving state oversight of program integrity in Medicaid managed care, as well as the financial accountability of Medicaid MCOs.

Delays in the Development and Issuance of Guidance

Publication of CMS guidance that would assist states in oversight of payment risks has been delayed. CMS officials told us in April 2017 that they planned to issue a compendium of guidance related to the managed care rule’s program integrity regulations. The compendium is intended to provide guidance on (1) MCO program integrity requirements, (2) state audits of MCO encounter data that must be conducted at least every 3 years, and (3) MCO overpayments to providers. However, in September 2017, CMS officials told us that although they had a draft of the compendium, they did not have a timeline for issuing it, because the managed care rule was under review. As of May 2018, no issuance date has been set for the guidance.
Over half of the stakeholders we interviewed who identified federal responsibilities as an oversight challenge to managed care cited the complexity of federal regulations and the lack of federal guidance as key issues. The lack of available federal guidance resulting from delays in issuing such guidance is inconsistent with federal internal control standards that call for federal agencies to communicate quality information to those responsible for program implementation for the purposes of achieving program objectives and addressing program risks. Until such guidance is issued, stakeholders’ ability to effectively address challenges to payment risks in Medicaid managed care will continue to be hindered.

Gaps in Auditing

Although audits of providers that bill and are paid by MCOs can provide important information about payment risks and are included in the UPIC statement of work, only 14 of the 762 audits initiated by CMS contractors during the period of fiscal year 2014 through 2017 were managed care audits. Our review of three CMS contracted managed care audits indicated that the amount of inaccurate MCO payments to providers—as well as MCO and provider noncompliance with contracts—can be significant. For example, one audit of an MCO’s payments to selected providers found that 8.94 percent of payments were in error, representing over $4 million in overpayments for a 6-month period. This audit also identified a lack of provider compliance with requirements to provide preventive care services and care coordination to members, and a lack of MCO compliance with requirements to monitor member enrollment, resulting in the MCO paying providers for individuals who were not enrolled. CMS officials shared plans to increase collaborative audits in managed care in the future. CMS officials said the agency is in the early planning stages to pilot an audit of MCO providers in one state, with the goal of addressing challenges encountered in prior managed care audits.
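The audit finding above (an 8.94 percent error rate representing over $4 million in overpayments) reflects simple extrapolation arithmetic. The sketch below illustrates it; the total-payments figure is a hypothetical assumption chosen only so the result is consistent with the reported amount, not a number from the audit itself.

```python
# Hypothetical illustration of extrapolating an audited payment-error rate.
# The 8.94 percent rate comes from the audit cited above; the total-payments
# figure is an assumption for illustration only.

def estimated_overpayments(total_paid: float, error_rate: float) -> float:
    """Overpayments implied by applying a sampled error rate to total payments."""
    return total_paid * error_rate

six_month_payments = 45_000_000  # assumed MCO payments to sampled providers
print(estimated_overpayments(six_month_payments, 0.0894))  # about $4.0 million
```

At this scale, even a single-digit error rate translates into millions of dollars over a 6-month audit period, which is why the small number of managed care audits leaves a meaningful oversight gap.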
CMS is also in discussions with states and audit contractors to conduct potential audits and investigations in fiscal years 2018 and 2019. However, CMS and audit contractor officials identified several circumstances related to states’ contracts with MCOs that they said have created gaps in their auditing activity. According to CMS officials, states have reported a reluctance to conduct provider audits when states’ contracts with MCOs (1) allow the MCO to retain identified overpayments, or (2) do not explicitly discuss how identified overpayments are addressed. Officials with the two operating UPICs told us that CMS’s general guidance to them was to restrict their audits to states with MCO contracts where the states can recoup overpayments from the MCOs. According to one contractor, because few states have such contracts, the vast majority of the contractors’ audits are of providers paid on a fee-for-service basis. However, overpayments to providers can affect state and federal expenditures regardless of a state’s particular recoupment policy, because if they are not accounted for, they may increase future capitation rates paid to MCOs. Audit contractor officials also said the lack of access to MCO coverage and policy materials, and the inability to directly access encounter or claims data, prevent them from doing analyses to identify potential provider fraud, abuse, and waste for investigation. While CMS officials said they encourage states to participate in additional collaborative audits of managed care, they did not identify steps the agency is taking to address the circumstances that limit collaborative audits conducted. The lack of sufficient auditing in managed care is inconsistent with federal internal control standards that require federal agencies to identify risks through such activities as auditing. 
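The mechanism noted above, in which overpayments that are not accounted for can raise future capitation rates, can be shown with stylized arithmetic. All figures below are hypothetical, and real rate development applies trend, utilization, and other actuarial factors that this sketch omits.

```python
# Stylized sketch of how unaccounted-for overpayments can inflate a future
# per-member-per-month (PMPM) capitation rate. All figures are hypothetical.

def capitation_rate(reported_costs: float, overpayments: float,
                    member_months: int, adjust_for_overpayments: bool) -> float:
    """PMPM rate built from prior-period costs, optionally net of overpayments."""
    base = reported_costs - (overpayments if adjust_for_overpayments else 0)
    return base / member_months

costs = 120_000_000   # hypothetical MCO-reported costs for the base period
overpaid = 4_000_000  # hypothetical overpayments to providers in that period
months = 400_000      # hypothetical member months

print(capitation_rate(costs, overpaid, months, adjust_for_overpayments=True))   # 290.0
print(capitation_rate(costs, overpaid, months, adjust_for_overpayments=False))  # 300.0
```

In this stylized case, failing to net out the overpayments raises the rate by $10 per member month, and the state and federal governments would pay that excess for every enrolled beneficiary in the new rate period.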
Gaps in Monitoring

CMS has incomplete information on the scope and extent of MCO overpayments to providers, which results in a gap in monitoring MCO payments. Gaps in monitoring also exist because CMS lacks a process for consistently collecting information about overpayments and documenting that states account for overpayments when setting capitation rates. A few examples of these issues include the following:
- While CMS regularly reviews states’ proposed capitation rates, it lacks a process to consistently ensure any overpayments are accounted for by the states. According to an official with CMS’s Office of the Actuary, their review of state capitation rates does not require documentation of the amount of overpayments that occurred the prior year, how they were determined, or how they were incorporated into setting capitation rates. According to this official, issues between states and MCOs—such as contractual issues related to how overpayments are handled—are beyond the scope of their review and responsibilities. However, such information could be important to program integrity oversight; for example, 11 stakeholders we interviewed said that state capitation rates did not account for overpayments, because they had observed that overpayments were not reported by MCOs, were not monitored by the state, or both.
- Although some of CMS’s focused program integrity reviews have suggested that there is under-reporting of MCO overpayments to providers, CMS officials explained that these reviews are intended to assess state compliance with regulations, and not to determine the extent of under-reporting or why overpayments are under-reported.
- States’ and CMS’s contracted auditors have conducted only a few collaborative audits in managed care, even though such audits can identify overpayments made by MCOs to providers.
These gaps in monitoring of overpayments are inconsistent with federal internal control standards that require federal agencies to monitor operating effectiveness through audits and reviews. Without more complete information on the extent of overpayments and a process to ensure they are accounted for in state capitation rates, CMS is unable to ensure that MCOs are effectively identifying overpayments and documenting that they are accounted for when reviewing and approving state capitation rates. As a result, CMS cannot be sure that states are holding MCOs financially accountable for making proper payments, that states are paying accurate capitation payments to MCOs, or that the federal government’s share of Medicaid expenditures is accurate.

Conclusions

Managed care has the potential to help states reduce Medicaid program costs and better manage utilization of health care services. However, oversight of managed care is critical to achieving these goals. Payment risks are not eliminated under managed care; in fact, they are more complex and difficult to oversee. While CMS has taken important steps to improve program integrity in managed care—including strengthening regulations, developing guidance for states on provider enrollment in Medicaid managed care, and beginning to include managed care in the monitoring and auditing process—the efforts remain incomplete, because of delays and limited implementation. To date, CMS has not issued its planned compendium with guidance on program integrity in Medicaid managed care, taken steps to address known factors limiting collaborative audits, or developed a process to help ensure that overpayments to providers are identified by the states. Without taking actions to address these issues, CMS is missing an opportunity to develop more robust program integrity safeguards that will best mitigate payment risks in managed care.
Recommendations for Executive Action

We are making the following three recommendations to CMS:
- The Administrator of CMS should expedite the planned efforts to communicate guidance related to Medicaid managed care program integrity, such as its compendium, to state stakeholders. (Recommendation 1)
- The Administrator of CMS should eliminate impediments to collaborative audits in managed care conducted by audit contractors and states, by ensuring that managed care audits are conducted regardless of which entity—the state or the managed care organization—recoups any identified overpayments. (Recommendation 2)
- The Administrator of CMS should require states to report and document the amount of MCO overpayments to providers and how they are accounted for in capitation rate-setting. (Recommendation 3)

Agency Comments

We provided a draft of this product to the Department of Health and Human Services for comment. HHS concurred with these recommendations, stating that it is committed to Medicaid program integrity. HHS also cited examples of activities underway to improve oversight of the Medicaid program, such as training offered through the Medicaid Integrity Institute, and guidance provided in the Medicaid Provider Enrollment Compendium. The full text of HHS’s comments is reproduced in appendix IV. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and the Administrator of CMS. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Risk Level Designations by Stakeholder Group

We asked stakeholders involved in program integrity oversight to assign a level of risk—either low, some, or high—to six types of payment risks in Medicaid managed care. We interviewed officials in the following five organizations in each of 10 states: state Medicaid managed care office, state program integrity unit, Medicaid Fraud Control Unit (MFCU), state auditor’s office, and a managed care organization (MCO). (See table 1 for a description of each of these entities.) Figures 6 through 9 below illustrate the risk level stakeholders assigned to the four types of payment risk that are associated with states’ periodic capitation payments to MCOs. Figures 10 and 11 illustrate the risk level stakeholders assigned the two types of payment risks associated with MCO payments to providers. In some cases, stakeholders said they did not have enough information to assign a level of risk (“Don’t know”) or that one of the payment risks did not apply in their state (“Not applicable”). For some payment risks, the stakeholders whose primary responsibility is program integrity—state auditors, MFCU officials, and state Medicaid program integrity staff—were more likely to assign a higher level of risk than state Medicaid managed care officials and MCO officials who have responsibilities both for program operation and program integrity. For example, some of the risk levels cited in our interviews by state auditors, MFCU officials, and state Medicaid program integrity staff included the following:
- State auditors most frequently cited improper state capitation payments as high risk in the state.
- Three state auditors identified duplicate state payments as high risk.
- Just over half of all state auditors, MFCU officials, and state Medicaid program integrity staff identified inaccurate state capitation rates as some or high risk.
In contrast, state Medicaid managed care officials and MCO officials were less likely to assign high risk to payment types. Some examples include the following:
- No state Medicaid managed care officials cited a high level of risk for any of the six payment types.
- Two MCO officials cited a high level of risk for incorrect MCO fee-for-service payments. No other MCO officials cited a high level of risk for any of the other payment types.

Stakeholder views on the risk level of different payment risks are outlined in the figures that follow. Improper state capitation payments may occur when the state makes monthly capitation payments to an MCO for beneficiaries who are ineligible for or not enrolled in Medicaid, or who have died. (See fig. 6.) Inaccurate state capitation rates occur when a state establishes a capitation rate that is inaccurate primarily due to issues with the data used to set the rates. Data issues could include inaccurate encounter data, unallowable costs, overpayments that are not adjusted for in the rate, or older data that do not reflect changes in care delivery practices that affect MCO costs. (See fig. 7.) State payments to noncompliant MCOs occur when a state pays MCOs a periodic capitation per beneficiary even though the MCO has not fulfilled state contract requirements. Examples of unfulfilled contract requirements include an MCO failing to establish an adequate provider network, reporting inaccurate encounter data for services, or failing to report the amount of overpayments the MCO has made to providers. (See fig. 8.) Duplicate state payments to an MCO occur when a health care provider submits a fee-for-service claim to the state Medicaid program for services that were covered under the MCO contract. (See fig. 9.)
Incorrect MCO fee-for-service payments occur when the MCO pays providers for improper claims, such as claims for services (1) not provided, or provided by ineligible providers; or (2) that represent inappropriate billing, such as billing individually for bundled services or for a higher intensity of services than needed. (See fig. 10.) Incorrect MCO capitation payments occur when MCOs pay providers a periodic fixed payment without assurances they have provided needed services. (See fig. 11.)

Appendix II: Examples of Different Types of Payment Risks in Medicaid Managed Care

To identify examples of payment risks in Medicaid managed care, we reviewed Department of Health and Human Services’ (HHS) Office of Inspector General (HHS-OIG) publications and our prior work; obtained input from the National State Auditor’s Association; and conducted literature searches and key word searches of online databases, which identified additional state audits and investigations involving Medicaid managed care payments. We grouped these examples of payment risks into six broad categories or types based on similar key characteristics. Tables 10 through 15 provide examples of each of the six types of payment risks we identified: (1) improper state capitation payments, which are state capitation payments to MCOs for ineligible or deceased individuals; (2) inaccurate state capitation rates; (3) state payments to non-compliant managed care organizations (MCO); (4) duplicate state payments to MCOs and providers; (5) incorrect MCO fee-for-service payments to providers; and (6) incorrect MCO capitation payments to providers that have not complied with program requirements.
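Several of the payment risks catalogued above, such as capitation payments for individuals who are no longer enrolled, are at bottom record-matching problems. The sketch below is a minimal, hypothetical illustration; the data structures and field names are invented for this example and do not represent any state's actual system, which would also have to handle retroactive eligibility changes, death-record lags, and timing differences.

```python
# Minimal, hypothetical sketch of detecting improper capitation payments:
# flag monthly payments made for beneficiary IDs absent from the current
# enrollment file. Field names are invented for illustration.

def flag_improper_payments(payments, enrolled_ids):
    """Return payments made for beneficiaries not currently enrolled."""
    return [p for p in payments if p["beneficiary_id"] not in enrolled_ids]

payments = [
    {"beneficiary_id": "A1", "amount": 450.00},
    {"beneficiary_id": "B2", "amount": 450.00},  # disenrolled last month
    {"beneficiary_id": "C3", "amount": 450.00},
]
enrolled = {"A1", "C3"}
print(flag_improper_payments(payments, enrolled))  # flags the B2 payment
```

The same comparison pattern, run against fee-for-service claims instead of an enrollment file, could surface duplicate state payments for services already covered under an MCO contract.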
Appendix III: Challenges to Effective Program Integrity Oversight in Medicaid Managed Care

We asked 49 stakeholders involved in program integrity oversight to consider the following six challenges to effective program integrity oversight: (1) availability and allocation of resources; (2) access to and quality of data and technology; (3) state policies and practices; (4) provider compliance with program requirements; (5) managed care organization (MCO) management of program integrity; and (6) federal regulations, guidance, and review. Stakeholders were asked whether any of these presented a challenge to each of six types of payment risks in Medicaid managed care in their state, including (1) improper state capitation payments to MCOs for ineligible or deceased individuals; (2) inaccurate state capitation rates; (3) state payments to MCOs that have not fulfilled contract requirements; (4) state duplicate payments to MCOs and providers; (5) incorrect MCO fee-for-service payments to providers for improper claims; and (6) incorrect MCO capitation payments to providers that have not complied with program requirements. Figure 12 illustrates the number of times stakeholders cited a particular challenge for each of the payment risks. The frequency with which each of the challenges was identified differed to some extent for different payment risks. Some examples include the following:
- Quality of data and technology was the most cited challenge for duplicate state payments.
- State policies and practices was the most cited challenge for inaccurate state capitation rates.
- Provider compliance with program requirements was the most cited challenge for two payment types: (1) incorrect MCO fee-for-service payments to providers, and (2) incorrect MCO capitation payments to providers.
- Resource allocation was the second most cited challenge for five of the six payment risk types, although it was not the most cited challenge for any one payment risk type.
Appendix IV: Comments from the Department of Health and Human Services

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact name above, Tim Bushfield (Assistant Director), Mary Giffin (Analyst-in-Charge), Arushi Kumar, Julie Flowers, Drew Long, Vikki Porter, and Katie Thomson made key contributions to this report. Other staff who made contributions to the report were Jessica Broadus, Barbie Hansen, and Erika Huber.
Why GAO Did This Study

Federal spending on services paid for under Medicaid managed care was $171 billion in 2017, almost half of the total federal Medicaid expenditures for that year. Federal and state program integrity efforts have largely focused on Medicaid fee-for-service delivery where the state pays providers directly, rather than managed care, where it pays MCOs. As a result, less is known about the types of payment risks under managed care. GAO was asked to examine payment risks in Medicaid managed care. In this report, GAO (1) identified payment risks; (2) identified any challenges to state oversight and strategies to address them; and (3) assessed CMS efforts to help states address payment risks and oversight challenges. To do this work, GAO reviewed findings on managed care payment risks and oversight challenges from federal and state audits and other sources. GAO also interviewed 49 state program integrity stakeholders in 10 states selected based on size, the percent of population in managed care, and geography. Stakeholders included the state Medicaid managed care office, state Medicaid program integrity unit, state auditor, Medicaid Fraud Control Unit, and an MCO.

What GAO Found

Under Medicaid managed care, managed care organizations (MCO) receive a periodic payment per beneficiary in order to provide health care services. Managed care has the potential to help states reduce Medicaid program costs and better manage the use of health care services. However, managed care payments also have the potential to create program integrity risks. GAO identified six types of payment risks associated with managed care, including four related to payments that state Medicaid agencies make to MCOs, and two related to payments that MCOs make to providers.
Of the six payment risks GAO identified, state stakeholders responsible for ensuring Medicaid program integrity more often cited the following two as having a higher level of risk: incorrect fee-for-service payments from MCOs, where the MCO paid providers for improper claims, such as claims for services not provided; and inaccurate state payments to MCOs resulting from using data that are not accurate or including costs that should be excluded in setting payment rates. GAO also identified multiple challenges to program integrity oversight for managed care programs. Stakeholders most frequently cited challenges related to (1) appropriate allocation of resources, (2) quality of the data and technology used, and (3) adequacy of state policies and practices. Some stakeholders offered strategies to address these challenges, including collaborating with other entities to identify problem providers and fraud schemes, as well as having effective data systems to better manage risks. The Centers for Medicare & Medicaid Services (CMS), which oversees Medicaid, has initiated efforts to assist states with program integrity oversight for managed care. However, some of these efforts have been delayed, and there are also gaps in oversight. CMS's planned Medicaid managed care guidance to states has been delayed due to the agency's internal review of the regulations; as of May 2018, no issuance date had been set for the guidance. CMS established a new approach for conducting managed care audits beginning in 2016. However, only a few audits have been conducted, with none initiated in the past 2 years. In part, this is due to certain impediments identified by states, such as the lack of some provisions in MCO contracts. CMS has updated standards for its periodic reviews of the state capitation rates set for MCOs. 
However, overpayments to providers by MCOs are not consistently accounted for in determining future state payments to MCOs, which can result in states' payments to MCOs being too high. Lack of guidance and gaps in program integrity oversight are inconsistent with federal internal control standards, as well as with CMS's goals to (1) improve states' oversight of managed care; (2) use audits to investigate fraud, waste, and abuse of providers paid by MCOs; and (3) hold MCOs financially accountable. Without taking action to address these issues, CMS is missing an opportunity to develop more robust program integrity safeguards that will help mitigate payment risks in Medicaid managed care.

What GAO Recommends

GAO recommends that CMS (1) expedite issuing planned guidance on Medicaid managed care program integrity, (2) address impediments to managed care audits, and (3) ensure states account for overpayments in setting future MCO payment rates. The Department of Health and Human Services concurred with these recommendations.
Background

OPM administers two defined-benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees. The Civil Service Retirement System (CSRS) provides retirement benefits for most federal employees hired before 1984. The Federal Employees Retirement System (FERS) covers most employees hired in or after 1984, and provides benefits that include Social Security and a defined contribution system. If a federal employee becomes disabled while employed in a position subject to the retirement system, and the employee meets the disability eligibility requirements, the employee may apply for a disability retirement. Agencies’ human resources offices, payroll offices, and OPM are responsible for compiling and processing federal employees’ retirement applications. The process begins when an employee submits a paper retirement application to his or her agency’s human resources office. OPM’s guidance states that both agencies and payroll offices must certify that specific portions of the application are accurate. OPM employees then ensure that the package includes all the necessary information. An OPM adjudicator processes the retirement package, which contains the application documents from human resources and payroll. For example, the package includes the separation form, which finalizes the date that the employee will retire. The adjudicator determines if the eligibility requirements are met for an annuity as well as health and life insurance into retirement, and calculates the annuity. The process is completed when the individual begins receiving regular monthly benefit payments, as illustrated in figure 1. According to OPM officials, OPM then stores the paper retirement file until (1) all benefits have been applied for and paid to all eligible heirs, and (2) a specified amount of time has passed.
Over several decades, OPM has attempted to modernize the retirement application process by automating paper-based functions and replacing antiquated information systems. However, as we have highlighted in our past work, the agency has experienced numerous challenges and has a history of undertaking modernization projects that did not yield the intended outcomes. Specifically, we found that OPM’s efforts over 2 decades to modernize its processing of federal employee retirement applications were fraught with information technology (IT) management weaknesses. In 2005, we made recommendations to address weaknesses in project, risk, and organizational change management. In 2008, as OPM was on the verge of deploying an automated retirement processing system, we reported deficiencies in, and made recommendations to address, additional weaknesses in system testing, cost estimating, and progress reporting. In 2009, we reported that OPM continued to have deficiencies and made recommendations to address these and other weaknesses in the planning and oversight of the agency’s modernization effort. OPM began to address these recommendations; however, in February 2011, it terminated the modernization effort. As figure 2 shows, 31.6 percent of federal employees who were on board as of September 30, 2017, will be eligible to retire in the next 5 years. Some agencies have particularly high levels of employees eligible to retire in the next 5 years. OPM’s reporting on its application processing timeliness also shows generally longer processing times from fiscal year 2006 to 2017, with occasional improvements that were not sustained. We found it difficult to compare OPM’s performance across years because the performance measures have changed over time. For example, in 2009 through 2011, OPM’s performance measure was the average number of days to process applications. During this time period, OPM met its target except for 1 year when OPM reported 108 days and the target was 45 days.
In contrast, in 2014 through 2017, OPM’s performance measure was the percentage of applications processed in 60 days. During this time period, OPM did not meet its target of processing 90 percent of applications in 60 days, as the percentage ranged from 57 to 79 percent each fiscal year.

OPM Attributes Processing Delays to Paper-Based Applications, Staffing, and Missing Information in Applications

Paper-based applications. Despite past attempts to modernize its retirement applications processing operation, OPM currently requires federal employees to submit their retirement application on paper. According to OPM officials, OPM has automated some front-end processing steps, despite various challenges, such as OPM’s and agencies’ legacy systems lacking functionality or integration, inaccurate data due to manual data-entry errors, and lack of real-time data because data are stored in inconsistent formats at multiple locations. OPM officials reported that payroll providers can electronically send OPM 59 data elements, which allows OPM to authorize interim annuity payments to 50 percent of new applicants, as well as initiates other processing functions. However, as shown in figure 3, subsequent processing steps still require manual intervention, including assembling paper documents into folders, ensuring documents are in proper order, and addressing missing or incomplete information. Staffing capacity. OPM attributed delays to not having enough staff to address its peak workload season, called the surge period. According to OPM, it hired additional staff in 2017 and 2018 to process applications throughout the year, but overtime pay was needed to increase staffing capacity during surge periods. Also, officials reported that hiring freezes, continuing resolutions, and other budget constraints affected hiring numbers and created hiring delays over the past 5 fiscal years.
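The surge dynamic described in this section can be sketched as a simple monthly inventory model: unprocessed inventory grows whenever receipts outpace processing. The figures below are rounded approximations of the monthly averages discussed in this report, and the starting inventory is an assumption for illustration.

```python
# Simple monthly inventory model of a processing surge. Receipt and
# processing figures are rounded approximations of the averages GAO
# reports for fiscal years 2016-2018; the starting inventory is assumed.

def month_end_inventory(start: int, received: int, processed: int) -> int:
    """Unprocessed applications remaining at month end."""
    return start + received - processed

inventory = 14_000  # assumed inventory entering the surge
schedule = [
    ("January", 13_200, 8_200),   # surge receipts, normal processing rate
    ("February", 13_200, 8_200),
    ("March", 7_200, 11_000),     # receipts taper; overtime raises output
]
for month, received, processed in schedule:
    inventory = month_end_inventory(inventory, received, processed)
    print(month, inventory)  # backlog peaks at 24,000 at the end of February
```

The model shows why the backlog persists into spring: two months of surge receipts add roughly 5,000 unprocessed applications each, and even March's elevated processing rate works off only part of the accumulation.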
During the surge, OPM officials said they receive the bulk of applications starting in mid-January and continuing through February (6 weeks). However, the effect of the surge workload lasts until mid-April because OPM takes about 60 days on average to process an application. Figure 4 illustrates the flow of applications OPM received and processed in fiscal years 2016 to 2018. During the months of January and February in this time period, OPM received an average of about 13,200 applications per month, a considerable increase over its average of about 7,200 per month at other times of the year. Despite the increase in applications, OPM’s application processing numbers remained essentially the same in January and February (8,200 per month) compared to other months of the year (8,100 per month), thus increasing OPM’s inventory of unprocessed applications, which ranged from approximately 11,400 to 24,200 for the time period shown. The increase in inventory was partly mitigated in March of each year, when OPM processed an average of about 11,000 applications. OPM officials reported that they processed more applications in March because they used overtime pay and flexible staffing across work units, such as temporarily shifting staff to a different unit to expedite workflow; screened for complete applications; and received fewer applications as surge periods ended. We discuss these and other actions OPM has taken to increase staffing capacity during surge periods later in the report. Incomplete applications. According to OPM officials, in up to 40 percent of applications, OPM is missing information needed to finalize processing, which increases processing time. Incomplete applications generally fall into two categories: Missing information. OPM estimates about 10 percent of applications are missing information, such as a form or signature.
For example, OPM officials said that documentation for the applicant's preceding 5 years of health insurance coverage, which is necessary to continue health insurance into retirement, was often missing. Waiting for applicant decisions. OPM estimates about 30 percent of applications are delayed while waiting for applicant decisions. For example, OPM stated that it must wait 30 days for the applicant to select an annuity option if deposits or redeposits are made. In addition to these three root causes, OPM officials reported that other factors, such as legislative changes, can also cause processing delays. For example, changes in the law may require OPM to revise its processes and train its staff, taking time away from core processing activities.

OPM Conducts Limited Assessments of Its Processing Operation and Assistance to Agencies

IT Modernization Plan Remains Unclear

Subsequent to terminating its retirement modernization effort in February 2011, OPM refocused its efforts and in 2013 developed a new strategic vision for modernizing retirement applications processing. The 2013 strategic vision envisioned a paperless system that would authorize accurate retirement benefit payments in a timely manner, answer customers' questions, and promote self-service account maintenance. According to OPM officials, the strategic vision consists of five key initiatives, which are in varying stages of development and implementation, as shown in table 1. Partly in response to canceling its third attempt to automate the processing of federal retirement applications in February 2011, OPM is now taking an incremental approach towards modernizing its retirement IT systems. According to OPM officials, they also recognize the need to improve OPM's enterprise architecture before implementing significant modernization efforts.
As we have previously reported, these steps can help agencies successfully modernize and maintain IT environments. OPM's current approach provides a framework to help the agency achieve its overall IT modernization strategic vision. However, OPM officials provided no further explanation of how retirement IT modernization activities would proceed, such as describing proposed time frames and estimated cost ranges, even for initial project phases. Likewise, OPM's Inspector General recently reported that the agency's fiscal year 2018 IT modernization expenditure plan did not account for total costs or identify the full scope of OPM's modernization effort. Industry best practices and IT project management principles stress the importance of sound planning for system modernization projects. These plans should identify key aspects of a project, such as scope, responsible organizations, costs, schedules, and risks. Additionally, planning should begin early in the project's life cycle and be updated as the project progresses. Further, according to federal internal control standards, management should define objectives in specific and measurable terms, such as defining what is to be achieved, who is to achieve it, how it will be achieved, and time frames for achievement. OPM officials said that additional IT modernization work is dependent on sufficient funding, support from the Office of the Chief Information Officer, and development of a technical enterprise architecture roadmap. These components are important. However, they do not preclude OPM from establishing a basic project management plan that includes objectives, estimated cost ranges, and proposed time frames for its initial project phases. Without a plan that is consistent with IT project management principles, OPM is less able to articulate a path forward in measurable terms and assess performance towards achieving its objectives.
Similarly, without an electronic application system, OPM is less able to automatically verify information upfront when the application is submitted and notify applicants of any discrepancies prior to accepting the application. The administration's proposal to move the retirement application processing operation to the General Services Administration (to be renamed as the Government Services Agency) has created additional uncertainty for OPM. Potential changes in organizational affiliation, policy, budget, and staff may make it difficult for OPM to plan for large-scale changes in its operations. Nevertheless, continuing to develop plans to modernize retirement IT systems seems prudent, given that the details of the reorganization are still unknown and that the move to the General Services Administration may not occur in the near term, or at all. Further, IT modernization is a key theme in the March 2018 President's Management Agenda and will likely be a key driver in changing agency operations for years to come.

OPM Lacks Performance Information That Could Improve Processing Timeliness and Staffing Capacity

We have previously reported that to successfully implement reforms and improve their operations and results, agencies need to robustly manage their performance. This involves not only measuring progress toward goals, but also using performance information (i.e., data collected to measure progress towards agency goals) to identify and correct problems, improve program implementation, and make other important management and resource allocation decisions. However, we found that OPM does not use performance information on processing timeliness to manage for results. In addition, we found that OPM conducted limited assessments of its processing data and did not assess the effectiveness of its staffing actions.

Performance Goals and Measures

OPM's fiscal year 2019 processing timeliness goal is to process all retirement applications in an average of 60 days or less.
The related performance measure is the average number of days to process retirement applications. However, we found that OPM did not use its timeliness performance measure to manage for results or provide external stakeholders and applicants a clearer picture of processing time. Performance measures not used to manage. Based on our 2017 survey of federal managers, we found that OPM managers agency-wide reported a statistically significant decrease since 2013 in their use of performance information to develop program strategy, allocate resources, and take corrective actions. Similarly, for this review, we found that OPM could enhance its use of performance information to manage for results in retirement applications processing. OPM has not established additional performance measures for the various parts of the application review and processing operation that would contribute towards achieving its overall processing timeliness goal. For example, OPM does not measure timeliness or have related performance goals for its various work units that process applications. OPM officials do not use such performance goals and measures to manage for results in part because they do not perceive the information to be relevant to reducing processing delays. For example, OPM officials said that the new timeliness performance goal facilitates planning but does not improve processing time or otherwise provide better service to retirees. According to these officials, OPM does not have a requirement for completing its various processing steps within a certain amount of time because each application is different, and they do not want staff to rush and potentially make mistakes, thereby causing rework. In comparison, agencies and payroll centers that submit these applications to OPM are required to do so within a certain time frame. Similarly, OPM has not established a timeliness performance goal or measure for completing its review of applicants' eligibility for disability retirement.
OPM officials said that OPM does not have a performance goal or measure for the review for disability retirement eligibility because it has not reached a steady processing level for these applications. However, OPM did not provide a time frame for when it expects to reach a steady processing level, nor did officials explain why OPM has not established performance goals and measures based on past performance or other benchmarks. In comparison, the Social Security Administration, through partnerships with state agencies, also reviews applications for disability benefits eligibility and has established performance goals for both the accuracy and processing time of this review process. As of November 2018, OPM officials reported that they are collecting data to develop a separate performance goal for measuring the timeliness of reviewing disability retirement eligibility and expect to establish a performance baseline within the next 3 to 6 months. The lack of management practices to encourage and enhance the use of performance measures at the operational level can make it challenging for OPM to use performance information to manage operations, such as identifying problem areas that cause delays and implementing corrective actions, and to make decisions, such as better targeting limited resources based on risk or other priorities. Unclear performance measures. OPM officials reported that the new processing timeliness goal also provides agencies and applicants a clearer, more realistic expectation of processing time. However, none of the four agencies we interviewed considered the new goal to be clearer or more helpful than past goals. The Departments of Defense and Health and Human Services and the U.S. Postal Service were unaware, prior to our discussions, that any such goal had ever been established.
We found that this performance goal was unclear because it lacked explanatory information that would make it more meaningful for applicants and external stakeholders, such as agency benefit officers and congressional oversight committees. Specifically, the new performance goal and related measure are expressed as an average, which allows for potentially wide variation in processing times while still meeting OPM's goal. In past work, we have reported that including explanatory information on goals and measures helps improve the usefulness of performance information. Without explanatory information, reporting an average can obscure aspects of OPM's processing timeliness, such as the number and types of applications OPM processes faster or slower than 60 days and the range of processing times. Also, OPM's processing timeliness goal and measure do not include all phases of the application review process, specifically the time OPM takes to determine eligibility for disability retirements, which can be lengthy. We have previously reported that performance information could be more useful if it identified significant data limitations and their implications for assessing performance. OPM officials reported that the processing timeliness goal and measure exclude data on disability retirement applications pending approval because OPM does not consider reviewing disability retirement eligibility as part of processing. OPM includes disability applications in its processing timeliness goal after these applications have been approved. Not providing explanatory information about what the processing goal includes or excludes can lead to agencies' and applicants' false expectations and confusion about the amount of time OPM is taking to review applications.

Limited Assessments of Processing and Staffing Strategies

OPM has implemented various strategies for improving processing timeliness, as discussed below.
However, we found multiple examples where OPM did not assess whether the strategies were effective. Assessment of processing applications. According to OPM officials, senior and frontline managers review processing data weekly, such as the age of pending applications, to identify potential concerns and adjust staffing and workload if necessary. However, we found that OPM's performance information may be of limited use for assessing processing delays because the data lacked elements that would provide a more complete measure of performance. For example, we found that OPM did not review about half of applications government-wide for errors in fiscal years 2014 to 2016 combined, including all disability retirement applications. Likewise, OPM officials said that the number of unprocessed applications in inventory does not include disability retirement applications still pending approval. As a result, OPM's performance information for both application errors and inventory does not reflect the full extent of processing delays because various applications have been excluded. OPM officials were unable to explain to us why or how they decided to exclude certain applications. Also, OPM generally does not assess the accuracy of the data it collects on application errors. OPM most recently reviewed the accuracy of the error data in 2014, despite additional feedback from agencies that some errors charged to them were incorrect. We also found outliers in the data that OPM officials were unable to explain. Assessment of staffing actions. OPM has taken actions to increase staffing capacity in retirement operations throughout the year, as well as during surge periods, as shown in table 2. However, we also found that OPM does not assess the effectiveness of its staffing actions, even though OPM officials reported that they are consistently looking for opportunities to improve OPM's current processes.
For example, OPM officials said that staffing actions improved efficiency but were unable to provide supporting data or documentation of their assessments, such as how often cross-functionally trained staff worked in other units and the resulting improvements in output or quality. Likewise, OPM has not assessed the results of using overtime pay. As shown in table 3, any increased use of overtime pay during fiscal years 2013 to 2017 did not increase the number of applications processed. OPM officials said that overtime pay does not necessarily translate into increased output because some actions performed during overtime, such as quality review, do not contribute towards finalizing additional applications. They added that other factors can decrease production, such as reduced staff. Reduced staffing between fiscal years 2013 and 2016 may have contributed to decreased output, even with the use of overtime. However, OPM officials were unable to provide the number or types of positions that were reduced. Likewise, OPM does not measure how and to what extent the various factors affect output. OPM officials also said that they use overtime pay during surge periods to move applications through processing during its busiest time of the year, thereby decreasing an otherwise longer waiting time for applicants. However, OPM does not measure overtime productivity, or productivity in general, and cannot correlate overtime data with applications processing data or outcomes. OPM officials explained that they expect staff to be equally productive during overtime as they are during regular work time. Although OPM officials may set these productivity expectations, they do not collect productivity data to measure whether and to what extent staff meet these expectations. Further, OPM officials could not provide basic staffing data, such as the number of staff who have processed retirement applications for the past 5 years or the number of processing staff paid overtime.
Such information is valuable because it provides the basis for assessing whether OPM’s staffing actions are improving performance and meeting their intended purpose. We have previously reported that to be useful, performance information must meet users’ needs for completeness, accuracy, consistency, timeliness, validity, and ease of use. Other attributes that affect the usefulness of information include, but are not limited to, relevance, credibility, and accessibility. Further, federal internal control standards state that management should use quality information and design control activities to achieve the agency’s objectives. Examples of control activities include top-level reviews of performance compared to plans, goals, and objectives; management reviews at the functional or activity level; comparing and assessing related data sets so that relationships can be analyzed and appropriate actions taken; and clearly documenting control activities, transactions, and other significant events so that the documentation is readily available for examination. Federal internal control standards also state that management should implement control activities through policies. OPM officials reported that OPM’s systems were not robust enough to produce better performance information beyond basic processing data. OPM officials added that they have limited resources to assess data on strategies intended to improve processing timeliness. As such, OPM could consider a risk-based approach to collecting data and conducting assessments. For example, OPM could prioritize assessments of more resource-intensive activities over less resource-intensive activities. OPM could also focus its assessments on situations that could potentially introduce processing errors or data inconsistencies, such as when regulatory or process changes are implemented, or when staff are newly employed or are taking on new responsibilities. 
OPM officials also said that processing time is one of multiple factors they use to determine the effectiveness of staffing actions. However, as noted earlier, processing times have not consistently improved, further underscoring the need for better data and assessments of strategies intended to improve processing timeliness. The lack of useful performance information, and of policies and procedures for conducting assessments, can hinder managers from identifying the causes of, and corrective actions for, problems in existing programs, as well as from developing and prioritizing strategies and related resources for future programs.

OPM Provides Assistance to Agencies, but Lacks a Robust Process for Assessing That Assistance

To obtain agencies' perspectives on the retirement application process and better understand their coordination and collaboration with OPM, we interviewed four selected agencies using a standardized set of questions in a semi-structured interview format. After we met with the agencies, we discussed the agencies' perspectives on OPM's assistance with OPM officials and incorporated their comments, as appropriate.

Selected Agencies Have Mixed Perspectives on OPM Assistance

OPM provides four main types of assistance to agencies: written guidance, training, communication through assigned liaisons and email, and monthly error reports. Guidance. OPM provides written guidance to agencies on submitting retirement applications through the Civil Service Retirement System and Federal Employees Retirement System Handbook for Personnel and Payroll Offices and Benefit Administration Letters. The letters provide guidance to agencies on various topics, such as retirement policy and process issues. The most recent version of the handbook posted on OPM's website is from 1998. OPM officials reported that the handbook is updated on an ongoing basis and as resources permit. Of the 47 chapters on OPM's website, five had been updated between 2013 and 2017.
NASA reported that OPM’s handbook is out of date and found it unreliable because some of the information is no longer accurate. All of the four selected agencies reported that the Benefit Administration Letters were very important. The Department of Defense (DOD), the Department of Health and Human Services (HHS), and the U.S. Postal Service (USPS) reported that the Benefit Administration Letters were issued at about the right frequency. In addition, DOD, the National Aeronautics and Space Administration (NASA), and USPS also stated that the Benefit Administration Letters were helpful or very helpful. Training. OPM officials reported that OPM provides training opportunities to agencies which include semi-annual multi agency conferences, training for benefit officers, webcasts, self-paced online training, and onsite training if requested. DOD and HHS reported that they were satisfied with the training, and NASA and USPS reported that they were dissatisfied. For example, NASA reported that OPM’s training would be improved with more virtual trainings that are shorter. NASA also reported that cost constraints prohibited sending all retirement staff to in-person training while virtual training is accessible to more staff. Liaisons and emails. OPM officials stated that it communicates with agencies by assigning all agencies a liaison to contact for technical assistance and communicating directly via email. For example, HHS reported that its previous liaison had helped locate missing records, such as a federal employee’s federal service history. All of the four selected agencies reported that the interaction with the liaisons was very important, and DOD, NASA, and USPS reported that the interactions were very helpful and about the right frequency. OPM also stated that it communicates with benefit officers and other interested parties through emails. 
USPS reported that the emails from OPM included Benefit Administration Letters and announcements about meetings and upcoming trainings. DOD, NASA, and USPS reported that emails were the most helpful form of communication with OPM. Error reports. OPM provides agencies with a monthly error report after it analyzes each agency's batch of applications. This report includes information on the type of error found and the volume of applications with the same error, according to OPM. The error report covers retirement applications for those who retired while working for the federal government, which, for example, does not include disability retirement applications, according to OPM officials. OPM officials reported that the intent of the error reports is to educate the agencies. DOD and USPS reported that the error reports were helpful for identifying application errors. However, all four selected agencies reported that aspects of the error reports were not user-friendly. For example, the error reports are in a format that cannot be manipulated, thereby requiring agencies to manually enter data to track the types of errors found, analyze the data, and share the information internally. Such manual entry increases the risk of data entry errors that could compromise the accuracy of the original data. The four selected agencies also reported that the error reports lack some types of information, such as clear descriptions of errors, data on trends, and information on disability retirement applications.

OPM Conducts Limited Reviews of Its Assistance to Agencies

OPM officials reported that they review two of the four types of assistance (guidance and training) and also conducted a review of error reports in 2014. They also stated that they had taken some actions in response to agency feedback. However, OPM did not provide documentation of its assessments of guidance or training. Guidance.
OPM officials reported that they continue to evaluate their guidance and have taken some actions in response to agency feedback. For example, in response to agencies' feedback that they experienced difficulty obtaining paper documentation of 5 years of health insurance coverage, OPM officials reported that they developed a new form that agencies could use to certify that employees had the required coverage, which has resulted in decreased errors. However, OPM could not provide us with documentation of its reviews of its guidance. In addition, OPM had no schedule for updating guidance to agencies, according to OPM officials. Training. OPM officials reported that they receive agency feedback on training in multiple ways and have taken some actions in response. For example, OPM officials said that agencies provide feedback on trainings informally during conversations with liaisons and at in-person trainings. OPM officials also said they read training evaluation forms, which include multiple-choice questions on the value of the different aspects of the training and an area to write any comments or suggestions. In addition, OPM periodically surveys benefit officers on their training, including open-ended questions about how and on what topics the respondent would prefer to receive training. However, the benefit officer survey does not include broader questions about how the training or other types of assistance could better meet the needs of agencies. In addition, OPM officials reported that in response to agency feedback, they made improvements to the class offerings, such as enhancing training on military discharges. OPM officials also reported that one of the actions they take in response to the most common errors that agencies make in retirement applications is to provide training on these topics.
For example, OPM officials reported that they identified common errors on federal health benefits and military service documentation and subsequently provided training on both topics. OPM officials did not provide us with documentation of their reviews of agency feedback on training. Error reports. In 2014, OPM conducted a review of the errors that 12 agencies disputed in the agencies' error reports. OPM officials reported that the review concluded that less than 1 percent of the errors OPM had incorrectly identified would have affected the annuitant. According to OPM officials, the cost of reviewing and adjusting the error rate for accuracy outweighs the benefits. In addition, the four selected agencies reported that they had shared information with OPM on errors that the agencies thought were erroneously identified as errors. The four selected agencies reported that OPM had not changed the error rates in response. In addition, HHS and USPS reported that OPM did not share the information on disputed errors with its staff who audit the applications for errors. USPS officials also stated that OPM had not used this information to train its staff. OPM's fiscal year 2018 budget justification cited partnering with agencies to help them submit complete and accurate retirement packages for quicker processing. While OPM officials reported that they have reviewed certain types of assistance, they have limited or no documentation of the analysis or the results of these reviews. Federal internal control standards state that management should compare actual performance to expected results and evaluate and document monitoring results. The standards also state that management should complete and document corrective actions to remediate control deficiencies in a timely manner.
In relation to training, which is one of the types of assistance OPM provides to agencies, we have also reported that a leading training investment practice is to evaluate the benefits achieved through training, such as having a formal process for evaluating improvement in performance and tracking the impact of training on the agency's performance goals. Another leading practice is to compare the merits of different delivery mechanisms (such as classroom or computer-based training) and determine what mix of mechanisms to use to ensure efficient and cost-effective delivery. OPM officials reported that the effectiveness of their assistance to agencies is a contributing factor to decreased errors in retirement applications. For example, according to OPM, the percentage of complete applications submitted government-wide improved from 77 percent in fiscal year 2010 to about 92 percent in fiscal year 2017. OPM officials also noted that they assess the effectiveness of their guidance and trainings, and of any modifications, by observing whether particular types of errors decrease overall. OPM officials provided us a list of the most common errors for fiscal year 2017, such as a missing marriage certificate. Although OPM officials have stated that they review two of the four types of assistance (guidance and training), OPM lacks a robust process for assessing and documenting its analysis and findings regarding all forms of the assistance it provides to agencies. This makes it more difficult for OPM to clearly demonstrate the effectiveness of its assistance. Thus, for example, there is limited understanding as to whether OPM's training is being delivered through the most efficient and cost-effective mix of mechanisms. OPM may be missing opportunities to better partner with agencies by tailoring its assistance to help agencies improve their own processes and training.
Assessments that result in enhancing OPM's assistance to agencies could improve the completeness of applications submitted, which could in turn improve OPM's application processing time. With respect to the agency error report, federal internal control standards state that management should communicate quality information externally so that external parties can help the entity achieve its objectives, and should periodically evaluate its methods of communication so that it communicates quality information. OPM officials reported that the current structure of the agency error reports was designed to capture the large overarching error-based issues many agencies face, such as applicants electing more life insurance coverage than permitted. OPM officials reported that they have not solicited input from agencies about the usefulness of the monthly error reports, but agencies regularly provide feedback to their OPM liaisons. OPM officials reported that they are evaluating the trends in the feedback. However, revising the structure of the current error reports would not be cost-effective, according to OPM officials. They also reported that they are considering including disability applications in future error reports. The current format of the agency error report may limit its usefulness to agencies in improving their retirement applications and educating staff on how to address or minimize errors. Without user-friendly error reports, such as one that could be manipulated in Excel, agencies could find it more challenging to efficiently share the data among agency divisions and for the divisions to further sort the data. This challenge may be particularly burdensome at agencies composed of numerous subagencies that share responsibility for preparing higher volumes of retirement applications.
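To illustrate the kind of sorting and trend analysis agencies said the current report format prevents, the short sketch below tallies hypothetical error records by type. The record layout and error descriptions are assumptions for illustration only, not OPM's actual error-report format.

```python
# Illustrative sketch only: the record layout and error types are
# hypothetical, not OPM's actual error-report format.
from collections import Counter

# A machine-readable report might carry one record per flagged
# application: (application ID, error description).
error_records = [
    ("A-1001", "missing marriage certificate"),
    ("A-1002", "missing signature"),
    ("A-1003", "missing marriage certificate"),
    ("A-1004", "excess life insurance elected"),
]

# Tally errors by type so a benefits office could spot trends and
# target staff training without re-keying data by hand.
error_counts = Counter(description for _, description in error_records)
for description, count in error_counts.most_common():
    print(f"{description}: {count}")
```

With data in a sortable form such as a spreadsheet or CSV, the same tally could be produced with a pivot table, and the records could be filtered by division before being shared internally.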
Selected Agencies Have Developed Strategies for Compiling Accurate Applications

We found that the four selected agencies we interviewed used three strategies to compile accurate retirement applications, as shown in figure 5 below. Some agencies also had additional strategies, such as tracking identified issues in applicants' retirement applications. Preparing employees for retirement. The four selected agencies provide retirement counseling and had an agenda or a checklist to guide the discussion. Some of the topics included designating beneficiaries and eligibility to continue health insurance into retirement. DOD, HHS, USPS, and NASA also reported providing additional assistance to prepare employees for retirement. DOD's website had calculators that could be used for estimating a Thrift Savings Plan annuity and survivor benefits. HHS stated that its employees have access to online pre-retirement seminars and financial planning resources. In addition, USPS has an employee retirement kit that includes health insurance information, general retirement information, and retirement forms, such as for documenting life insurance and retirement effective date. NASA also prepares employees for retirement in two additional ways. First, NASA reviews new employees' electronic Official Personnel Folders, which contain their federal employment history, and makes corrections as needed. NASA officials stated that the agency tries to resolve any issues in an employee's electronic Official Personnel Folder rather than waiting until the employee retires. Samples of these files are then audited. Second, NASA stated that it encourages employees to ask for an annuity estimate every year for the 7 years prior to planned retirement. NASA reported that each annuity estimate generated includes a review of an employee's files, and enables the agency to identify and address any errors. Educating and training staff that compile retirement applications.
The four selected agencies hold periodic staff meetings that include discussions of retirement applications. For example, NASA’s meeting includes a discussion of common errors to avoid, unique or complex retirement cases, process improvements, and lessons learned. The four selected agencies also conduct retirement application training. For example, DOD provided a multiday training that included topics such as creditable service, annuity computation, and retirement eligibility. HHS also stated that it partnered with its payroll provider to present the payroll side of retirement processing, including retirement application processing and disability retirement processing. DOD, HHS, NASA, and USPS also reported that new staff is mentored by experienced staff. Procedures for compiling applications. The four selected agencies have procedures for compiling applications. For example, the four selected agencies have checklists to help staff compile the required documents. DOD’s checklist includes a list of more than 30 documents in sequential order with instructions on which documents to include for each of the two retirement plans. In addition, the four selected agencies reported having a system to track the process of compiling applications. DOD’s, NASA’s, and USPS’ respective systems also include tracking identified issues. For example, USPS’ system monitors the overall progress of each application, as well as tracks the status of each identified issue, such as missing documents, and whether the issue had been resolved. The four selected agencies also conduct audits on some or all of the applications before submitting applications to OPM. The agencies reported that the audits are used to increase accuracy of submitted applications and provide feedback to staff on any identified errors. For example, DOD has an audit checklist with more than 30 items to review, such as whether a marriage certificate is included if applicable and if the application is signed. 
Conclusions Delays in processing retirement applications for federal employees have been a longstanding problem. According to OPM, it has identified root causes for the delays and has developed and implemented strategies to improve its processing operation. For example, the agency has developed a strategic vision for modernizing the current paper-based application, and employed strategies to address staffing capacity and minimize the number of incomplete applications. However, without improving its data collection and assessments of its strategies, OPM cannot know whether its strategies are effective at reducing the delays, or could be modified to yield better results. Furthermore, OPM’s plan for modernizing its information technology (IT) retirement processing lacks cost estimates and timelines, which means there are no measurable results with which to evaluate resource needs or interim progress. In addition, although OPM has established a performance goal on processing timeliness, its related performance measure does not include explanatory information that could make it more meaningful. OPM also has not set performance measures for various parts of the application review and processing operation that could provide clearer insights into where improvements may be needed. Lack of quality performance information hinders applicants and external stakeholders from understanding OPM’s timeliness in processing applications, and limits OPM from better managing and monitoring program performance. Furthermore, OPM lacks a robust process for assessing its assistance to agencies, which makes it difficult for OPM to demonstrate the effectiveness of its assistance. Potential organizational changes and other external factors have created additional uncertainty for OPM. These challenges notwithstanding, approximately 100,000 federal employees depend on OPM each year to process retirement benefits, such as life and health insurance, in a timely manner. 
As such, OPM should endeavor to reduce processing delays, monitor and report on its progress through better performance information, and effectively partner across the federal government to improve processing timeliness. Recommendations for Executive Action We are making the following six recommendations to OPM: The Associate Director of OPM’s Retirement Services, working in coordination with the Chief Information Officer, should develop, document, and implement a Retirement Services IT modernization plan for initial project phases that is consistent with key aspects of IT project management, such as determining objectives, costs, and time frames for each initial phase. (Recommendation 1) The Associate Director of OPM’s Retirement Services should adopt management practices to enhance the use of performance information on processing timeliness to inform how OPM manages operations, identifies problem areas, and allocates resources. For example, OPM could enhance use of performance measures at the operational level or establish a timeliness performance goal for reviewing disability retirement eligibility. (Recommendation 2) The Associate Director of OPM’s Retirement Services should provide explanatory information, such as the range of processing times and the exclusion of disability retirement eligibility determinations, as part of the performance measure on processing timeliness. (Recommendation 3) The Associate Director of OPM’s Retirement Services should develop and implement policies and procedures for assessing strategies intended to improve processing times, including collecting and improving data needed to support those strategies, such as collecting better productivity data or staffing data and linking them to processing outcomes. (Recommendation 4) The Associate Director of OPM’s Retirement Services should examine its process for assessing its assistance to agencies on retirement applications. 
For example, OPM could incorporate into its assessment process more agency feedback or documentation of assessment results, which could improve its partnership with agencies to strengthen the assistance provided. (Recommendation 5) The Associate Director of OPM’s Retirement Services should work with agencies to determine if there are cost-effective ways to make the retirement application error report that it sends to agencies more user-friendly. For example, explore whether there are cost-effective ways to provide the error report in a format that could be manipulated (e.g., Excel spreadsheet), or to include additional information, such as incorporating disability retirement applications or providing clearer descriptions of errors or trend data, some of which OPM already collects. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of the report to OPM, DOD, HHS, NASA, and USPS for review and comment. In its comments, reproduced in appendix I, OPM concurred with 1 recommendation and partially concurred with the remaining 5 recommendations. HHS and NASA provided technical comments, which we incorporated as appropriate. DOD and USPS had no comments on the draft. OPM partially concurred with our first recommendation to develop, document, and implement a Retirement Services IT modernization plan that includes costs and time frames for initial project phases. OPM stated that it has established initial high-level funding estimates for each of its five key IT initiatives, but OPM did not provide any documentation or further details. OPM noted that its ability to implement the modernization plan depends on the availability of funding and coordination with the agency’s top leadership. We agree these are important elements, which further underscore our recommendation. An IT modernization plan with objectives, cost estimates, and time frames could help support funding requests, as well as measure progress in implementing the initiatives. 
OPM partially concurred with our second recommendation to enhance the use of performance information on processing timeliness to make more informed management decisions. OPM responded that it measures overtime spending, reviews daily work level in each work unit, and assesses employee productivity in these units. Collecting and reviewing such operational-level data contributes to monitoring efforts; however, our recommendation emphasizes the importance of using performance information to better manage operations to align with organizational goals. OPM partially concurred with our third recommendation to provide explanatory information as part of its reporting of processing timeliness. OPM agreed to add an explanation about disability retirement eligibility determinations to its public reports. OPM disagreed that reporting data on the range of processing times would be beneficial because, according to OPM, it provides processing information through other means, such as through applicants’ online accounts and agency benefit officers. While providing this information is beneficial, publicly reporting data on the range of processing times helps improve the usefulness of performance information for applicants and external stakeholders, such as congressional oversight committees. Further, OPM acknowledged that it already collects and shares such data, which confirms it has the information and ability to implement this recommendation by adding appropriate summary notes to its public reporting. This action coupled with adding an explanation about disability retirement eligibility determinations should address the recommendation. OPM partially concurred with our fourth recommendation to develop and implement policies and procedures for assessing strategies intended to improve processing times, including collecting data needed to support those strategies. 
OPM stated that a new case management system could provide better productivity and staffing data with which to assess effectiveness, but is dependent on funding and IT support. However, developing policies and procedures to manage and monitor its assessment process—such as determining when, how, and how often to conduct assessments and what data to collect—is not dependent on having a new case management system. In fact, establishing such policies and procedures could help inform system requirements in terms of data and reporting needs. OPM concurred with our fifth recommendation to examine its process for assessing its assistance to agencies on retirement applications, and stated that it will incorporate more agency feedback into its assessment results on non-disability immediate retirement applications. OPM partially concurred with our sixth recommendation to work with agencies to determine if there are cost-effective ways to make the error report more user-friendly. OPM stated that it will explore using MS Excel spreadsheets and incorporating clearer descriptions of errors and data trends. OPM asserted that our report incorrectly states that the data sent to agencies cannot be manipulated as agencies receive the data in MS Word documents from which they can create MS Excel spreadsheets. However, as OPM acknowledges, agencies have to create their own spreadsheets. Doing so requires agencies to manually enter the data to track and analyze errors, which increases the risk of data entry mistakes that could compromise the accuracy of original data, as we reported. OPM also stated that collecting disability application error information is neither an inexpensive nor a simple process change. While we recognize OPM’s audit efforts may need to be modified to capture this type of error information, it would provide agencies with more comprehensive error data that could be used to improve the agencies’ application processes. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Acting Director of OPM, the Secretary of DOD, the Secretary of HHS, the Administrator of NASA, and the Postmaster General and Chief Executive Officer of USPS. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Office of Personnel Management Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact name above, Leah Querimit Nash (Assistant Director), Maya Chakko (Analyst in Charge), Mark Bird, Jackie Chapin, Jeff DeMarco, Elizabeth Fan, Gina Hoover, Ted Hu, Ben Licht, Meredith Moles, Robert Robinson, and Kayla Robinson made key contributions to this report.
Why GAO Did This Study According to OPM, it receives more than 100,000 retirement applications each fiscal year. From 2014 through 2017, OPM did not meet its goal of processing most retirement applications within 60 days. GAO was asked to review potential improvements in federal retirement processing at OPM. This report (1) describes the root causes of retirement application processing delays, as determined by OPM; and (2) examines what strategies, if any, OPM has taken to address those root causes, and how OPM has evaluated the effectiveness of the strategies. GAO reviewed OPM data and documents, and interviewed OPM officials. GAO also interviewed officials from DOD, HHS, NASA, and USPS about their experiences with processing retirement applications. GAO selected these agencies because they represent a variety of application error rates and relatively high application volume. What GAO Found The Office of Personnel Management (OPM), which administers the federal retirement program, identified three root causes for retirement processing delays: 1. the continuing reliance on paper-based applications and manual processing; 2. insufficient staffing capacity, particularly during peak workload season; and 3. incomplete applications. OPM has taken various actions to address these root causes and thereby reduce delays. Vision for modernizing retirement processing. OPM's strategic vision consists of five key initiatives for modernizing the application process, including developing an electronic application form and an electronic system to store retirement information. However, OPM was unable to provide estimated time frames or costs for the initiatives. OPM officials said that additional information technology (IT) modernization work is dependent on sufficient funding, among other factors. 
These factors are important but do not preclude OPM from establishing estimated cost ranges and time frames—practices consistent with industry best practices and IT project management principles. Actions to increase staffing capacity. OPM's actions have included using overtime pay and hiring additional staff. However, OPM generally does not assess the effectiveness of these actions or whether they reduce delays. For example, OPM does not measure overtime productivity or correlate overtime data with application processing data. Federal internal control standards state that management should review its performance compared to its goals. OPM officials stated that they have limited resources for assessments. However, without assessments, OPM is less able to make informed decisions on how to best use staffing practices to improve processing times. Actions to reduce missing information in applications. OPM provides assistance to agencies through guidance, training, communication through liaisons and email, and error reports. OPM's monthly error reports to agencies include information on the type of error found and the volume of applications with the same error, according to OPM. The four agencies GAO interviewed—Department of Defense (DOD), Department of Health and Human Services (HHS), National Aeronautics and Space Administration (NASA), and U.S. Postal Service (USPS)—reported that aspects of the error reports were not user-friendly. OPM stated that its assistance is intended to help agencies submit complete and accurate retirement packages for quicker processing. Federal internal control standards state that management should communicate quality information externally and periodically reevaluate its communication methods. OPM officials stated that the error report is intended to capture the overarching errors many agencies face and that revising the error report would not be cost-effective. 
However, the current format of the error report may limit its usefulness to agencies in improving their retirement applications. What GAO Recommends GAO is making 6 recommendations. These recommendations include that OPM should develop a retirement services IT modernization plan for initial project phases; develop and implement policies for assessing staffing strategies intended to improve processing times; and determine if there are cost-effective ways to make the retirement application error report more user-friendly. OPM concurred with 1 recommendation and partially concurred with 5 recommendations. GAO continues to believe all aspects of the recommendations are valid, as discussed in the report. GAO also incorporated technical comments.
Background CMS has four principal programs: Medicare, Medicaid, CHIP, and the health-insurance marketplaces. See table 1 for information about the four programs. As discussed earlier, Medicare and Medicaid are CMS’s largest programs and have been growing steadily (see fig. 1). CBO projects that, in 2026, under current law, Medicare spending will reach $1.3 trillion. Medicaid is also expected to continue to grow—program spending is projected to increase 66 percent to over $950 billion by fiscal year 2025, and more than half of the states have chosen to expand their Medicaid programs by covering certain low-income adults not historically eligible for Medicaid coverage, as authorized under the Patient Protection and Affordable Care Act of 2010 (PPACA). The two programs’ use of managed-care delivery systems to provide care has also increased. For example, the number and percentage of Medicare beneficiaries enrolled in Medicare Part C have grown steadily over the past several years, increasing from 8.7 million (20 percent of all Medicare beneficiaries) in calendar year 2007 to 17.5 million (32 percent of all Medicare beneficiaries) in calendar year 2015. As of July 1, 2015, nearly two-thirds of all Medicaid beneficiaries were enrolled in managed-care plans and about 40 percent of expenditures in fiscal year 2015 were for health-care services delivered through managed care. CMS Funding to Address Fraud, Waste, and Abuse CMS receives appropriations to carry out antifraud activities through several funds, including the Health Care Fraud and Abuse Control (HCFAC) program and the Medicaid Integrity Program. The HCFAC program was established under the Health Insurance Portability and Accountability Act of 1996 to coordinate federal, state, and local law-enforcement efforts to address health-care fraud and abuse and to conduct investigations and audits, among other things. In fiscal year 2016, CMS received $560 million through the HCFAC program appropriations. 
The Medicaid Integrity Program, established by the Deficit Reduction Act of 2005, supports contracts to audit and identify overpayments in Medicaid claims, and provides technical assistance for states’ program-integrity efforts. According to CMS, it has received $75 million every year since fiscal year 2009 through the Medicaid Integrity Program appropriations. According to CMS, in fiscal year 2016, total program-integrity obligations to address fraud, waste, and abuse for Medicare and Medicaid were $1.45 billion. Fraud Vulnerabilities and Improper Payments in Medicare and Medicaid As mentioned previously, we designated Medicare and Medicaid as high-risk programs starting in 1990 and 2003, respectively, because their size, scope, and complexity make them vulnerable to fraud, waste, and abuse. Similarly, the Office of Management and Budget (OMB) designated all parts of Medicare as well as Medicaid “high-priority” programs because these programs report $750 million or more in estimated improper payments in a given year. We also highlighted challenges associated with improper payments in Medicare and Medicaid in our annual report on duplication and opportunities for cost savings in federal programs. Improper payments are a significant risk to the Medicare and Medicaid programs and can include payments made as a result of fraud. Improper payments are payments that are either made in an incorrect amount (overpayments and underpayments) or those that should not be made at all. For example, CMS estimated in fiscal year 2016 that the Medicare fee-for-service (FFS) improper payment rate was 11 percent (approximately $41 billion) and the Medicaid improper payment rate was 10.5 percent (approximately $36 billion). Improper payment measurement does not specifically identify or estimate improper payments due to fraud. Types of Health-Care Fraud and Fraud Risk Health-care fraud can take many forms, and a single case can involve more than one scheme. 
Schemes may include fraudulent billing for services not provided, services provided that were not medically necessary, and services intentionally billed at a higher level than appropriate. These fraud schemes may include compensating providers, beneficiaries, or others for participating in the fraud scheme. Fraud can be regionally focused or can target particular service areas, such as home-health services or durable medical equipment such as wheelchairs. Fraud may also have nonfinancial effects. For example, patients may be subjected to harmful or unnecessary services by fraudulent providers. Fraud can be perpetrated by different actors, such as providers, beneficiaries, and health-insurance plans, as well as organized crime. Fraud and “fraud risk” are distinct concepts. Fraud is challenging to detect because of its deceptive nature. Additionally, once suspected fraud is identified, alleged fraud cases may be prosecuted. If the court determines that fraud took place, then fraudulent spending may be recovered. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. When fraud risks can be identified and mitigated, fraud may be less likely to occur. Although the occurrence of one or more cases of health-care fraud indicates there is a fraud risk, a fraud risk can exist even if fraud has not yet been identified or occurred. Suspicious billing patterns, certain types of health-care providers, or complexities in program design may indicate a risk of fraud. Information to help identify potential fraud risks may come from various sources, including whistleblowers, agency officials, contractors, law-enforcement agencies, beneficiaries, or providers. 
Fraud Risk Management Standards and Guidance According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level. In July 2015, GAO issued the Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt, as depicted in figure 2. The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires OMB to establish guidelines for federal agencies to create controls to identify and assess fraud risks and design and implement antifraud control activities. The act further requires OMB to incorporate the leading practices from the Fraud Risk Framework in the guidelines. In July 2016, OMB published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. Further, the act requires federal agencies to submit to Congress a progress report each year for 3 consecutive years on the implementation of the controls established under OMB guidelines, among other things. 
CMS Manages Fraud Risks as Part of Its Agency-Wide Program-Integrity Activities and through an Extensive Network of Stakeholders Fraud Risk Management Is a Part of CMS’s Broader Program-Integrity Approach CMS’s antifraud efforts for its four principal programs are part of the agency’s broader program-integrity approach to address fraud, waste, and abuse. CMS’s Center for Program Integrity (CPI) is the agency’s focal point for program integrity across the programs. According to CMS, its approach to program integrity allows it to “address the whole spectrum of fraud, waste, and abuse.” For example, CMS describes its program-integrity activities as addressing unintentional errors resulting from providers being unaware of recent policy changes on one end of the spectrum, through somewhat more-serious patterns of abuse such as billing for a more-expensive service than was performed (known as upcoding), and finally up to serious fraudulent activities, such as billing for services that were not provided. CMS then aims to target its corrective actions to fit the risk. See figure 3 for CMS’s description of the spectrum of fraud, waste, and abuse that its program-integrity activities aim to address. Within its program-integrity activities, CMS has established several control activities that are specific to managing fraud risks, while others serve broader program-integrity purposes. According to CMS officials, the agency’s antifraud control activities mainly focus on providers in Medicare FFS. Officials told us that when CPI began operating, its primary focus was developing program integrity for Medicare FFS and, as a result, it is the most “mature” of all of CPI’s programs. 
CMS’s specific fraud control activities include, for example, the Fraud Prevention System (FPS), a predictive-analytics system that helps identify potentially fraudulent payments in Medicare FFS, and the Unified Program Integrity Contractors (UPIC), which detect and investigate aberrant provider behavior and potential fraud in Medicare and Medicaid. Other control activities serve broader program-integrity purposes such as to reduce improper payments resulting from error, waste, and abuse in addition to preventing or detecting potential fraud. For example, CMS provides education and outreach to Medicare providers and beneficiaries on issues identified through data analyses in order to reduce improper payments and to increase their awareness of fraud. HHS and CMS department- and agency-wide strategic plans guide CMS’s program-integrity activities—including antifraud activities. The program-integrity goals identified in the HHS strategic plan primarily focus on improper payments and are driven by statutory requirements. For example, the HHS strategic plan for fiscal years 2014–2018 includes performance goals of reducing the percentage of improper payments made under Medicare FFS and Medicare Parts C and D. One antifraud-focused goal in the HHS strategic plan is to increase the percentage of Medicare providers and suppliers identified as high risk that receive administrative actions, such as suspending payments to providers or revoking providers’ billing privileges. HHS and CMS department- and agency-wide strategic plans also include an emphasis on fraud prevention and early detection—a leading practice in the Fraud Risk Framework—and moving away from a “pay-and-chase” model. 
For example, the HHS strategic plan calls for “fostering early detection and prevention of improper payments by focusing on preventing bad actors from enrolling or remaining in Medicare and Medicaid” and to “use public-private partnerships to prevent and detect fraud across the health care industry by sharing fraud-related information and data between the public and private sectors.” As a part of this emphasis on prevention, CMS developed FPS in response to the Small Business Jobs Act of 2010, which required CMS to implement predictive-analytics technologies. Also, the Patient Protection and Affordable Care Act of 2010 (PPACA) included provisions to strengthen Medicare and Medicaid’s provider enrollment standards and procedures, among other program-integrity provisions. CMS Uses an Extensive Network of Stakeholders to Manage Fraud Risks and Plays Varying Roles in These Relationships CMS works with an extensive and complex network of stakeholders to manage fraud risks in its four principal programs. In Medicaid and CHIP, CMS partners with and oversees the 50 states and the District of Columbia. Until the Deficit Reduction Act of 2005 expanded CMS’s role in Medicaid program integrity to provide effective federal support and assistance to states’ efforts to combat fraud, waste, and abuse, states were primarily responsible for Medicaid program integrity. Each state has its own Medicaid program-integrity unit, Medicaid Fraud Control Unit (MFCU), and state audit organization. CMS also uses numerous contractors to conduct the majority of its program-integrity activities. Since the enactment of Medicare in 1965, contractors have played an integral role in the administration of the program. The original Medicare program was designed so that the federal government contracted with health insurers or similar organizations experienced in handling physician and hospital claims to pay Medicare claims. 
Later, the Health Insurance Portability and Accountability Act of 1996 required the Secretary of Health and Human Services to enter into contracts to promote the integrity of the Medicare program. According to CMS officials, in fiscal year 2016 contractors received 92 percent of CMS’s program-integrity funding. Medicare and Medicaid program-integrity contractors play a variety of roles: (1) processing and reviewing claims, (2) conducting site visits of providers enrolling in Medicare, (3) auditing claims and recovering overpayments, (4) performing data analysis, and (5) investigating aberrant claims and provider behaviors, among other things. States also use contractors in many of these roles for managing program integrity. Additionally, multiple private health-insurance plans in Medicare Parts C and D and over 200 health-insurance plans in Medicaid managed care also carry out program-integrity activities. For the health-insurance marketplaces, CMS is responsible for operating the federally facilitated marketplace and overseeing the state-based marketplaces. CMS also developed the Federal Data Services Hub, which acts as a portal for exchanging information between state-based marketplaces, the federally facilitated marketplace, and state Medicaid agencies, among other entities, as well as other external partners, including other federal agencies, such as the Internal Revenue Service. Finally, law-enforcement groups, including the joint Department of Justice (DOJ) and HHS OIG Medicare Fraud Strike Force Teams, identify, investigate, and prosecute instances of fraud in CMS programs. See figure 4 for a depiction of CMS’s stakeholder network for managing fraud risks. This figure illustrates approximate numbers of stakeholders (through the concentration of dots), but not the extent of individual stakeholder roles. CMS provides oversight to, or partners with, these stakeholders to manage fraud risks. 
For oversight, CMS creates policies and guidance to direct stakeholders’ antifraud efforts, such as Medicare and Medicaid program-integrity manuals and the Medicaid Provider Enrollment Compendium. CMS also provides technical assistance to states in areas such as provider enrollment and data analysis. In areas where CMS does not have a primary role, it acts as a partner by collaborating and coordinating program-integrity and antifraud activities. For example, CMS is directly responsible for Medicare program integrity, but, in Medicaid and CHIP, states are the first line of program-integrity efforts. Similarly, CMS maintains control over Medicare FFS program integrity, but within Medicare managed care, it provides guidance for health-insurance plans to carry out their own program-integrity activities. In the health-insurance marketplaces, CMS reviews state-based marketplaces’ procedures for verifying applicant eligibility for coverage. For example, it conducts annual reviews of the state-based marketplaces, which include a review of states’ fraud, waste, and abuse policies. See figure 5 for a further description of CMS’s and various stakeholders’ roles and responsibilities in fraud risk management.

CMS also facilitates collaboration among federal, state, and private entities for managing fraud risks. In 2012, CMS created the Healthcare Fraud Prevention Partnership (HFPP) to share information with public and private stakeholders and to conduct studies related to health-care fraud, waste, and abuse. According to CMS, as of October 2017, the HFPP included 89 public and private partners, including Medicare- and Medicaid-related federal and state agencies, law-enforcement agencies, private health-insurance plans (payers), and antifraud and other health-care organizations. The HFPP has conducted studies that pool and analyze multiple payers’ claims data to identify providers with patterns of suspect billing across payers.
As we recently reported, HFPP participants separately told us that the HFPP’s studies helped them to identify and take action against potentially fraudulent providers and payment vulnerabilities of which they might not otherwise have been aware, and fostered both formal and informal information sharing.

CMS’s relationships with stakeholders were varied in terms of maturity and extent of information sharing, according to stakeholders we interviewed. While some relationships between CMS and stakeholders have been long-standing, some are developing, and others exist on an ad hoc basis. For example, CMS has had a long-standing relationship with state Medicaid program-integrity units, collaborating through monthly meetings of the Medicaid Fraud and Abuse Technical Advisory Group, sending fraud alerts, and offering courses through the Medicaid Integrity Institute. However, in our interviews with state program-integrity units, and as we recently reported, some state Medicaid agencies shared concerns about the communication, level of policy guidance, and technical support provided by and received from CMS for managing fraud risks in Medicaid. This concern was echoed by state audit officials, with whom CMS recently initiated coordination to build relationships that would facilitate state auditing of Medicaid programs.

CMS also has varying relationships with its law-enforcement partners. For example, the relationship between CMS and DOJ’s Health Care Fraud unit, which leads the DOJ and HHS OIG Medicare Fraud Strike Force Teams, has been ad hoc. According to CMS and DOJ officials, the interactions between the agencies have been based on specific fraud cases, such as coordination of national takedowns, when DOJ provided CMS with the names of providers committing fraud so that CMS could suspend them consistent with the timing of the enforcement efforts.
According to CMS officials, they coordinate more with HHS OIG, working together on payment suspensions and revocations for OIG cases, or working with it to take administrative actions against large providers.

CMS’s Efforts Managing Fraud Risks in Medicare and Medicaid Are Partially Aligned with the Fraud Risk Framework

CMS’s antifraud efforts partially align with the Fraud Risk Framework. Consistent with the framework, CMS has demonstrated commitment to combating fraud by creating a dedicated entity to lead antifraud efforts. It has also taken steps to establish a culture conducive to fraud risk management, although it could expand its antifraud training to include all employees. CMS has taken some steps to identify fraud risks in Medicare and Medicaid; however, it has not conducted a fraud risk assessment or developed a risk-based antifraud strategy for Medicare and Medicaid as defined in the Fraud Risk Framework. CMS has established monitoring and evaluation mechanisms for its program-integrity control activities that, if aligned with a risk-based antifraud strategy, could enhance the effectiveness of fraud risk management in Medicare and Medicaid.

CMS Has Shown Commitment to Combating Fraud by Creating an Organizational Structure and Taking Steps to Establish a Culture Conducive to Fraud Risk Management

CMS’s Organizational Structure Includes a Dedicated Entity for Program-Integrity and Antifraud Efforts

The commit component of the Fraud Risk Framework calls for an agency to commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management. This component includes establishing a dedicated entity to lead fraud risk management activities. Within CMS, the Center for Program Integrity (CPI) serves as the dedicated entity for fraud, waste, and abuse issues in Medicare and Medicaid, which is consistent with the Fraud Risk Framework.
CPI was established in 2010, in response to a November 2009 Executive Order on reducing improper payments and eliminating waste in federal programs. This formalized role, according to CMS officials, elevated the status of program-integrity efforts, which previously were carried out by other parts of CMS. As an executive-level Center—on the same level with five other executive-level Centers at CMS, such as the Center for Medicare and the Center for Medicaid and CHIP Services—CPI has a direct reporting line to executive-level management at CMS. The Fraud Risk Framework identifies a direct reporting line to senior-level managers within the agency as a leading practice. According to CMS officials, this elevated organizational status offers CPI heightened visibility across CMS, attention by CMS executive leadership, and involvement in executive-level conversations.

Additionally, in 2014, CMS established a Program Integrity Board that has brought together senior officials across CMS Centers on a monthly basis to coordinate on fraud and program-integrity vulnerabilities. According to CPI officials, the board is one of the mechanisms through which CPI engages other executive-level offices at CMS. CPI chairs the meetings and typically develops meeting agendas to solicit information from and disseminate information to other CMS units or stakeholders. Further, the board may establish small working groups, known as integrated project teams, to address specific vulnerabilities. For example, according to CMS officials, in 2016 the board established a Marketplace integrated project team to resolve potential fraud eligibility and enrollment issues in the federally facilitated marketplace using the Fraud Risk Framework.

CPI has further demonstrated commitment to addressing fraud, waste, and abuse through several organizational changes with the goal of improving coordination and communication of program-integrity activities across Medicare and Medicaid.
Most recently, in 2014, CPI reorganized its structure to align functional areas across Medicare and Medicaid, where possible. Previously, separate units within CPI administered their own program-integrity activities for Medicare and Medicaid programs. For example, CPI established a Provider Enrollment and Oversight Group, responsible for provider screening and enrollment functions in both Medicare and Medicaid. According to CMS officials, if CPI employees identify an issue in provider enrollment in Medicare, the same CPI employees also consider how this issue applies to Medicaid. According to CMS officials, the reorganization has helped CPI to look at vulnerabilities in a crosscutting way and to facilitate communication across programs.

Similarly, in 2016, CPI began shifting contracting functions from separate Medicare and Medicaid regional contractors, which identify and investigate cases of potential fraud and conduct audits, to five regional UPICs responsible for a range of program-integrity and fraud-specific activities in both Medicare FFS and Medicaid. According to CMS, the purpose of the UPICs is to coordinate provider investigations across Medicare and Medicaid, improve collaboration with states by providing a mutually beneficial service, and increase contractor accountability through coordinated oversight. CMS officials told us that UPIC integration is a cornerstone of CMS’s contract management strategy and would help to ensure communication and coordination across Medicare and Medicaid program-integrity efforts. CMS plans to award all the UPIC contracts by the end of 2017, ultimately phasing out the ZPICs and Medicaid Integrity Contractors.

CMS Has Taken Steps to Create a Culture Conducive to Fraud Risk Management but Could Enhance Antifraud Training for Employees

The commit component of the Fraud Risk Framework also includes creating an organizational culture to combat fraud at all levels of the agency.
Consistent with the Fraud Risk Framework, CMS has promoted an antifraud culture by demonstrating a senior-level commitment to combating fraud through public statements, increased resource levels, and internal and external coordination. In addition to HHS and CMS strategic documents discussed earlier, CMS and CPI leaders have testified publicly about CMS’s commitment to preventing fraud and protecting taxpayers and beneficiaries. For example, CPI’s former Director testified in May 2016 before the House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations that “CMS is deeply committed to our efforts to prevent waste, fraud and abuse in Medicare and Medicaid programs, protecting both taxpayers and the beneficiaries that we serve.” More recently, CMS’s new Administrator testified in her February 2017 confirmation hearing regarding her intent to prioritize efforts around preventing fraud and abuse.

CPI’s budget and resources have increased over time to support its ongoing program-integrity mission. According to CMS, program-integrity obligations for Medicare and Medicaid increased from about $1.02 billion in fiscal year 2010 to $1.45 billion in fiscal year 2016. According to CMS officials, the Health Care Fraud and Abuse Control (HCFAC) account, one of the primary sources of CPI funding, has never received a funding reduction. In 2015, CPI also received additional funding based on a discretionary cap adjustment to HCFAC. Similarly, CPI staff resources have increased over time. According to CMS, CPI’s full-time equivalent positions increased from 177 in 2011 to 419 in 2017.

Consistent with leading practices in the Fraud Risk Framework to involve all levels of the agency in setting an antifraud tone, CPI has also worked collaboratively with other CMS Centers.
In addition to engaging executive-level officials of other CMS Centers through the Program Integrity Board, CPI has worked collaboratively with other Centers within CMS to incorporate antifraud features into new program design or policy development and established regular communication at the staff level. For example:

Center for Medicare and Medicaid Innovation (CMMI). When developing the Medicare Diabetes Prevention Program, CMMI officials told us they worked with CPI’s Provider Enrollment and Oversight Group and Governance Management Group to develop risk-based screening procedures for entities that would enroll in Medicare to provide diabetes-prevention services, among other activities. The program was expanded nationally in 2016, and CMS determined that an entity may enroll in Medicare as a program supplier if it satisfies enrollment requirements, including that the supplier must pass existing high categorical risk-level screening requirements.

Center for Medicaid and CHIP Services (CMCS). CMCS officials told us they worked closely with CPI to issue Medicaid guidance and best practices to states on home and community-based services that incorporate program-integrity provisions. A senior CMCS official told us that, to address fraud, CMS has requested that states include provider information on claims to determine whether providers are meeting eligibility criteria.

Center for Medicare (CM). In addition to building safeguards into programs and developing policies, CM officials told us that there are several standing meetings, on monthly, biweekly, and weekly bases, between groups within CM and CPI that discuss issues related to provider enrollment, FFS operations, and contractor management. A senior CM official also told us that there are ad hoc meetings taking place between CM and CPI: “We interact multiple times daily at different levels of the organization.
Working closely is just a regular part of our business.”

CMS has also demonstrated its commitment to addressing fraud, waste, and abuse to its stakeholders. Representatives of CMS’s extensive stakeholder network whom we interviewed—state officials, contractors, and officials from public and private entities—generally recognized the agency’s commitment to combating fraud. In our interviews with stakeholders, officials observed CMS’s increased commitment over time to address fraud, waste, and abuse and cited examples of specific CMS actions. State officials, for example, told us that the Medicaid Integrity Institute, a training center coordinated jointly by CMS and DOJ, has been a helpful resource for states to build capacity to address fraud and program integrity. CMS contractors told us that CMS’s commitment to combating fraud is incorporated into contractual requirements, such as requiring (1) data analysis for potential fraud leads and (2) fraud-awareness training for providers. Officials from entities that are members of the HFPP, specifically, a health-insurance plan and the National Health Care Anti-Fraud Association, added that CMS’s effort to establish the HFPP and its ongoing collaboration and information sharing reflect CMS’s commitment to combat fraud in Medicare and Medicaid.

The Fraud Risk Framework identifies training as one way of demonstrating an agency’s commitment to combating fraud. Training and education intended to increase fraud awareness among stakeholders, managers, and employees serve as a preventive measure to help create a culture of integrity and compliance within the agency. The Fraud Risk Framework discusses requiring all employees to attend training upon hiring and on an ongoing basis thereafter. To increase awareness of fraud risks in Medicare and Medicaid, CMS offers and requires training for stakeholder groups such as providers, beneficiaries, and health-insurance plans.
Specifically, through its National Training Program and Medicare Learning Network, CMS makes available training materials on combating Medicare and Medicaid fraud, waste, and abuse. These materials help users identify and report fraud, waste, and abuse in CMS programs and are geared toward providers and beneficiaries, as well as trainers and other stakeholders. Separately, CMS requires health-insurance plans working with CMS to provide annual fraud, waste, and abuse training to their employees.

However, CMS does not offer or require similar fraud-awareness training for the majority of its workforce. For a relatively small portion of its overall workforce—specifically, contracting officer representatives who are responsible for certain aspects of the acquisition function—CMS requires completion of fraud and abuse prevention training every 2 years. According to CMS, 638 of its contracting officer representatives (or about 10 percent of its overall workforce) completed such training in 2016 and 2017. Although CMS offers fraud-awareness training to others, the agency does not require fraud-awareness training for new hires or on a regular basis for all employees because the agency has focused on providing process-based internal controls training for its employees.

While fraud-awareness training for contracting officer representatives is an important step in helping to promote fraud risk management, fraud-awareness training specific to CMS programs would be beneficial for all employees. Such training would not only be consistent with what CMS offers to or requires of its stakeholders and some of its employees, but would also help to keep the agency’s entire workforce continuously aware of fraud risks and examples of known fraud schemes, such as those identified in successful OIG investigations. Such training would also keep employees informed as they administer CMS programs or develop agency policies and procedures.
Considering the vulnerability of Medicare and Medicaid programs to fraud, waste, and abuse, without regular required training CMS cannot be assured that its workforce of over 6,000 employees is continuously aware of risks facing its programs.

Although CMS has shown commitment to combating fraud, at times CPI’s efforts to combat fraud compete with other mission priorities, such as (1) ensuring beneficiary access to health-care services and (2) limiting provider burden. CPI leadership has been aware of this inherent challenge. For example, at a congressional hearing in May 2016, CPI’s Director stated that “our efforts strike an important balance: protecting beneficiary access to necessary health care services and reducing the administrative burden on legitimate providers and suppliers, while ensuring that taxpayer dollars are not lost to fraud, waste, and abuse.”

Beneficiary access to care. Providing and improving beneficiaries’ access to health care is a CMS priority; the agency’s commitment to providing access to high-quality care and coverage is reflected in its mission statement and is one of its four strategic goals. As a result, before taking administrative actions against a Medicare Part A provider, such as a hospice, or providers in rural areas, CMS officials told us that they first look at whether there is a sufficient number of providers in an area by running a provider search by provider county and adjacent counties and considering how heavily populated an area is with Medicare beneficiaries. According to these officials, rather than taking an administrative action against a provider that would limit beneficiaries’ access to services, the agency may enter into a corrective action plan with the provider. CMS officials told us that revoking a provider’s enrollment in Medicare, an option available to CMS in cases of provider noncompliance or misconduct, is rare.

Administrative burden on providers.
According to CMS documents and officials, concern over placing undue burden on providers—the majority of whom are presumed to be honest—provides a counterforce to implementing program-integrity control activities. CMS’s web page entitled Reducing Provider Burden states: “CMS is committed to reducing improper payments but must be mindful of provider burden because medical review is a resource-intensive process for both the healthcare provider and the Medicare review contractor.” Two CMS contractors told us that they scaled back or did not pursue audits of providers’ documentation because of provider burden or sensitivity considerations. One contractor removed providers from audit samples after some providers opposed having to supply multiple medical records. CPI officials told us that they want to reduce provider burden in a logical manner. For example, according to CMS officials, in the Medicare FFS Recovery Audit Program, CMS established limits on Additional Documentation Requests, which are requests for medical documentation supporting a claim being reviewed. CMS adjusts these documentation limits so that they align with a provider’s claim denial rate: providers with low denial rates have lower documentation requirements, while providers with high denial rates have higher documentation requirements, thus adjusting provider burden based on demonstrated compliance.

CMS Has Taken Steps to Identify Program Fraud Risks but Has Not Conducted a Fraud Risk Assessment for Medicare or Medicaid

CMS Has Taken Steps to Identify Some Fraud Risks for Medicare and Medicaid

The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and to assess risks to determine a fraud risk profile. Identifying fraud risks is one of the steps included in the Fraud Risk Framework for assessing risks to determine a fraud risk profile.
CMS has taken steps to identify some fraud risks through several control activities that target areas the agency has designated as higher risk within Medicare and Medicaid, including specific provider types, such as home health agencies, and specific geographic locations. As discussed earlier, CMS officials told us that CPI initially focused on developing control activities for Medicare FFS and considers these activities to be the most mature of all CPI efforts to address fraud risks. CMS has identified fraud risks in the following selected examples, which are not an exhaustive list of its control activities.

Data analytics to assist investigations in Medicare FFS. In 2011, CMS implemented FPS, a data-analytic system that screens all Medicare FFS claims to identify health-care providers with suspect billing patterns for further investigation. Medicare FFS contractors—ZPICs and UPICs—have used FPS to identify and prioritize leads for investigations of potential fraud by high-risk Medicare FFS providers. Contractors told us that FPS allows them to quickly identify and triage leads. CMS’s guidance requires contractors to prioritize investigations with the greatest program impact or urgency and identifies required criteria for prioritizing investigations, such as patient abuse or harm, multistate fraud, and high dollar amount of potential overpayments. One contractor we interviewed developed a risk-prioritization model that incorporated CMS’s required criteria, such as patient harm, as well as additional criteria, such as provider spikes in billing, into a tool that automatically creates a provider risk score to help the contractor focus and prioritize investigative resources.

Prior authorization for Medicare FFS services or supplies. CMS published a final rule in December 2015 that identifies a master list of durable medical equipment, prosthetics, orthotics, and supplies for which CMS can require prior authorization before suppliers submit a Medicare FFS claim.
In this rule, CMS identified 135 items that are frequently subject to unnecessary utilization and stated that the agency expects the final rule to result in savings in the form of reduced unnecessary utilization, fraud, waste, and abuse. Under this program, prior authorization is a condition of payment for claims. CMS can choose which items on the master list to subject to prior authorization. For example, in March 2017, it began requiring prior authorization for selected power wheelchairs in four states and expanded the prior authorization program for these items to all states in July 2017. CMS also began to test the use of prior authorization on a voluntary basis through a series of fixed-length demonstrations for items and services that have been associated with high levels of improper payments, including high incidences of fraud in some cases, and unnecessary utilization in certain geographic areas. For example, CMS began implementing a voluntary prior authorization demonstration in September 2012 for other power mobility devices, such as power scooters, in seven states where historically there has been extensive evidence of fraud and improper payments. CMS expanded the demonstration to an additional 12 states in October 2014, for a total of 19 states. According to the initial Federal Register notice, CMS planned to use the demonstration to develop improved methods for investigation and prosecution of fraud to protect federal funds from fraudulent actions and the resulting improper payments. Under the demonstration, providers and suppliers are encouraged—but not required—to submit a request for prior authorization for certain items before they provide the item to the beneficiary and submit a claim for payment.

Revised provider screening and enrollment processes for Medicare FFS and Medicaid FFS.
In response to PPACA, in 2011 CMS implemented a revised screening process for providers and suppliers who enroll in Medicare and Medicaid based on identified provider risk categories. CMS placed all Medicare provider and supplier types into one of three risk categories—limited, moderate, or high—based on its assessment of the potential risk of fraud, waste, and abuse each provider and supplier type poses. For example, CMS designated prospective (newly enrolling) home health agencies and prospective suppliers of durable medical equipment, prosthetics, orthotics, and supplies in the high-risk category. According to the final rule and our interviews with CMS officials, CMS developed these risk-based categories based on its review and synthesis of various information sources about the fraud risks posed by each provider and supplier type, including (1) the agency’s experience with claims data used to identify potentially fraudulent billing practices, (2) expertise of contractors responsible for investigating and identifying Medicare fraud, and (3) GAO and OIG reports. CMS designated specific screening activities for each risk category, with increased requirements for moderate- and high-risk provider and supplier types. For example, moderate- and high-risk providers and suppliers must receive preenrollment site visits, and high-risk providers and suppliers also are subject to fingerprint-based criminal-background checks. As part of the revised screening process, beginning in September 2011, CMS also undertook its first program-wide effort to rescreen, or revalidate, the enrollment records of about 1.5 million existing Medicare FFS providers and suppliers to determine whether they remain eligible to bill Medicare.

Temporary provider enrollment moratoriums for certain providers and geographic areas for Medicare FFS and Medicaid FFS.
CMS identified certain provider types and geographic areas as high risk for fraud and used its authority under PPACA to implement temporary moratoriums to suspend enrollment of such Medicare and Medicaid providers in those areas. For example, in July 2016, CMS extended temporary moratoriums on the enrollment of new Medicare Part B nonemergency ambulance suppliers and Medicare home health agencies statewide in six states, as applicable. The statewide moratoriums also apply to Medicaid. According to the Federal Register notice, CMS imposed the temporary moratoriums based on qualitative and quantitative factors suggesting a high risk of fraud, waste, or abuse, such as law-enforcement expertise with emerging fraud trends and investigations. CMS’s data analysis also confirmed the agency’s determination of a high risk of fraud, waste, and abuse for these provider and supplier types within certain geographic areas, according to the notice.

Medicaid state program-integrity reviews and desk reviews. CMS tailored state Medicaid program-integrity reviews to areas it identified as high risk for improper payments, such as personal care services, which may also be at high risk for fraud. In March 2017, we reported that, from fiscal years 2014 through 2016, CMS conducted focused reviews of state program-integrity efforts in 31 states, reviewing 10 or 11 states annually. For each state, CMS tailored its focused reviews to the state’s managed care plans and relevant other high-risk areas, including provider enrollment and screening, nonemergency medical transportation, and personal care services. CMS and state officials we spoke with as part of that work told us that the tailored oversight had been beneficial and helped identify areas for improvement. CMS has also initiated desk reviews of state program-integrity efforts. According to CMS, these desk reviews allow the agency to provide states with customized program-integrity oversight.
Vulnerability tracking system for Medicare. CPI recently initiated an effort to centralize and formalize a vulnerability tracking process for Medicare, which could support identification of specific fraud risks, both in Medicare and possibly Medicaid. As described by CPI officials, the process aims to collect information on fraud-related vulnerabilities from CMS employees, contractors, and other sources, such as GAO and HHS OIG reports.

CMS Has Not Conducted a Fraud Risk Assessment for Medicare or Medicaid

The assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and assess risks to determine a fraud risk profile. Furthermore, federal internal control standards call for agency management to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that, as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. The Fraud Risk Framework states that, in planning the fraud risk assessment, effective managers tailor the fraud risk assessment to the program by, among other things, identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. Fraud risk assessments that align with the Fraud Risk Framework involve (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and impact of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the results. (See fig. 6.)

Although, as discussed earlier, CMS has identified some fraud risks posed by providers in Medicare FFS and, to a lesser degree, Medicaid FFS, the agency has not conducted a fraud risk assessment for either the Medicare or Medicaid program.
Such a risk assessment would provide the detailed information and insights needed to create a fraud risk profile, which, in turn, is the basis for creating an antifraud strategy. According to CMS officials, CMS has not conducted a fraud risk assessment for Medicare or Medicaid because, within CPI’s broader approach of preventing and eliminating improper payments, its focus has been on addressing specific vulnerabilities among provider groups that have shown themselves particularly prone to fraud, waste, and abuse. With this approach, however, it is unlikely that CMS will be able to design and implement the most-appropriate control activities to respond to the full portfolio of fraud risks. A fraud risk assessment consists of discrete activities that build upon each other. Specifically:

Identifying inherent fraud risks affecting the program. As discussed earlier, CMS has taken steps to identify fraud risks. However, CMS has not used a process to identify inherent fraud risks from the universe of potential vulnerabilities facing Medicare and Medicaid programs, including threats from various sources. According to CPI officials, most of the agency’s fraud control activities are focused on fraud risks posed by providers. The Fraud Risk Framework discusses fully considering inherent fraud risks from internal and external sources in light of fraud risk factors such as incentives, opportunities, and rationalization to commit fraud. For example, according to CMS officials, the inherent design of the Medicare Part C program may pose fraud risks that are challenging to detect. A fraud risk assessment would help CMS identify all sources of fraudulent behaviors, beyond threats posed by providers, such as those posed by health-insurance plans, contractors, or employees.

Assessing the likelihood and impact of fraud risks and determining fraud risk tolerance.
CMS has taken steps to prioritize fraud risks in some areas, but it has not assessed the likelihood or impact of fraud risks or determined fraud risk tolerance across all parts of Medicare and Medicaid. Assessing the likelihood and impact of inherent fraud risks would involve consideration of the impact of fraud risks on program finances, reputation, and compliance. Without assessing the likelihood and impact of risks in Medicare or Medicaid or internally determining which fraud risks may fall under the tolerance threshold, CMS cannot be certain that it is aware of the most-significant fraud risks facing these programs and what risks it is willing to tolerate based on the programs’ size and complexity.

Examining the suitability of existing fraud controls and prioritizing residual fraud risks. CMS has not assessed existing control activities or prioritized residual fraud risks. According to the Fraud Risk Framework, managers may consider the extent to which existing control activities—whether focused on prevention, detection, or response—mitigate the likelihood and impact of inherent risks and whether the remaining risks exceed managers’ tolerance. This analysis would help CMS to prioritize residual risks and to determine mitigation approaches. For example, CMS has not established preventive fraud control activities in Medicare Part C. Using a fraud risk assessment for Medicare Part C and closely examining existing fraud control activities and residual risks, CMS could be better positioned to address fraud risks facing this growing program and develop preventive control activities. Further, without assessing existing fraud control activities and prioritizing residual fraud risks, CMS cannot be assured that its current control activities are addressing the most-significant risks.
Such analysis would also help CMS determine whether additional, preferably preventive, fraud controls are needed to mitigate residual risks, make adjustments to existing control activities, and potentially scale back or remove control activities that are addressing tolerable fraud risks.

Documenting the risk-assessment results in a fraud risk profile. CMS has not developed a fraud risk profile that documents key findings and conclusions of the fraud risk assessment. According to the Fraud Risk Framework, the risk profile can also help agencies decide how to allocate resources to respond to residual fraud risks. Given the large size and complexity of Medicare and Medicaid, a documented fraud risk profile could support CMS’s resource-allocation decisions as well as facilitate the transfer of knowledge and continuity across CMS staff and changing administrations.

Senior CPI officials told us that the agency plans to start a fraud risk assessment for Medicare and Medicaid after it completes a separate fraud risk assessment of the federally facilitated marketplace. This fraud risk assessment for the federally facilitated marketplace eligibility and enrollment process is being conducted in response to a recommendation we made in February 2016. In April 2017, CPI officials told us that this fraud risk assessment was largely completed, although in September 2017 CPI officials told us that the assessment was undergoing agency review. CPI officials told us that they have informed CM and CMCS officials that there will be future fraud risk assessments for Medicare and Medicaid; however, they could not provide estimated timelines or plans for conducting such assessments, such as the order or programmatic scope of the assessments. Once completed, CMS could use the federally facilitated marketplace fraud risk assessment and apply any lessons learned when planning for and designing fraud risk assessments for Medicare and Medicaid.
According to the Fraud Risk Framework, factors such as size, resources, maturity of the agency or program, and experience in managing risks can influence how the entity plans the fraud risk assessment. Additionally, effective managers tailor the fraud risk assessment to the program when planning for it. The large scale and complexity of Medicare and Medicaid as well as time and resources involved in conducting a fraud risk assessment underscore the importance of a well-planned and tailored approach to identifying the assessment’s programmatic scope. Planning and tailoring may involve decisions to conduct a fraud risk assessment for Medicare and Medicaid programs as a whole or divided into several subassessments to reflect their various component parts (e.g., Medicare FFS, Medicaid managed care) as well as determining the timing and order of assessments (e.g., concurrently or consecutively for Medicare and Medicaid). CMS’s existing fraud risk identification efforts as well as communication channels with stakeholders could serve as a foundation for developing a fraud risk assessment for Medicare and Medicaid. The leading practices identified in the Fraud Risk Framework discuss the importance of identifying appropriate tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. CMS’s fraud risk identification efforts discussed earlier could provide key information about fraud risks and their likelihood and impact. Further, existing relationships and communication channels across CMS and its extensive network of stakeholders could support building a comprehensive understanding of known and potential fraud risks for the purposes of a fraud risk assessment. For example, the fraud vulnerabilities identified through data analysis and information sharing with states, health-insurance plans, law-enforcement organizations, and contractors through the HFPP could inform a fraud risk assessment. 
CPI’s Command Center missions—facilitated collaboration sessions that bring together experts from various disciplines to improve the processes for fraud prevention in Medicare and Medicaid—could bring together experts to identify potential or emerging fraud vulnerabilities or to brainstorm approaches to mitigate residual fraud risks. As CMS makes plans to move forward with a fraud risk assessment for Medicare and Medicaid, it will be important to consider the frequency with which the fraud risk assessment would need to be updated. While, according to the Fraud Risk Framework, the time intervals between updates can vary based on the programmatic and operating environment, assessing fraud risks on an ongoing basis is important to ensure that control activities are continuously addressing fraud risks. The constantly evolving fraud schemes, the size of the programs in terms of beneficiaries and expenditures, as well as continual changes in Medicare and Medicaid programs—such as development of innovative payment models and increasing managed-care enrollment—call for constant vigilance and regular updates to the fraud risk assessment.

CMS Has Not Developed a Risk-Based Antifraud Strategy for Medicare and Medicaid, Which Would Include Plans for Monitoring and Evaluation

CMS Has Not Developed a Risk-Based Antifraud Strategy

The design and implement component of the Fraud Risk Framework calls for federal managers to design and implement a strategy with specific control activities to mitigate assessed fraud risks and collaborate to help ensure effective implementation. According to the Fraud Risk Framework, effective managers develop and document an antifraud strategy that describes the program’s approach for addressing the prioritized fraud risks identified during the fraud risk assessment, also referred to as a risk-based antifraud strategy.
A risk-based antifraud strategy describes existing fraud control activities as well as any new fraud control activities a program may adopt to address residual fraud risks. In developing a strategy and antifraud control activities, effective managers focus on fraud prevention over detection, develop a plan for responding to identified instances of fraud, establish collaborative relationships with stakeholders, and create incentives to help effectively implement the strategy. Additionally, as part of a documented strategy, management identifies roles and responsibilities of those involved in fraud risk management activities; describes control activities as well as plans for monitoring and evaluation; creates timelines; and communicates the antifraud strategy to employees and stakeholders, among other things. As discussed earlier, CMS has some control activities in place to identify fraud risk in Medicare and Medicaid, particularly in the FFS program. However, CMS has not developed and documented a risk-based antifraud strategy to guide its design and implementation of new antifraud activities and to better align and coordinate its existing activities to ensure it is targeting and mitigating the most-significant fraud risks.

Antifraud strategy. CMS officials told us that CPI does not have a documented risk-based antifraud strategy. Although CMS has developed several documents that describe efforts to address fraud, the agency has not developed a risk-based antifraud strategy for Medicare and Medicaid because, as discussed earlier, it has not conducted a fraud risk assessment that would serve as a foundation for such a strategy. In 2016, CPI identified five strategic objectives for program integrity, which include antifraud elements and an emphasis on prevention.
However, according to CMS officials, these objectives were identified from discussions with CMS leadership and various stakeholders and not through a fraud risk assessment process to identify inherent fraud risks from the universe of potential vulnerabilities, as described earlier and called for in the leading practices. These strategic objectives were presented at an antifraud conference in 2016, but were not announced publicly until the release of the Annual Report to Congress on the Medicare and Medicaid Integrity Programs for Fiscal Year 2015 in June 2017.

Stakeholder relationships and communication. CMS has established relationships and communicated with stakeholders, but, without an antifraud strategy, stakeholders we spoke with lacked a common understanding of CMS’s strategic approach. Prior work on practices that can help federal agencies collaborate effectively calls for a strategy that is shared with stakeholders to promote trust and understanding. Once an antifraud strategy is developed, the Fraud Risk Framework calls for managers to collaborate to ensure effective implementation. Although some CMS stakeholders were able to describe various CMS program-integrity priorities and activities, such as home health being a fraud risk priority, the stakeholders could not communicate, articulate, or cite a common CMS strategic approach to address fraud risks in its programs.

Incentives. The Fraud Risk Framework discusses creating incentives to help ensure effective implementation of the antifraud strategy once it is developed. Currently, some incentives within stakeholder relationships may complicate CMS’s antifraud efforts. As discussed earlier, CMS is a partner and provides oversight to states’ program-integrity functions. Officials from one state told us that they were reluctant to share their program vulnerabilities because CMS would use this information to later audit the state.
Among contractors, CMS encourages information sharing through conferences and workshops; however, competition for CMS business among contractors can be a disincentive to information sharing. CMS officials acknowledged this concern and said that they expect contractors to share information related to fraud schemes, outcomes of investigations, and tips for addressing fraud, but not proprietary information such as algorithms to risk-score providers. Without developing and documenting an antifraud strategy based on a fraud risk assessment, as called for in the design and implement component of the Fraud Risk Framework, CMS cannot ensure that it has a coordinated approach to address the range of fraud risks and to appropriately target and allocate resources for the most-significant risks. Considering fraud risks to which the Medicare and Medicaid programs are most vulnerable, in light of the malicious intent of those who aim to exploit the programs, would help CMS to examine its current control activities and potentially design new ones with recognition of the fraudulent behavior it aims to prevent. This focus on fraud is distinct from a broader view of program integrity and improper payments because it considers the intentions and incentives of those who aim to deceive rather than well-intentioned providers who make mistakes. Also, continued growth of the programs, such as growth of Medicare Part C and Medicaid managed care, calls for consideration of preventive fraud control activities across the entire network of entities involved. Further, considering the large size and complexity of Medicare and Medicaid and the extensive stakeholder network involved in managing fraud in the programs, a strategic approach to managing fraud risks within the programs is essential to ensure that the many existing control activities and numerous stakeholder relationships and incentives are aligned to produce desired results.
Once developed, an antifraud strategy that is clearly articulated to various CMS stakeholders would help CMS to address fraud risks in a more coordinated and deliberate fashion. Thinking strategically about existing control activities, resources, tools, and information systems could help CMS to leverage resources while continuing to integrate Medicare and Medicaid program-integrity efforts along functional lines. A strategic approach grounded in a comprehensive assessment of fraud risks could also help CMS to identify future enhancements for existing control activities, such as new preventive capabilities for FPS or additional fraud factors in provider enrollment and revalidation, such as provider risk scoring, to stay in step with evolving fraud risks.

CMS Has Established Monitoring and Evaluation Mechanisms That Could Inform a Risk-Based Antifraud Strategy for Medicare and Medicaid

The evaluate and adapt component of the Fraud Risk Framework calls for federal managers to evaluate outcomes using a risk-based approach and adapt activities to improve fraud risk management. Furthermore, according to federal internal control standards, managers should establish and operate monitoring activities to monitor the internal control system and evaluate the results, which may be compared against an established baseline. Ongoing monitoring and periodic evaluations provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. CMS has established monitoring and evaluation mechanisms for its program-integrity activities that it could incorporate into an antifraud strategy. In Medicare, CMS has taken steps to measure the rate of fraud in a particular service area. We have previously reported that agencies may face challenges measuring outcomes of fraud risk management activities in a reliable way.
These challenges include the difficulty of measuring the extent of deterred fraud, isolating potential fraud from legitimate activity or other forms of improper payments, and determining the amount of undetected fraud. Despite these challenges, CMS has taken steps to estimate a fraud baseline—meaning the rate of probable fraud—in the home health benefit. In fiscal year 2016, CMS conducted a pretest in the Miami-Dade area of Florida to evaluate its potential measurement approach that could later be used in a nationwide study of probable fraud among home health agencies. The pretest was not a random sample and was not intended to produce a rate of fraud, but instead was intended to test the interview instruments and data-collection methodology CMS might use in a study nationwide. CMS and its contractor collected information from home health agencies, the attending providers, and Medicare beneficiaries in the Miami-Dade area in order to test these interview instruments. CMS completed this pretest, but, according to CMS officials, the agency does not yet have plans to roll out a nationwide study that would estimate a probable fraud rate for the Medicare FFS home health benefit. In its 2015 annual report to Congress, CMS stated that “documenting the baseline amount of fraud in Medicare is of critical importance, as it allows officials to evaluate the success of ongoing fraud prevention activities.” CMS officials working on the pilot told us that having an estimate of the rate of fraud in home health benefits would allow CMS to reliably assess its efforts at eliminating or reducing fraud. Without a baseline, officials said, the agency cannot know whether its antifraud efforts are as effective as they could be. We previously reported that the lack of a baseline for the amount of health-care fraud that exists limits CMS’s ability to determine whether its activities are effectively reducing health care fraud and abuse. 
A baseline estimate could provide an understanding of the extent of fraud and, with additional information on program activities, could help to inform decision making related to allocation of resources to combat health-care fraud. As described in the Fraud Risk Framework, in the absence of a fraud baseline, agencies can gather additional information on the short-term or intermediate outcomes of some antifraud initiatives, which may be more readily measured. For example, CMS has developed some performance measures to provide a basis for monitoring its progress towards meeting the program-integrity goals set in the HHS Strategic Plan and Annual Performance Plan. Specifically, CMS measures whether it is meeting its goal of “increasing the percentage of Medicare FFS providers and suppliers identified as high risk that receive an administrative action.” CMS does not set specific antifraud goals for other parts of Medicare or Medicaid; other CMS performance measures relate to measuring or reducing improper payments in CHIP, Medicaid, and the various parts of Medicare. CMS uses return-on-investment and savings estimates to measure the effectiveness of its Medicare program-integrity activities and FPS. For example, CMS uses return-on-investment to measure the effectiveness of FPS and, in response to a recommendation we made in 2012, CMS developed outcome-based performance targets and milestones for FPS. CMS has also conducted individual evaluations of its program-integrity activities, such as an interim evaluation of the prior-authorization demonstration for power mobility devices that began in 2012 and is currently implemented in 19 states. Commensurate with greater maturity of control activities in Medicare FFS compared to other parts of Medicare and Medicaid, monitoring and evaluation activities for Medicare Parts C and D and Medicaid are more limited. 
For example, CMS calculates savings for its program-integrity activities in Medicare Parts C and D, but not a full return-on-investment. CMS officials told us that calculating costs for specific activities is challenging because of overlapping activities among contractors. CMS officials said they continue to refine methods and develop new savings estimates for additional program-integrity activities. According to the Fraud Risk Framework, effective managers develop a strategy and evaluate outcomes using a risk-based approach. In developing an effective strategy and antifraud activities, managers consider the benefits and costs of control activities. Ongoing monitoring and periodic evaluations provide reasonable assurance to managers that they are effectively preventing, detecting, and responding to potential fraud. Monitoring and evaluation activities can also support managers’ decisions about allocating resources, and help them to demonstrate their continued commitment to effectively managing fraud risks. As CMS takes steps to develop an antifraud strategy, it could include plans for refining and building on existing methods such as return-on-investment or savings measures, and setting appropriate targets to evaluate the effectiveness of all of CMS’s antifraud efforts. Such a strategy would help CMS to efficiently allocate program-integrity resources and to ensure that the agency is effectively preventing, detecting, and responding to potential fraud. For example, while doing so would involve challenges, CMS’s strategy could detail plans to advance efforts to measure a potential fraud rate through baseline and periodic measures. Fraud rate measurement efforts could also inform risk assessment activities, identify currently unknown fraud risks, align resources to priority risks, and develop effective outcome metrics for antifraud controls.
Such a strategy would also help CMS ensure that it has effective performance measures in place to assess its antifraud efforts beyond those related to providers in Medicare FFS, and establish appropriate targets to measure the agency’s progress in addressing fraud risks. As CMS makes plans to move forward with a strategy and to further develop evaluation and monitoring mechanisms, it will be important to share its efforts with stakeholders. The Fraud Risk Framework states that effective managers communicate lessons learned from fraud risk management activities to stakeholders. For example, CMS could be a leader to states in measuring the effectiveness of program-integrity efforts. Officials in three of the four states we spoke with expressed interest in receiving CMS guidance on how to measure the effectiveness of their Medicaid program-integrity efforts, such as by providing models for how to calculate return-on-investment.

Conclusions

Medicare and Medicaid provide health insurance to over 129 million Americans, but the size—in terms of number of beneficiaries and amount of expenditures—as well as complexity of these programs make them inherently susceptible to fraud and improper payments. CMS currently manages these risks across its programs as part of a broader approach to identifying and controlling for multiple sources of improper payments and by developing relationships with an extensive network of stakeholders. In Medicare and Medicaid specifically, we note that CMS has taken many important steps toward implementing a strategic approach for managing fraud. However, the agency could benefit by more fully aligning its efforts with the four components of the Fraud Risk Framework.
CMS is well positioned to leverage its fraud risk management efforts—such as demonstrated leadership for combating fraud, existing control activities, and stakeholder relationships—to provide additional antifraud training, as well as to develop an antifraud strategy based on fraud risk assessments for Medicare and Medicaid. We recognize that the effort may be challenging, given the size and complexity of Medicare and Medicaid, and the need to balance antifraud activities with CMS’s other mission priorities. However, by not employing the actions identified in the Fraud Risk Framework and incorporating them in its approach to managing fraud risks, CMS is missing a significant opportunity to better ensure employee vigilance against fraud, and to organize and focus its many antifraud and program-integrity activities and related resources into a comprehensive strategy. Such a strategy would (1) provide reasonable assurance that CMS is targeting the most-significant fraud risks in its programs and (2) help protect the government’s substantial and growing investments in these programs.

Recommendations for Executive Action

We are making the following three recommendations to CMS:

The Administrator of CMS should provide fraud-awareness training relevant to risks facing CMS programs and require new hires to undergo such training and all employees to undergo training on a recurring basis. (Recommendation 1)

The Administrator of CMS should conduct fraud risk assessments for Medicare and Medicaid to include respective fraud risk profiles and plans for regularly updating the assessments and profiles. (Recommendation 2)

The Administrator of CMS should, using the results of the fraud risk assessments for Medicare and Medicaid, create, document, implement, and communicate an antifraud strategy that is aligned with and responsive to regularly assessed fraud risks. This strategy should include an approach for monitoring and evaluation.
(Recommendation 3)

Agency Comments

We provided a draft of this report to HHS and DOJ for comment. HHS provided written comments, which are reprinted in appendix I. DOJ did not have comments. HHS and DOJ also provided technical comments, which we incorporated as appropriate. In commenting on this report, HHS agreed with our three recommendations. Specifically, in response to our first recommendation to provide required fraud-awareness training to all employees, HHS stated that it will develop and implement a fraud-awareness training plan to ensure all CMS employees receive training. Regarding our second recommendation to conduct fraud risk assessments for Medicare and Medicaid, HHS stated that it is currently conducting a fraud risk assessment on the federally facilitated marketplace and, when this assessment is complete, will apply the lessons learned in assessing this program to fraud risk assessments of Medicare and Medicaid. In response to our third recommendation to create, document, implement, and communicate an antifraud strategy that is aligned with and responsive to regularly assessed fraud risks, HHS stated that it will develop respective risk-based antifraud strategies after completing fraud risk assessments for Medicare and Medicaid.

We are sending copies of this report to the Acting Secretary of Health and Human Services, the Administrator of CMS, the Assistant Attorney General for Administration at DOJ, as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Health and Human Services

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tonita Gillich (Assistant Director), Irina Carnevale (Analyst-in-Charge), Michael Duane, Laura Sutton Elsberg, and Catrin Jones made key contributions to this report. Also contributing to the report were Lori Achman, James Ashley, Colin Fallon, Leslie V. Gordon, Maria McMullen, Sabrina Streagle, and Shana Wallace.
Why GAO Did This Study

CMS, an agency within the Department of Health and Human Services (HHS), provides health coverage for over 145 million Americans through its four principal programs, with annual outlays of about $1.1 trillion. GAO has designated the two largest programs, Medicare and Medicaid, as high risk partly due to their vulnerability to fraud, waste, and abuse. In fiscal year 2016, improper payment estimates for these programs totaled about $95 billion. GAO's Fraud Risk Framework and the subsequent enactment of the Fraud Reduction and Data Analytics Act of 2015 have called attention to the importance of federal agencies' antifraud efforts. This report examines (1) CMS's approach for managing fraud risks across its four principal programs, and (2) how CMS's efforts managing fraud risks in Medicare and Medicaid align with the Fraud Risk Framework. GAO reviewed laws and regulations and HHS and CMS documents, such as program-integrity manuals. It also interviewed CMS officials and a sample of CMS stakeholders, including state officials and contractors. GAO selected states based on fraud risk and other factors, such as geographic diversity. GAO selected contractors based on a mix of companies and geographic areas served.

What GAO Found

The approach that the Centers for Medicare & Medicaid Services (CMS) has taken for managing fraud risks across its four principal programs—Medicare, Medicaid, the Children's Health Insurance Program (CHIP), and the health-insurance marketplaces—is incorporated into its broader program-integrity approach. According to CMS officials, this broader program-integrity approach can help the agency develop control activities to address multiple sources of improper payments, including fraud. As the figure below shows, CMS views fraud as part of a spectrum of actions that may result in improper payments.
CMS's efforts managing fraud risks in Medicare and Medicaid partially align with GAO's 2015 A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). This framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt. CMS has shown commitment to combating fraud in part by establishing a dedicated entity—the Center for Program Integrity—to lead antifraud efforts. Furthermore, CMS is offering and requiring antifraud training for stakeholder groups such as providers, beneficiaries, and health-insurance plans. However, CMS does not require fraud-awareness training on a regular basis for employees, a practice that the framework identifies as a way agencies can help create a culture of integrity and compliance. Regarding the assess and design and implement components, CMS has taken steps to identify fraud risks, such as by designating specific provider types as high risk and developing associated control activities. However, it has not conducted a fraud risk assessment for Medicare or Medicaid, and has not designed and implemented a risk-based antifraud strategy. A fraud risk assessment allows managers to fully consider fraud risks to their programs, analyze their likelihood and impact, and prioritize risks. Managers can then design and implement a strategy with specific control activities to mitigate these fraud risks, as well as an appropriate evaluation approach consistent with the evaluate and adapt component. By developing a fraud risk assessment and using that assessment to create an antifraud strategy and evaluation approach, CMS could better ensure that it is addressing the full portfolio of risks and strategically targeting the most-significant fraud risks facing Medicare and Medicaid.
What GAO Recommends

GAO recommends that CMS (1) provide and require fraud-awareness training to its employees, (2) conduct fraud risk assessments, and (3) create an antifraud strategy for Medicare and Medicaid, including an approach for evaluation. HHS concurred with GAO's recommendations.
Background

BEP produces notes at the request of the Federal Reserve. Each year, the Federal Reserve determines how many currency notes are needed to meet the demand for currency. Federal Reserve and BEP officials then agree on a payment amount for note production, including costs associated with maintaining BEP’s facilities. The Federal Reserve’s payments are deposited into BEP’s revolving fund, which is used for BEP’s operational expenses, including note production. According to Treasury officials, the revolving fund can pay for renovations and retrofitting of a production facility, but not for land purchase or new building construction. In 2016, the Federal Reserve paid around $660 million for note production. In order to cover all expenses associated with its needs, including currency production, the Federal Reserve generates income primarily from the interest on its holdings of U.S. government securities, agency mortgage-backed securities, and agency debt acquired through open market operations. The Federal Reserve is required to transfer any surplus funds over $7.5 billion to the General Fund of the U.S. Treasury. Increases or decreases in operating costs or BEP’s currency production could affect these surpluses and subsequent transfers to the General Fund. Historically, the Federal Reserve has had significant surpluses. In 2016, the Federal Reserve transferred $92 billion to the General Fund. BEP’s Washington, D.C., facility consists of a 104-year-old, multi-story, multi-wing Main Building and an 80-year-old, multi-story, multi-wing Annex Building (see fig. 1). The Main Building is the primary production building, and the Annex Building is used primarily for administrative functions. Both buildings qualify for historic designation, and thus any alterations would be subject to certain requirements under the National Historic Preservation Act of 1966, as amended.
In addition to these buildings, BEP leases a warehouse in Landover, Maryland, to store production supplies in part because the two Washington, D.C., buildings do not have the necessary infrastructure to accommodate shipments carried by large commercial trucks.

BEP’s Fort Worth facility was built in order to ensure reliable currency production in the event of any disruption of operations at the D.C. facility. BEP was able to obtain donated land and a building in Fort Worth and therefore did not need to purchase land or construct a new facility. Specifically, in 1986, BEP accepted a proposal from the City of Fort Worth that included 100 acres of donated land and a donated building shell to be built to BEP’s specifications. BEP then used its revolving fund to pay for the building’s interior retrofitting, including a central energy plant and installation of currency presses. The Fort Worth facility began producing notes in December 1990 and was intended to produce around 25 percent of U.S. notes. According to BEP officials, as a result of increased demand for U.S. notes and production limitations associated with the D.C. facility, the Fort Worth facility has produced an increasingly large share of notes. In fiscal year 2016, the Fort Worth facility produced nearly 60 percent of notes, while the D.C. facility produced the remaining 40 percent.

BEP’s Proposal for a New Production Facility Considered Project Costs and Feasibility, Security, Efficiency, Safety, and Future Flexibility

BEP Studies from 2010 to 2017 Determined the Cost and Feasibility of Multiple Alternatives

From 2010 through 2017, BEP contracted for various studies to investigate alternatives, costs, potential sites, and program requirements to ensure future currency production in the D.C. area (see table 1 for details of the studies). 
In BEP’s 2013 study and since then, the agency has focused on three alternatives:

“Renovation”—a major renovation of the current facility

“New build”—a new building in a different location that would house currency production and all administrative functions

“Hybrid”—a new building in a different location that would house currency production, with administrative functions remaining in one of its current buildings

According to BEP officials, the cost estimates in the 2013 study were an important factor in their preference for a new facility instead of a renovation. The 2013 study concluded that BEP should pursue the new build alternative because it was estimated to be the least costly option, could be completed in the shortest time frame, and promised the greatest efficiencies. The study found that the renovation alternative would be the most costly option and take the longest time to complete because it would require BEP to produce currency at its current location while it was being renovated. BEP officials told us this would require moving production equipment from the Main Building to the Annex during the renovation and back to the Main Building once it was renovated. According to GSA officials, renovations are often more costly than new construction. According to Federal Reserve officials, moving large, complex printing presses and machines from one building to another and then back again significantly expands the renovation’s timeframe, as time would be needed to test the machines to get them back into specification. The Federal Reserve further noted that some modern presses will not fit into the Main Building without significant structural alterations, which would add cost and time to a renovation.

Following the release of the 2013 study, BEP proposed to the Secretary of the Treasury, with the support of Treasury officials, that Treasury and BEP pursue the hybrid alternative as their first choice (see table 2 for details on BEP’s proposal). 
BEP officials told us that they, along with Treasury, selected the hybrid alternative even though the hybrid was more expensive than the new build alternative. According to BEP officials, the cost difference between the hybrid and new build was outweighed by the value of maintaining administrative functions in Washington, D.C., to facilitate the day-to-day decision-making process among BEP, Treasury, and Federal Reserve officials. According to Treasury officials, the ability for other Treasury employees to co-locate in the Main Building after the repurposing is completed would also provide long-term cost benefits to Treasury because Treasury could save on expensive lease agreements in downtown Washington, D.C. Further, Treasury officials noted that it is important that the Treasury Department maintain the Main Building as an asset because of its location and history, and Treasury officials prefer that BEP maintain some functions in the building.

The 2017 study provided cost estimates of BEP’s and Treasury’s preferred hybrid option, as well as the renovation option that BEP officials said they would pursue if BEP does not receive the necessary legal authority to construct a new facility. The study estimated that the hybrid option would cost approximately $1.389 billion and that the renovation option would cost approximately $1.957 billion.

Federal Reserve officials told us they concur with the 2013 study that a new facility is warranted, that a renovation of the existing facility would be more costly than a new facility, and that a renovation would not provide the same degree of efficiency. Federal Reserve officials said that they prefer the new build alternative because the 2013 study identified this alternative as the least expensive option and one that would provide a modern, efficient manufacturing process. 
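As a quick arithmetic check using only the two figures from the 2017 study cited above, the estimates imply that the renovation option would cost roughly 41 percent more than the hybrid option; the comparison itself is illustrative, not a calculation from the report:

```python
# Compare the 2017 study's estimates for the hybrid and renovation options.
hybrid_cost = 1.389e9      # hybrid option estimate, in dollars
renovation_cost = 1.957e9  # renovation option estimate, in dollars

difference = renovation_cost - hybrid_cost
premium = difference / hybrid_cost  # renovation premium over hybrid

print(f"Difference: ${difference / 1e9:.3f} billion")  # Difference: $0.568 billion
print(f"Renovation premium: {premium:.0%}")            # Renovation premium: 41%
```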
These officials also told us that, whatever alternative BEP pursues, the Federal Reserve will be financially responsible—whether it is for a new building, a renovated building, or the continuation of the currency production process in the D.C. facility. BEP officials stated that they support a new building over a renovation because the new build would both be less expensive and have greater benefits than a renovation. Furthermore, BEP officials told us that while they prefer to remain in the D.C. area, they would approve of the construction of a new facility in a different location if necessary. However, BEP officials also told us that if BEP does not get the legal authority necessary to use its revolving fund to purchase land and build a new facility in 2018, BEP will pursue a renovation of the existing D.C. facility beginning at the end of 2018.

BEP Considered Other Factors in Deciding to Propose a New Production Facility

Security

As a federal facility, BEP must meet physical security standards established by the Interagency Security Committee (ISC). According to an assessment conducted by BEP’s Office of Security, the D.C. facility does not meet many of the necessary requirements for a facility of its security level. While certain security improvements, such as blast-resistant windows or vehicle barriers, could be installed if the facility is renovated, other standards could only be addressed with a new facility. Specifically, the current buildings are located in an urban center surrounded by buildings (see fig. 1 above). As a result, according to the assessment, the facility does not have a secure perimeter because it lacks the required setback between the building and any point where an unscreened vehicle can travel or park. BEP officials said that even after a renovation, the facility would continue to have inadequate setback distance. 
According to the assessment, the facility’s designation as a historic building also limits BEP’s ability to make changes to the current facility to meet the necessary level of protection. For example, the facility’s placement on the historic registry limits BEP’s ability to make certain structural changes that could mitigate the building’s chances of progressively collapsing in the event of certain types of destructive attacks or actions. BEP’s Office of Security attributed certain security deficiencies to the facility’s limited setback distance and the buildings’ structure, and determined that the D.C. facility is at relatively high risk to threats such as an externally-placed portable explosive device.

Efficiency

BEP aims to provide quality banknotes in an efficient, cost-effective manner. However, BEP officials concluded that the layout of the D.C. facility makes production less efficient than the Fort Worth facility. According to BEP production data, from 2013 to 2016, manufacturing costs were higher at the D.C. facility for all comparable denominations. For example, in 2016, production costs of $1 and $20 notes were 23 percent and 7 percent higher, respectively, at the D.C. facility compared to the Fort Worth facility. Additionally, the D.C. facility employs more manufacturing personnel than Fort Worth, even though it produces fewer notes (see table 3). BEP officials attributed the difference in the costs to the D.C. facility’s multi-floor, multi-wing production layout. Specifically, in D.C., after notes are printed on one side, they are moved to another floor to dry for at least 72 hours, brought back to the original floor to be printed on the opposite side, and again moved to the other floor to dry. In Fort Worth, because the production occurs in one large room on one floor, these processes occur in adjacent spaces on the same floor. As a result, according to BEP, notes travel more than twice as far during production in the D.C. facility. 
According to BEP, Treasury, and Federal Reserve officials, a new production facility would offer greater efficiency gains than a renovated facility. According to BEP officials, maintaining production on one floor in an open space improves production efficiency. They added that a renovation of the D.C. facility could include tearing down some walls and raising ceilings, steps that could improve some production processes. However, they also noted that because the D.C. facility qualifies for a historic designation, a renovation could not alter the building’s shape. As a result, production would still occur on multiple levels and in separate wings if the facility were renovated. We have reported in the past that agencies faced challenges in rehabilitating and modernizing historic buildings for contemporary use because of their age, specific design characteristics, and their particular historical features.

Safety

According to its Strategic Plan, BEP is committed to providing a safe and positive work environment for its employees. However, BEP officials said that manufacturing employees at the D.C. facility face greater injury risk than at the Fort Worth facility. According to BEP workers’ compensation claim data, approved workers’ compensation claims at the D.C. facility accounted for approximately 67 percent of BEP’s approved claims from fiscal year 2013 through fiscal year 2016, or 200 of 297 approved claims. BEP officials attributed the higher number of workers’ compensation claims in the D.C. facility to the relatively high number of employees needed to produce fewer notes (see table 3) and the increased opportunity for employee injury because production material must be transported farther and between floors. BEP officials estimated that approximately 65 to 70 percent of all worker injuries are related to materials handling. BEP officials noted that there is an estimated $196-million deferred-maintenance backlog at the D.C. facility. 
This backlog includes maintenance to the facility’s electrical and architectural systems. Even if BEP had addressed these maintenance issues in the past, doing so would not have negated the need for a renovation or a new facility. BEP officials noted that a renovation would reduce some safety concerns, such as upgrading the facility’s electrical systems and adding more fire-rated exits as required by Occupational Safety and Health Administration regulations; however, a renovation would not be able to address the multi-floor production process that BEP officials attributed to employee injuries.

Flexibility

According to BEP officials, it is important for BEP to maintain flexible currency production to respond to production needs that may change over time. Specifically, BEP officials said that a production facility should have the ability to adapt to changes in production equipment. Both BEP and Federal Reserve officials told us that the new equipment likely will be larger than current machinery. According to a representative from a leading currency printing equipment manufacturer from which BEP buys its printing equipment, future equipment is unlikely to decrease in size. BEP officials said that, while the D.C. facility could be renovated to accommodate larger equipment, it would not be possible to replicate the large, open production floor of the Fort Worth facility, which allows for simple installation of equipment. BEP officials told us that, unlike the current D.C. facility, a new production facility would be able to easily accommodate the printing equipment necessary for security features that BEP is currently developing for the next currency redesign.

Flexibility is also an important factor when considering the future demand for currency. The demand for currency fluctuates, and recent changes in how the public makes purchases could affect the demand for currency. 
Some observers have noted that the increased use of new payment technologies—such as online banking and phone applications—as well as the rise in online purchases may lead to a substantially reduced demand for currency. In a few countries, such as Sweden, noncash transactions have become common and the demand for currency has fallen substantially. In the United States, there are several indications that currency demand will not substantially decline within the next decade. For example, the number of U.S. currency notes in circulation increased by 43 percent from 2008 to 2016. In addition, the number of ATMs in the United States continues to grow, and a 2016 Federal Reserve study of consumer payment choice found that cash still accounted for 32 percent of all transactions and more than 50 percent of transactions under $25. This continued strength in the demand for cash has several sources. Cash can be seen as a hedge against uncertainties, such as natural disasters or political or economic turmoil, and also has advantages related to privacy, anonymity, and personal data security. Moreover, according to the Federal Deposit Insurance Corporation, approximately 25 percent of U.S. households have limited access to the products and services of the banking industry; these “unbanked” and “underbanked” populations, who may not have many alternative means of payment, rely largely on cash. Federal Reserve and Treasury officials we spoke with do not believe that the use of cash in the U.S. will decline in any significant way over the next decade. In particular, the Federal Reserve predicts a continued rise in demand for cash over the next 10 years, despite the increased availability of noncash payment options, indicating that a new or renovated facility will still be required for currency production. 
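The 43 percent increase in notes in circulation between 2008 and 2016 corresponds to a compound annual growth rate of roughly 4.6 percent; this is a back-of-the-envelope calculation from the report's figure, not a rate stated in the report:

```python
# Implied compound annual growth rate (CAGR) of U.S. notes in
# circulation, based on the reported 43 percent growth over 2008-2016.
total_growth = 0.43  # 43 percent increase over the period
years = 2016 - 2008  # 8 years

cagr = (1 + total_growth) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # Implied annual growth: 4.6%
```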
According to BEP officials, a new production facility would better manage the ebbs and flows in the future demand for currency than a renovation of the current facility. Specifically, should production demand increase, a new production facility could be designed to easily scale to meet new production requirements. Conversely, should the demand for currency decline in the coming years or substantially decline in the future, unused space in a new facility could be partitioned off and be used for other purposes or by another Treasury agency.

BEP Generally Followed Leading Capital-Planning Practices, and Its 2017 Cost Estimate Partially Met the Characteristics of a Reliable Cost Estimate

BEP Generally Followed Applicable Leading Capital-Planning Practices

Capital investments in infrastructure can require significant resources to construct, operate, and maintain over the course of their life-cycle. Leading capital-planning practices can help agencies determine the resources needed to meet their mission, goals, and objectives and how to efficiently and effectively satisfy those needs throughout the capital decision-making process. As shown in table 4, we found that BEP’s capital investment decision-making process that resulted in its decision to pursue a new currency-production facility (as part of the previously described hybrid option) followed three applicable capital-planning leading practices and substantially followed the fourth.

Needs assessment: BEP followed this leading practice, which calls for comprehensively assessing the resources needed as a basis for investment decisions. BEP conducted a facility condition assessment in 2004 that contributed to BEP’s effort to seek a new production facility, resulting in the studies from 2010-2017 discussed above. 
The assessment identified the current condition of the facility and the facility’s capabilities, including production inefficiencies that led BEP to begin a multi-year effort to determine its immediate and future infrastructure needs. BEP also determined in 2004 that the agency had almost $200 million in deferred maintenance needs. BEP officials told us that they consulted with Federal Reserve officials and concluded that it would not be prudent to spend substantial funds to address this deferred maintenance. For example, officials determined that it would not be prudent to replace the heating and plumbing systems while pursuing a new production facility. As a result, BEP deferred some maintenance items, such as replacing heating systems, which would not compromise safety and production. However, BEP officials said that they prioritized and maintained critical items, such as its cleaning and recycling systems, and implemented energy conservation initiatives to help reduce costs. As of October 2017, BEP’s deferred maintenance backlog was about $196 million.

Alternatives evaluation: BEP substantially followed this leading practice, which calls for a determination of how best to bridge performance gaps by identifying and evaluating alternative approaches. As noted above, BEP first considered multiple alternatives on how to achieve its mission to efficiently produce banknotes. Further, BEP considered different methods to fund and obtain land and a shell for a new production facility (see table 5). To evaluate alternatives for the location of a new facility, a contractor identified, in 2015, potential construction sites in the D.C. area and compared each site to a set of criteria. However, BEP officials told us that they discounted locations outside the metropolitan D.C. area because they believed it would be costly to relocate employees or hire and train new manufacturing personnel to replace employees who do not relocate. 
BEP officials said that the few employees who relocated from the D.C. facility to the Fort Worth facility when it first opened were paid $50,000 each for their move. Based on these factors, BEP focused on a D.C.-area location and did not conduct an analysis of the financial implications of building a new facility outside the D.C. area, where construction or other costs could be less expensive.

Strategic linkage: BEP followed this leading practice, which stresses the importance of linking plans for capital asset investments both to an organization’s overall mission and to its strategic goals. In the 2014-2018 Strategic Plan, BEP noted that it would seek approval to proceed with the 2013 study’s recommendation to construct a new production facility. According to the strategic plan, a new production facility would help achieve BEP’s long-articulated strategic goal of being a printer of world-class currency notes, providing its customers and the public with superior products through excellence in manufacturing and technological innovation. Furthermore, Treasury concurred with BEP’s assessment and added its request for legal authority to purchase land and build a new facility in the fiscal year 2018 President’s Budget proposal.

Long-term capital plan: BEP followed this leading practice, which calls for a capital plan that documents an agency’s decisions and describes its mission, planning process, and risk management, among other things. BEP completed all of the key activities associated with this practice. For example, in its fiscal year 2018 capital investment plan, BEP lays out the purpose, goals, and benefits of a new currency production facility. It also notes the implications of exposing currency production to vulnerabilities relating to potential facility systems failures and inefficiencies. 
BEP’s 2017 Cost Estimate Partially Met the Four Characteristics of a High-Quality, Reliable Estimate

A reliable cost estimate—a summation of individual cost elements—is critical to support the capital planning process by providing the basis for informed investment decision-making, realistic budget formulation and program resourcing, and accountability for results. BEP’s 2017 cost estimate includes a contractor-developed estimate of the cost for the construction of a new production plant and the repurposing of the Main Building for BEP’s administrative offices (the hybrid alternative) and a BEP-developed estimate of additional project costs, such as additional production equipment and real estate acquisition. We found this estimate partially met the four characteristics of a high-quality, reliable cost estimate (see table 6). In developing this estimate, BEP relied on GSA guidance that was available at the time. That guidance did not refer to leading practices for cost estimates that are identified in GAO’s Cost Guide. GSA has recently updated its guidance to refer to the leading practices in GAO’s Cost Guide, and BEP officials told us that they will follow this updated GSA guidance when developing any future cost estimates.

Comprehensive: BEP’s 2017 cost estimate substantially met the comprehensive characteristic. For example, the estimate included most life-cycle cost components, defined the program and its current schedule, and included a consistent work breakdown structure. However, the estimate did not include operating and sustainment costs or information regarding the ground rules and assumptions used to develop the costs.

Well documented: BEP’s 2017 cost estimate partially met the well-documented characteristic. For example, the estimate documented the source data and the technical assumptions used for the construction costs, which were reviewed by GSA and BEP personnel. 
However, documentation for the contractor’s estimate and its sources for the factors used in the estimate did not include details to enable an outside cost analyst to replicate the work. According to BEP officials, the cost data are the contractor’s proprietary data. BEP officials also told us that sources for the factors used were based on subject matter expert opinion.

Accurate: BEP’s cost estimate partially met the accurate characteristic. While we found only minor rounding errors, no errors in the model build-up calculations, and no calculation or adjustment errors in the estimate, the estimate nonetheless did not provide information regarding the bias of the costs and the appropriateness of the estimating technique used. However, BEP did follow industry standards to develop contingency costs for a pre-design estimate for a program that has not yet been authorized. We also found that $515 million of the internal estimate (37 percent of the program’s total cost estimate) was based on undocumented subject matter opinion or escalated incorrectly from the 2013 study estimate. Further, BEP’s estimate did not use the same construction-year midpoint as its contractor for the inflation assumptions. According to BEP officials, this is because BEP’s costs were projected based upon the contractor’s estimate of fiscal year 2022, while the production equipment was escalated to fiscal year 2021 because this is the projected year for purchasing equipment. The officials acknowledged, however, that this rationale was not documented in the cost estimate. BEP clarified that the estimates did not explicitly state a confidence level because the estimate is in the pre-planning stage. They added that it is common in the design and construction industry that contingencies are applied to the estimate based on the completeness of design, and as the design progresses, these contingencies are reduced as more becomes known about the project. 
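The midpoint discrepancy described above (construction costs escalated to fiscal year 2022, equipment costs to fiscal year 2021) can be illustrated with a simple compound-escalation sketch. The base cost, base year, and 3 percent escalation rate here are assumptions for illustration only, not figures from BEP’s estimate:

```python
# Illustrate why escalating to different fiscal-year midpoints yields
# different totals. All inputs are hypothetical.
base_cost = 100e6  # hypothetical cost in FY2017 dollars
rate = 0.03        # assumed annual escalation (inflation) rate

def escalate(cost, rate, base_year, target_year):
    """Compound-escalate a cost from base_year to target_year."""
    return cost * (1 + rate) ** (target_year - base_year)

to_2021 = escalate(base_cost, rate, 2017, 2021)  # equipment midpoint
to_2022 = escalate(base_cost, rate, 2017, 2022)  # construction midpoint
print(f"FY2021: ${to_2021 / 1e6:.1f}M, FY2022: ${to_2022 / 1e6:.1f}M")
# FY2021: $112.6M, FY2022: $115.9M
```

The one-year difference in midpoints alone shifts the escalated total by a full year of compounding, which is why an undocumented mix of midpoints makes an estimate harder to verify.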
As there have not been actual costs yet, variances between planned and actual costs have not been documented, explained, and reviewed.

Credible: BEP’s 2017 cost estimate partially met the credible characteristic. For example, BEP provided documentation showing that both BEP and GSA reviewed the contractor’s construction estimate and its technical assumptions. However, the estimate did not include a sensitivity analysis for the construction costs, a risk and uncertainty analysis, or cross-checks to see whether similar results could be obtained. A cross-check could include an independent cost estimate conducted by an outside group to determine whether other estimating methods would produce similar results, but BEP officials told us that no independent cost estimate was developed because it was too early in the project to do such a comparison and that the construction estimate was developed in response to a government contract statement of work to prepare a preliminary budget forecast for BEP. Rather, BEP relied on what it characterized as an extensive review by BEP management and GSA officials.

Ability to Sell or Repurpose Potentially Vacant Space Could Affect the Total Cost to the Federal Government

The alternative that BEP pursues could have a financial effect on the federal government and ultimately taxpayers. Below, we discuss potential costs and potential savings associated with the disposition of the three buildings under the different scenarios based on our review of BEP documents and interviews with Treasury and GSA officials (see fig. 2). For example, Treasury, which has custody and control over the Main Building and the Annex, could experience costs if it needs to spend money to upgrade these buildings, but could also experience savings if it can repurpose the buildings or consolidate its employees into fewer buildings. 
GSA, which serves as the federal government’s primary real property and disposal agent, could incur costs for the marketing and disposal process, but could create savings for the government if it could repurpose or sell any vacated buildings. Proceeds from sales of Treasury-controlled facilities would benefit the federal government. While it is possible to identify some potential costs and benefits, it is too early to determine which costs or benefits may be realized or to attempt to quantify them. GSA and Treasury officials told us that the actions of other agencies or interested third parties (e.g., those potentially interested in purchasing the Annex) would affect the costs and cost-savings of any alternative. In addition, there are factors outside of the government’s control, such as timing and market conditions, that could affect costs and cost-savings. For example, changes in the Washington, D.C., real estate market could affect the opportunity to sell the Annex. Based on interviews with officials at GSA, Treasury, the Federal Reserve, and BEP, we have identified the following potential costs and savings for each building.

Potential costs and savings associated with the Main Building: Both BEP and Treasury officials told us that the Main Building will remain under Treasury’s custody and control, regardless of which alternative BEP undertakes.

Renovation: BEP would use its revolving fund to replace existing heating/cooling systems and windows in the Main Building with higher-efficiency ones. Ideally, there would be some long-term cost savings because the new systems would be less costly to operate. However, BEP officials told us that a renovation may be more expensive than currently estimated because the Main Building is over 100 years old and there could be unforeseen expenses depending on what is found once walls and ceilings are removed. 
New build: Treasury would likely pay to renovate the Main Building once BEP vacates it because the Main Building would remain under Treasury’s custody and control. The cost of this renovation could be partially offset by savings associated with co-locating other Treasury offices in the Main Building after the renovation is complete. For example, Treasury bureaus currently have 15 leased facilities with about 1.9-million square feet in the downtown D.C. area. The annual cost of these facilities is $91.7 million. While not all of the employees currently in leased space could move into a renovated Main Building, the Main Building’s 530,000 square feet could provide opportunities to reduce leasing costs. However, because these potential renovations and staff moves are not likely to occur for several years, Treasury officials told us that they are not able to determine either the costs or benefits of moving Treasury staff to the Main Building.

Hybrid: BEP’s revolving fund would pay for the renovation of one-third of the Main Building that would serve as BEP’s administrative office and a future visitors’ center. This step would leave the remaining two-thirds to be renovated to a “warm lit shell” to allow others to occupy the building. At this time, Treasury does not know what entity or account would pay for the renovation of the remaining two-thirds because, according to Treasury officials, they have not determined what the use of the balance of the Main Building would be, including what entity would fund any modifications needed for new occupants. If Treasury decided to use the Main Building for its own staff, then Treasury could fund the cost to convert the space to offices for other Treasury agencies. Under this scenario, there is both a cost to Treasury to renovate the space it plans to use as well as a savings in having Treasury staff vacate other leased space and move to a Treasury-controlled building. 
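The leased-space figures above imply a rough average lease rate, and from it a ballpark for the leasing costs a repurposed Main Building could offset. These are illustrative back-of-the-envelope numbers derived from the reported figures, not Treasury estimates:

```python
# Rough the implied lease rate from the reported figures and the
# potential annual offset from filling the Main Building's space.
leased_sqft = 1.9e6           # Treasury bureaus' leased space, sq ft
annual_lease_cost = 91.7e6    # annual cost of those leases, dollars
main_building_sqft = 530_000  # Main Building space, sq ft

rate_per_sqft = annual_lease_cost / leased_sqft
potential_offset = rate_per_sqft * main_building_sqft  # upper bound

print(f"Implied lease rate: ${rate_per_sqft:.2f}/sq ft/yr")
print(f"Potential annual offset: ${potential_offset / 1e6:.1f}M")
```

The offset is an upper bound: as the report notes, not all employees in leased space could move into a renovated Main Building, and actual savings would depend on which staff relocate.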
Potential costs and savings associated with Treasury’s Annex: The Annex could either remain for BEP’s administrative offices or could be declared excess and transferred to GSA for disposal.

Renovation: BEP’s revolving fund would cover the cost of renovating the entire Annex as a “warm lit shell” and a more extensive renovation of the portion of the Annex that BEP would use first as temporary space for its currency printing equipment and then permanently for its administrative office. According to BEP officials, the Annex would be renovated to accommodate currency-printing lines that would be relocated from the Main Building in order for the Main Building to be renovated. Once the Main Building is renovated, the Annex would then be renovated to become administrative space for BEP. This process could be quite costly and take more time as the Annex would be renovated twice for different purposes. However, if the unused part of the Annex could be used by Treasury for other Treasury offices, there could be some cost savings to Treasury. According to BEP officials, while BEP would use its revolving fund to renovate the Annex to a “warm lit shell,” the agency that ultimately occupies the unused space would be responsible for the costs associated with repurposing that space for its own purposes.

New build and Hybrid: BEP’s revolving fund would pay for any necessary environmental clean-up needed in order for the Annex to be declared as excess and transferred to GSA for disposal. GSA, as part of its mission, would incur costs such as marketing, conducting the disposition process, and concluding the property transfer. GSA’s disposal process can result in the building being transferred for use by another federal agency, being sold to a local or state government via a negotiated sale, being conveyed to a public entity or eligible non-profit for public uses (e.g., homeless use), or being sold to a private party via a public sale. 
As the Annex is centrally located in Washington, D.C., the building could be attractive to potential developers. GSA recently sold another federal building near the Annex for over $30 million. GSA officials believe that there would be significant market interest in the Annex due to the Annex’s location and recent private development in the area. Treasury and GSA officials stated that proceeds from the sale of the Annex would be deposited into the Land and Water Conservation Fund to benefit the federal government. On the other hand, there is no guarantee that GSA would be able to sell the Annex: our previous work found that the most frequent method of disposal for federal buildings from fiscal years 2011 through 2015 was demolition (57 percent) rather than sale (14 percent). Federal buildings identified for disposal may not be suitable for sale for reasons such as their age, location, and condition, factors that often make demolition the preferred disposal method. The unique configuration of the Annex with its five wings, its age and condition, and historic-designation eligibility could deter some potential buyers. The future demand for the building, interest from private-sector buyers, and the general economic and real estate market are uncertain and can change quickly. If the Annex is not sold and remains on the government’s real property inventory, generally BEP or Treasury would be responsible for any annual maintenance costs for the building. Alternatively, the unsold Annex could be donated to a state or local government that would then be responsible for maintenance costs.

Potential costs and savings associated with the leased warehouse: The warehouse is a GSA-leased property.

Renovation: BEP would continue its annual leasing of the warehouse, which would still be needed to accommodate large trucks that cannot access the D.C. facility.
The current lease costs approximately $3.4 million each year, and BEP recovers about $500,000 per year of these costs by permitting other Treasury components to use the building through interagency agreements.

New build and Hybrid: If BEP discontinued its lease after a new facility is completed, it would save approximately $2.9 million per year. If BEP ended its lease prior to the end of the lease term, GSA would need to find another entity to occupy the warehouse for the remainder of the lease term.

Agency Comments

We provided copies of the draft report to BEP, GSA, the Federal Reserve, and Treasury for review and comment. BEP coordinated with Treasury in providing comments. In these comments, reproduced in Appendix I, BEP emphasized the factors that led BEP to determine that a new facility is the preferred alternative for its currency production process and acknowledged our findings on those factors. BEP and the Federal Reserve also provided technical comments, which we incorporated as appropriate. GSA did not provide comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Bureau of Engraving and Printing, the Secretary of the Treasury, the Chair of the Federal Reserve Board, and the Administrator of the General Services Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of the Treasury

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, John W. Shumann (Assistant Director); Martha Chow (Analyst in Charge); Amy Abramowitz; Lacey Coppage; Delwen Jones; Jennifer Leotta; Josh Ormond; and Tomas Wind made key contributions to this report.
Why GAO Did This Study

BEP, within Treasury, designs and produces U.S. currency notes at BEP's facilities in Washington, D.C., and Fort Worth, Texas. The Federal Reserve pays for BEP's operational expenses, including currency production. BEP is requesting legal authority to purchase land and construct a new production facility in the D.C. area. BEP officials told GAO that, if it does not receive the necessary legal authority for a new production facility, it will renovate the D.C. facility. GAO was asked to review BEP's facility planning process. This report: (1) describes the results of facility studies that BEP has funded and factors that led BEP to propose a new production facility, (2) examines the extent to which BEP's actions align with leading capital planning and cost estimating practices, and (3) describes other factors that could affect total federal costs of BEP's actions. GAO analyzed BEP documents and data from 2010-2017 on currency note production, visited both BEP production facilities, assessed BEP's actions against leading capital planning and cost estimating practices, and interviewed officials from BEP, GSA, the Federal Reserve, and Treasury. GAO provided the draft report to BEP, GSA, the Federal Reserve, and Treasury for review. BEP coordinated with Treasury in its comments. In the comments, reproduced in Appendix I, BEP emphasized the factors that led BEP to determine that a new facility is the preferred alternative. BEP and the Federal Reserve also provided technical comments, which we incorporated as appropriate. GSA did not provide comments.

What GAO Found

The Bureau of Engraving and Printing's (BEP) studies and research determined that a new production facility would be less expensive and better address BEP's need for secure, efficient, and flexible currency production than a renovation of its Washington, D.C. facility.
According to 2017 cost estimates, BEP's preferred option—a new production facility in the Washington, D.C., area and some renovated administrative space in its current D.C. facility—would cost approximately $1.4 billion, while a renovation of its current facility for both production and administrative functions would cost approximately $2.0 billion. A new facility similar to BEP's Texas facility could have a secure perimeter that meets federal building security standards. Such a perimeter is not possible with the current facility. A new facility could also house production on a single production floor to allow for a more efficient production process. BEP generally followed leading capital-planning practices, and its 2017 cost estimate of a new production facility partially met the characteristics of a reliable cost estimate. BEP's capital planning followed leading practices, for example, by including a needs assessment, a link to BEP's strategic plan, and a long-term capital plan. BEP's cost estimate partially followed leading practices, for example, by including most life-cycle cost components and documentation of the data used for the estimate. However, it did not include sufficient sensitivity analyses, which identify a range of costs based on varying assumptions. BEP officials stated that they plan to follow the updated GSA guidance that includes GAO's cost-estimating leading practices when updating this early-stage estimate. The ability to sell or repurpose any part of the current D.C. facility could affect the total federal costs of BEP's actions. According to officials from the Department of the Treasury (Treasury) and the General Services Administration (GSA), there could be savings if Treasury could consolidate staff or operations into the vacated facility. There could also be savings if the unneeded facility could be sold to a private buyer.
However, there would be costs to prepare the facility for use by other entities, as well as costs if the unneeded facility does not sell. Agency officials said that it is too early to determine specific costs and savings.
Unlike the United States, Most IEA Members Rely on Private Reserves to Meet Reserve Obligations and Hold Significant Proportions of Their Reserves as Petroleum Products

As we found in our May 2018 report, in terms of how they meet their IEA 90-day reserve obligations, most other IEA members differ from the United States in two basic ways. First, as of December 2017, most other IEA members relied at least in part on private rather than public reserves to meet their obligations. Specifically, based on IEA data as of December 2017, 18 of the 25 IEA members that met their 90-day reserve obligation and had a formal process for holding and releasing reserves relied entirely or in part on private reserves; that is, they met their obligations through private reserves and either had no public reserves or had public reserves of less than 90 days. In contrast, the United States and 6 other IEA members met the 90-day reserve obligation exclusively through public reserves. The second way other IEA members differ from the United States is that most hold at least a third of their reserves as petroleum products, according to a 2014 IEA report. Holding petroleum products can be advantageous during certain disruptions because such reserves can be directly distributed to consumers, whereas crude oil must first be refined into products, adding response time. In contrast, more than 99 percent of the SPR (665.5 million barrels as of March 2018) is held as crude oil. Because of the large U.S. refining sector, crude oil from the SPR can be domestically refined into petroleum products to meet demand.
DOE Has Not Identified the Optimal Size for the SPR or the Potential Need for Regional Product Reserves

As we found in our May 2018 report, DOE has not identified the optimal size or the potential need for additional petroleum product reserves for the SPR. In 2016, DOE completed a long-term strategic review of the SPR after its last comprehensive examination had been conducted in 2005. The 2016 review examined the expected benefits of several SPR sizes, but it did not identify an optimal size and was limited in several ways. In particular, in the review, DOE did not fully consider recent and expected future changes in market conditions, such as the implications of projected fluctuations in net imports or the role of the private sector in responding to supply disruptions. Recent changes have contributed to SPR and private reserves reaching historically high levels on a net imports basis. These changes are expected to continue to evolve—according to government projections, the United States will become a net exporter in the late 2020s before again becoming a net importer between 2040 and 2050. In February 2005, we found that agencies should reexamine their programs if conditions change. Without addressing the limitations of its 2016 review and periodically performing reexaminations in the future, DOE cannot be assured that the SPR will be sized appropriately into the future. In May 2018, we recommended that DOE (1) supplement its 2016 review by conducting an additional analysis that takes into account, among other things, the costs and benefits of a wide range of different SPR sizes and (2) take actions to ensure that it periodically conducts and provides to Congress a strategic review of the SPR.
DOE partially agreed with the first recommendation and stated that it will conduct an additional analysis to assess the purpose, goals, and objectives of the SPR, taking into account private sector response, oil market projections, and any other relevant factors, which will lead to an evaluation of possible optimal sizes of the SPR in the future. DOE agreed with the second recommendation.

DOE has also not fully identified whether additional regional petroleum product reserves should be part of the SPR. The Quadrennial Energy Review of 2015 recommended that DOE analyze the need for additional or expanded regional product reserves by undertaking updated cost-benefit analyses for all of the regions of the United States that have been identified as vulnerable to fuel supply disruptions. In response, DOE studied the costs and benefits of regional petroleum product reserves on the West Coast and Southeast Coast, though it did not finalize or publicly release these studies. Nevertheless, the draft studies concluded that a product reserve in the Southeast would provide significant net economic benefits to the region and the United States, particularly in the event of a major hurricane, while further analyses are needed to determine the potential benefits of a reserve on the West Coast. According to DOE officials, the agency has no plans to conduct additional studies. Without completing studies on the costs and benefits of regional petroleum product reserves, DOE cannot ensure that it and Congress have the information they need to make decisions about whether additional regional product reserves are needed. In our May 2018 report, we recommended that DOE conduct or complete such studies. DOE disagreed with this recommendation, though we continue to believe that conducting these analyses will provide Congress with needed information.
DOE Has Taken Steps to Update Its Modernization Plans but Is Hindered by Uncertainty Regarding the SPR’s Long-term Size

As we found in our May 2018 report, DOE has taken steps to account for the effects of congressionally mandated oil sales in its plans for modernizing the SPR, though DOE’s current plans, developed in 2016, are based on information largely developed prior to recent congressionally mandated sales of an additional 117 million barrels of oil. According to DOE documents, the SPR modernization program is focused on a life extension project to modernize aging infrastructure to ensure that the SPR will be able to meet its mission requirements for the next several decades. The project’s scope of work has undergone several revisions since its inception in response to changing conditions and requirements, according to the agency. DOE has estimated that the SPR’s modernization will cost up to $1.4 billion, and according to officials, the agency had spent $22 million as of the end of February 2018. According to DOE officials, in March 2018, DOE commenced a study—the SPR post-sale configuration study targeted for completion in October 2018—to examine potential future reserve configurations and to account for the effects of congressionally mandated sales on the reserve and its modernization. Information from the study will inform DOE’s updates to the SPR’s modernization plans, according to DOE officials. Although the SPR has a design capacity to hold 713.5 million barrels of oil, in January 2017, the SPR held 695 million barrels. As shown in figure 2, congressionally mandated sales will cause excess storage capacity to grow to 308 million barrels or more by the end of fiscal year 2027—meaning that about 43 percent of the SPR’s total design capacity to store oil would be unused. In its ongoing SPR post-sale configuration study, DOE plans to explore some options to use potentially excess SPR assets, such as spare storage capacity.
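The roughly 43 percent figure follows directly from the storage numbers cited in this statement; as a minimal sketch of the arithmetic (the capacity figures are the statement's, the calculation itself is illustrative):

```python
# Figures cited in this statement, in millions of barrels.
design_capacity = 713.5    # SPR total design storage capacity
excess_by_fy2027 = 308.0   # projected excess capacity after mandated sales

# Share of total design capacity that would sit unused:
unused_share = excess_by_fy2027 / design_capacity
print(f"{unused_share:.0%}")  # 43%
```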
In withdrawing oil to meet congressionally mandated oil sales currently in place (290 million barrels through fiscal year 2027), DOE could close at least one SPR site based on our analysis of projected excess storage capacity. For example, if DOE were to close the smallest SPR site, Bayou Choctaw in Louisiana, the agency could also explore selling the connected pipeline and marine terminal, which are currently being leased to a private company. DOE could also consider leasing excess storage capacity to other countries so that they could store oil at the SPR. DOE had not entered into any such leases with other countries and had not considered such leases as of May 2018 because, according to DOE, the SPR has historically lacked capacity to store additional oil. DOE had not proposed any of these options or explored the revenue the agency could generate by selling or leasing these assets. However, according to DOE officials, the agency will examine the feasibility of such options in the ongoing SPR post-sale configuration study. In the course of our work, we also identified other options for handling potentially excess SPR assets that DOE was not planning on examining as of May 2018, largely because DOE did not have the authority to pursue them, according to agency officials. First, DOE could explore leasing storage capacity to private industry. U.S. oil production has generally increased over the last decade. As a result, the private sector may want to lease excess SPR capacity, which may be cheaper than above-ground storage, according to a representative of a private company we interviewed. Fees for doing so could help defray SPR storage or maintenance costs. However, agency officials told us that the Energy Policy and Conservation Act gave DOE authority to lease underutilized storage to other countries but not to the private sector. 
Second, if Congress determines that the SPR holds oil in excess of that needed domestically, DOE could explore selling contingent contracts for the excess oil rather than selling the oil outright. Australian and New Zealand officials told us that such contracts would help their countries meet their IEA 90-day reserve obligations. Australian officials told us that they have discussed this option with DOE. Currently the United States and Australia have agreed, through an arrangement, to allow Australia to contract for petroleum stocks located in the United States and controlled by commercial entities. While the arrangement does not cover government-owned oil in the SPR, if it did, based on our analysis, DOE could generate up to approximately $15 million if Australia purchased the maximum allowable amount of oil specified in an arrangement through contracts for excess SPR oil in 2018. However, although the Energy Policy and Conservation Act allows DOE to lease underutilized storage to other countries, DOE lacks the authority to sell contracts for the oil and does not plan to seek this authority, according to DOE officials. DOE officials told us that they did not plan to examine these options. According to DOE’s real property asset management order, the agency is to identify real property assets that are no longer needed to meet the program’s mission needs and that may be candidates for reuse or disposal. Once identified, the agency is to undertake certain actions, including determining whether to dispose of these assets by sale or lease. As part of its SPR post-sale configuration study, DOE plans to determine whether it is appropriate to close SPR facilities, and the relative benefit of any closures would be informed by potential lease revenues from maintaining sites so they could be leased, according to agency officials. 
However, as mentioned previously, we identified other options for handling potentially excess SPR assets that DOE was not planning to examine in its study. Although DOE does not currently have the authority to implement these options, according to officials, examining their potential use, including possible revenue enhancement, could inform Congress as it examines whether it should grant such authority. Without examining a full range of options in the SPR post-sale configuration study, DOE risks missing beneficial ways to modernize the SPR while saving taxpayer resources. In May 2018, we recommended that in completing its ongoing SPR post-sale configuration study, DOE should consider a full range of options for handling potentially excess assets and, if needed, request congressional authority for the disposition of these assets. DOE agreed with this recommendation.

Finally, as DOE takes steps to plan for the SPR’s modernization, ongoing uncertainty regarding the SPR’s long-term size and configuration has complicated DOE’s efforts. Congress has generally set the SPR’s size by mandating purchases or sales of oil. DOE officials told us they do not know whether Congress will mandate additional sales over the next 10 years or whether other changes may be required to the configuration of the reserve. Any additional congressionally mandated sales would require DOE to again revisit its modernization plans and assessments of the potential uses of any excess SPR assets. Oil market projections also have implications for the future of the SPR. The United States is projected to become a net exporter by the late 2020s and would then no longer have a 90-day reserve obligation, but it is projected to return to being a net importer between 2040 and 2050. These projected fluctuations could affect the desired size of the SPR in the future.
Such uncertainties create risks for DOE’s modernization plans, as DOE may end up spending funds on facilities that later turn out to be unnecessary should Congress ultimately decide on a larger- or smaller-sized SPR than DOE anticipates. In May 2018, we suggested that Congress may wish to consider setting a long-range target for the size and configuration of the SPR that takes into account projections for future oil production, oil consumption, the efficacy of the existing SPR to respond to domestic supply disruptions, and U.S. IEA obligations. In conclusion, we found that given the constrained budget environment and the evolving nature of energy markets and their vulnerabilities, it is important that DOE endeavor to ensure that the SPR is an efficient and effective use of federal resources. Chairman Upton, Ranking Member Rush, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff members have any questions about this testimony, please contact Frank Rusco, Director, Natural Resources and Environment, at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony included Quindi Franco (Assistant Director), Nkenge Gibson (Analyst-in-Charge), Philip Farah, Ellen Fried, Cindy Gilbert, Gregory Marchand, Celia Mendive, Patricia Moye, Camille Pease, Oliver Richard, Dan Royer, Rachel Stoiko, and Marie Suding. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Over 4 decades ago, Congress authorized the SPR—the world's largest government-owned stockpile of emergency crude oil—to reduce the impact of disruptions in supplies of petroleum products. Since 2015, Congress has also mandated sales of SPR oil to fund the modernization of SPR facilities and other national priorities. DOE manages the SPR, whose storage and related infrastructure is aging, and has plans to modernize its facilities. As a member of the International Energy Agency, the United States is obligated to maintain reserves equivalent to at least 90 days of the previous year's net imports (imports minus exports). As of March 2018, the SPR held about 665 million barrels of crude oil, about 138 days of net imports. This testimony highlights GAO's May 2018 report on the SPR, including the extent to which (1) DOE has identified the optimal size of the SPR, and (2) DOE's plans for modernizing the SPR take into account the effects of congressionally mandated crude oil sales. GAO reviewed DOE's documents and studies and interviewed agency officials.

What GAO Found

The Department of Energy (DOE) has not identified the optimal size of the Strategic Petroleum Reserve (SPR). In 2016, DOE completed a long-term strategic review of the SPR after its last comprehensive examination was conducted in 2005. The 2016 review examined the benefits of several SPR sizes, but it did not identify an optimal size and was limited in several ways. In particular, in the review, DOE did not fully consider recent and expected future changes in market conditions, such as the implications of projected fluctuations in net imports or the role of the private sector in responding to supply disruptions. These changes have contributed to SPR and private reserves reaching historically high levels on a net imports basis.
These changes are expected to continue to evolve, and according to government projections, the United States will become a net exporter in the late 2020s before again becoming a net importer between 2040 and 2050. GAO has found that agencies should reexamine their programs if conditions change. GAO recommended that DOE supplement its 2016 review by conducting an additional analysis, and take actions to ensure the agency periodically conducts a strategic review of the SPR. DOE generally agreed with these recommendations. DOE has taken steps to account for congressionally mandated sales of SPR crude oil in its $1.4 billion modernization plans for SPR's infrastructure and facilities. However, DOE's current plans, developed in 2016, are based on information largely developed prior to recent congressionally mandated sales of an additional 117 million barrels of oil. According to DOE officials, the agency began a study in March 2018 to assess the effects of these sales on the SPR's modernization. However, GAO reported that this study was not examining a full range of options for handling any excess SPR assets that may be created by currently mandated sales or any additional sales that may be mandated in the future, inconsistent with an agency order on real property asset management that calls for identifying excess assets. For example, according to officials, DOE does not currently have the authority to lease unused storage capacity to the private sector, and DOE was not planning to examine this option. If authorized, leasing unused SPR storage capacity could generate revenues that could help offset the costs of modernization. GAO recommended that DOE should consider a full range of options for handling potentially excess assets and, if needed, request congressional authority for the disposition of these assets. DOE agreed with this recommendation. 
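Reserve coverage in this statement is expressed in days of the previous year's net imports, that is, holdings divided by average daily net imports. A minimal sketch of that conversion using the figures cited above (about 665 million barrels equating to about 138 days); the figures are the statement's, the arithmetic is illustrative:

```python
# Figures cited in this statement.
spr_barrels = 665_000_000   # SPR crude oil holdings, March 2018
days_of_net_imports = 138   # reported coverage of those holdings

# Coverage (days) = holdings / average daily net imports, so the
# implied average daily net imports are:
implied_daily_net_imports = spr_barrels / days_of_net_imports
print(f"{implied_daily_net_imports / 1e6:.1f} million barrels per day")  # 4.8
```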
What GAO Recommends

GAO made four recommendations, including that DOE (1) supplement the 2016 review by conducting an additional analysis, (2) ensure it periodically reexamines the SPR, and (3) consider a full range of options for handling potentially excess assets. DOE partially agreed with the first recommendation and agreed with the other two recommendations.
Background

VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. In carrying out this mission, the department operates one of the largest health care delivery systems in America, providing health care to millions of veterans and their families at more than 1,500 facilities. The department’s three major components—the Veterans Health Administration (VHA), the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA)—are primarily responsible for carrying out its mission. More specifically, VHA provides health care services, including primary care and specialized care, and it performs research and development to address veterans’ needs. VBA provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance. Further, NCA provides burial and memorial benefits to veterans and their families. Collectively, the three components rely on approximately 340,000 employees to provide services and benefits. These employees work in VA’s Washington, D.C. headquarters, as well as 170 medical centers, approximately 750 community-based outpatient clinics, 300 veterans centers, 56 regional offices, and more than 130 cemeteries situated throughout the nation.

VA Relies Extensively on IT

The use of IT is critically important to VA’s efforts to provide benefits and services to veterans. As such, the department operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers, veteran-facing systems, benefits delivery systems, memorial services, and all other systems supporting the department’s mission.
The infrastructure is to provide for data storage, transmission, and communications requirements necessary to ensure the delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans. According to department data as of October 2016, there were 576 active or in-development systems in VA’s inventory of IT systems. These systems are intended to be used for the determination of benefits, benefits claims processing, and access to health records, among other services. VHA is the parent organization for 319 of these systems. Of the 319 systems, 244 were considered mission-related and provide capabilities related to veterans’ health care delivery. For example, VHA’s systems provide capabilities to establish and maintain electronic health records that health care providers and other clinical staff use to view patient information in inpatient, outpatient, and long-term care settings. VistA plays an essential role in helping the department fulfill its health care delivery mission. Specifically, VistA is an integrated medical information system for all veterans’ health information. It was developed in-house by the department’s clinicians and IT personnel and has been in operation since the early 1980s. As such, the system has long been vital to helping ensure the quality of health care received by the nation’s veterans and their dependents. VistA comprises more than 200 applications that assist in the delivery of health care and perform other important functions within the department, including financial management, enrollment, and registration. Some of these applications have been in operation for over 30 years and, according to VA, have become increasingly difficult and costly to maintain.
As such, the department has expended extensive resources to modernize the system and increase its ability to allow for the viewing or exchange of patient information with the Department of Defense (DOD) and private sector health providers. In addition, as we recently reported, VHA has unaddressed needs that indicate its current health IT systems, including VistA, do not fully support the organization’s business functions. Specifically, about 39 percent of all requests related to health IT needs have remained unaddressed after more than 5 years. Electronic health records are particularly crucial for optimizing the health care provided to veterans, many of whom may have health records residing at multiple medical facilities within and outside the United States. Taking steps toward interoperability—that is, collecting, storing, retrieving, and transferring veterans’ health records electronically—is significant to improving the quality and efficiency of care. One of the goals of interoperability is to ensure that patients’ electronic health information is available from provider to provider, regardless of where it originated or resides.

VA Manages IT Resources Centrally

Since 2007, VA has been operating a centralized organization, the Office of Information and Technology (OI&T), in which most key functions intended for effective management of IT are performed. This office is led by the Assistant Secretary for Information and Technology—VA’s Chief Information Officer (CIO). The office is responsible for providing strategy and technical direction, guidance, and policy related to how IT resources are to be acquired and managed for the department, and for working closely with its business partners—such as VHA—to identify and prioritize business needs and requirements for IT systems. Among other things, OI&T has responsibility for managing the majority of VA’s IT-related functions, including the maintenance and modernization of VistA.
As of 2016, OI&T was comprised of more than 15,000 staff, with more than half of these positions filled by contractors.

VA Requested Nearly $4.1 Billion for Fiscal Year 2018

For fiscal year 2018, the department's budget request included nearly $4.1 billion for IT. The department requested approximately $359 million for new systems development or modernization efforts, approximately $2.5 billion for maintaining existing systems, and approximately $1.2 billion for payroll and administration. For example, in its fiscal year 2018 budget submission, the department requested appropriations to support five IT portfolios, including the development and operations and maintenance for programs and projects related to the: Medical portfolio, which provides technology solutions to deliver modern, high-quality medical care capabilities to veterans ($944.2 million); Benefit portfolio, which addresses the technology needs managed by the Veterans Benefit Administration ($296.9 million); Memorial Affairs portfolio, which provides support for the modernization of applications and services for National Cemeteries at 133 locations nationwide ($24.5 million); Corporate portfolio, which consists of back office operations supporting the major business lines and department management ($270.6 million); and Enterprise IT, which provides the underlying infrastructure to enable the other portfolios to operate and includes such things as cybersecurity, data centers, cloud services, telephony, enterprise software, and data connectivity ($1.289 billion).

VA's Management of IT Has Contributed to High-Risk Designations

In 2015, we designated VA Health Care as a high-risk area for the federal government and, currently, we continue to be concerned about the department's ability to ensure that its resources are being used cost-effectively and efficiently to improve veterans' timely access to health care.
In part, we identified limitations in the capacity of VA’s existing systems, including the outdated, inefficient nature of certain systems and a lack of system interoperability—that is, the ability to exchange and use electronic health information—as contributors to the department’s IT challenges related to health care. These challenges present risks to the timeliness, quality, and safety of the health care. While we recently reported that the department has begun to demonstrate leadership commitment to addressing IT challenges, more work remains. Also, in February 2015, we added Improving the Management of IT Acquisitions and Operations to our list of high-risk areas. Specifically, federal IT investments too frequently fail or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. We have previously testified that the federal government has spent billions of dollars on failed IT investments, including, for example, VA’s Scheduling Replacement Project, which was terminated in September 2009 after spending an estimated $127 million over 9 years; and its Financial and Logistics Integrated Technology Enterprise program, which was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program. This high-risk area highlighted several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of investments. We noted that agencies’ implementation of these initiatives was inconsistent and that more work remained to demonstrate progress in achieving IT acquisition and operation outcomes. 
We also recently issued an update to our high-risk report and noted that, while progress has been made in addressing the high-risk area of IT acquisitions and operations, significant work remains to be completed. For example, we noted, among other things, that additional work was needed to establish action plans for federal agencies to modernize or replace obsolete systems. Specifically, we pointed out that many federal systems use outdated software languages and hardware, which has increased spending on operations and maintenance of technology investments. VA was among a handful of departments with one or more archaic legacy systems. As discussed in our recent report on legacy systems used by federal agencies, we identified 2 of the department's systems as being over 50 years old, and among the 10 oldest investments and/or systems that were reported by 12 selected agencies.

Personnel and Accounting Integrated Data (PAID)—This 53-year-old system automates time and attendance for employees, timekeepers, payroll, and supervisors. It is written in Common Business Oriented Language (COBOL), a programming language developed in the late 1950s and early 1960s, and runs on IBM mainframes.

Benefits Delivery Network (BDN)—This 51-year-old system tracks claims filed by veterans for benefits, eligibility, and dates of death. It is a suite of COBOL mainframe applications.

Ongoing uses of antiquated systems, such as PAID and BDN, contribute to agencies spending a large, and increasing, proportion of their IT budgets on operations and maintenance of systems that have outlived their effectiveness and are consuming resources that outweigh their benefits. Accordingly, we have recommended that VA identify and plan to modernize or replace its legacy systems. The department concurred with our recommendation and stated that it plans to retire and replace PAID with the Human Resources Information System Shared Service Center in 2017.
The department also stated that it has general plans to roll the capabilities of BDN into another system and to retire BDN in 2018.

FITARA Is Intended to Help VA and Other Agencies Improve Their Acquisitions of IT

Congress enacted federal IT acquisition reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act, or FITARA) in December 2014. This legislation was intended to improve agencies' acquisitions of IT and enable Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. The law applies to VA and other covered agencies. It includes specific requirements related to seven areas, including data center consolidation and optimization, agency CIO authority, and government-wide software purchasing.

Federal data center consolidation initiative (FDCCI). Agencies are required to provide the Office of Management and Budget (OMB) with a data center inventory, a strategy for consolidating and optimizing their data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved.

Agency CIO authority enhancements. CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing incremental development, as defined in capital planning guidance issued by OMB, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO.

Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software.
In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Expanding upon FITARA, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act of 2016, or the "MEGABYTE Act," further enhanced CIOs' management of software licenses by requiring agency CIOs to establish an agency software licensing policy and a comprehensive software license inventory to track and maintain licenses, among other requirements. In June 2015, OMB released guidance describing how agencies are to implement FITARA. This guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law's requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO's role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security.

VA Has Pursued Four VistA Modernization Initiatives Since 2001, with About a Billion Dollars Obligated for Contractors' Activities During Fiscal Years 2011 through 2016

In our draft report that is currently with VA for comments, we discuss the history of VA's efforts to modernize its health information system, VistA. These four efforts—HealtheVet, the integrated Electronic Health Record (iEHR), VistA Evolution, and the Electronic Health Record Modernization (EHRM)—reflect varying approaches that the department has considered to achieve a modernized health care system over the course of nearly two decades. The modernization efforts are described as follows.
In 2001, VA undertook its first VistA modernization project, the HealtheVet initiative, with the goals of standardizing the department’s health care system and eliminating the approximately 130 different systems used by its field locations at that time. HealtheVet was scheduled to be fully implemented by 2018 at a total estimated development and deployment cost of about $11 billion. As part of the effort, the department had planned to develop or enhance specific areas of system functionality through six projects, which were to be completed between 2006 and 2012. Specifically, these projects were to provide capabilities to support VA’s Health Data Repository and Patient Financial Services System, as well as the Laboratory, Pharmacy, Imaging, and Scheduling functions. In June 2008, we reported that the department had made progress on the HealtheVet initiative, but noted issues with project planning and governance. In June 2009, the Secretary of Veterans Affairs announced that VA would stop financing failed projects and improve the management of its IT development projects. Subsequently, in August 2010, the department reported that it had terminated the HealtheVet initiative. In February 2011, VA began its second modernization initiative, the iEHR program, in conjunction with DOD. The program was intended to replace the two separate electronic health record systems used by the two departments with a single, shared system. Moreover, because both departments would be using the same system, this approach was expected to largely sidestep the challenges that had been encountered in trying to achieve interoperability between their two separate systems. Initial plans called for the development of a single, joint system consisting of 54 clinical capabilities to be delivered in six increments between 2014 and 2017. Among the agreed-upon capabilities to be delivered were those supporting laboratory, anatomic pathology, pharmacy, and immunizations. 
According to VA and DOD, the single iEHR system had an estimated life cycle cost of $29 billion through the end of fiscal year 2029. However, in February 2013, the Secretaries of VA and DOD announced that they would not continue with their joint development of a single electronic health record system. This decision resulted from an assessment of the iEHR program that the secretaries had requested in December 2012 because of their concerns about the program facing challenges in meeting deadlines, costing too much, and taking too long to deliver capabilities. In 2013, the departments abandoned their plan to develop the integrated system and stated that they would again pursue separate modernization efforts. In December 2013, VA initiated its VistA Evolution program as a joint effort of VHA and OI&T that was to be completed by the end of fiscal year 2018. The program was to be comprised of a collection of projects and efforts focused on improving the efficiency and quality of veterans’ health care by modernizing the department’s health information systems, increasing the department’s data exchange and interoperability with DOD and private sector health care partners, and reducing the time it takes to deploy new health information management capabilities. Further, the program was intended to result in lower costs for system upgrades, maintenance, and sustainment. According to the department’s March 2017 cost estimate, VistA Evolution was to have a life cycle cost of about $4 billion through fiscal year 2028. Since initiating VistA Evolution in December 2013, VA has completed a number of key activities that were called for in its plans. For example, the department delivered capabilities, such as the ability for health providers to have an integrated, real-time view of electronic health record data through the Joint Legacy Viewer, as well as the ability for health care providers to view sensitive DOD notes and highlight abnormal test results for patients. 
VA also initiated work to standardize VistA across the 130 VA facilities and released enhancements to its legacy scheduling, pharmacy, and immunization systems. In addition, the department released the enterprise Health Management Platform, which is a web-based user interface that assembles patient clinical data from all VistA instances and DOD. Although VistA Evolution is ongoing, VA is currently in the process of revising its plan for the program as a result of the department recently announcing its pursuit of a fourth VistA modernization program (discussed below). For example, the department determined that it would no longer pursue additional development or deployment of the enterprise Health Management Platform—a major VistA Evolution component—because the new modernization program is envisioned to provide similar capabilities. In June 2017, the VA Secretary announced a significant shift in the department's approach to modernizing VistA. Specifically, rather than continue to use VistA, the Secretary stated that the department plans to acquire the same electronic health record system that DOD is implementing. In this regard, DOD has contracted with the Cerner Corporation to provide a new integrated electronic health record system. According to the Secretary, VA has chosen to acquire this same product because it would allow all of VA's and DOD's patient data to reside in one system, thus enabling seamless care between the department and DOD without the manual and electronic exchange and reconciliation of data between two separate systems. The VA Secretary added that this fourth modernization initiative is intended to minimize customization and system differences that currently exist within the department's medical facilities, and ensure the consistency of processes and practices within VA and DOD.
When fully operational, the system is intended to be the single source for patients to access their medical history and for clinicians to use that history in real time at any VA or DOD medical facility, which may result in improved health care outcomes. According to VA's Chief Technology Officer, Cerner is expected to provide integration, configuration, testing, deployment, hosting, organizational change management, training, sustainment, and licenses necessary to deploy the system in a manner that meets the department's needs. To expedite the acquisition, in June 2017, the Secretary signed a "Determination and Findings," which noted a public interest exception to the requirement for full and open competition, and authorized VA to issue a solicitation directly to the Cerner Corporation. According to the Secretary, VA expects to award a contract to Cerner in December 2017, and deployment of the new system is anticipated to begin 18 months after the contract has been signed. VA's Executive Director for the Electronic Health Records Modernization System stated that the department intends to incrementally deploy the new system to its medical facilities. Each facility is expected to continue using VistA until the new system has been deployed at that location. All VA medical facilities are anticipated to have the new system implemented within 7 to 8 years after the first deployment. Figure 1 shows a timeline of the four efforts that VA has pursued to modernize VistA since 2001.

VA Obligated about $1.1 Billion for VistA Modernization Contracts During Fiscal Years 2011 through 2016

For iEHR and VistA Evolution, the two modernization initiatives for which VA could provide contract data, the department obligated approximately $1.1 billion for contracts with 138 different contractors during fiscal years 2011 through 2016. Specifically, the department obligated approximately $224 million and $880 million, respectively, for contracts associated with these efforts.
Of the 138 contractors, 34 of them performed work supporting both iEHR and VistA Evolution. The remaining 104 contractors worked exclusively on either iEHR or VistA Evolution. Funding for the 34 contractors that worked on both iEHR and VistA Evolution totaled about $793 million of the $1.1 billion obligated for contracts on the two initiatives. Obligations for contracts awarded to the top 15 of these 34 contractors (which we designated as key contractors) accounted for about $741 million (about 67 percent) of the total obligated for contracts on the two initiatives. The remaining 123 contractors were obligated about $364 million for their contracts. The 15 key contractors were obligated about $564 million and $177 million for VistA Evolution and iEHR contracts, respectively. Table 1 identifies the key contractors and their obligated dollar totals for the two efforts. Additionally, we determined that, of the $741 million obligated to the key contractors, $411 million (about 55 percent) was obligated for contracts supporting the development of new system capabilities, $256 million (about 35 percent) was obligated for contracts supporting project management activities, and $74 million (about 10 percent) was obligated for contracts supporting operations and maintenance for iEHR and VistA Evolution. VA obligated funds to all 15 of the key contractors for system development, 13 of the key contractors for project management, and 12 of the key contractors for operations and maintenance. Figure 2 shows the amounts obligated for each of these areas. Further, based on the key contractors’ documentation, for the iEHR program, VA obligated $102 million for development, $65 million for project management, and $10 million for operations and maintenance. For the VistA Evolution Program, VA obligated $309 million for development, $191 million for project management, and $64 million for operations and maintenance. 
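The breakdown above is simple arithmetic on the reported totals, which a short check makes explicit (the figures below are the rounded obligation totals from this statement, in millions of dollars; small differences from the published percentages are due to rounding):

```python
# Key-contractor obligations for iEHR and VistA Evolution combined,
# in millions of dollars, as reported in this statement.
KEY_CONTRACTOR_OBLIGATIONS = {
    "development": 411,
    "project_management": 256,
    "operations_maintenance": 74,
}

def share(category: str) -> int:
    """Return a category's share of total key-contractor obligations, in whole percent."""
    total = sum(KEY_CONTRACTOR_OBLIGATIONS.values())
    return round(100 * KEY_CONTRACTOR_OBLIGATIONS[category] / total)

total = sum(KEY_CONTRACTOR_OBLIGATIONS.values())  # $741 million
print(total, share("development"), share("project_management"), share("operations_maintenance"))
# -> 741 55 35 10
```

The three shares reproduce the "about 55 percent," "about 35 percent," and "about 10 percent" figures cited above.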
Figure 3 shows the amounts obligated for contracts on the VistA Evolution and iEHR programs for development, project management, and operations and maintenance. In addition, table 2 shows the amounts that each of the 15 key contractors was obligated for the three types of contract activities performed on iEHR and VistA Evolution.

VA Is in the Process of Developing Plans for Its Latest VistA Modernization Initiative

Industry best practices and IT project management principles stress the importance of sound planning for system modernization projects. These plans should identify key aspects of a project, such as the scope, responsible organizations, costs, schedules, and risks. Additionally, planning should begin early in the project's lifecycle and be updated as the project progresses. Since the VA Secretary announced that the department would acquire the same electronic health record system as DOD, VA has begun planning for the transition from VistA Evolution to EHRM. However, the department is still early in its efforts, pending the contract award. In this regard, the department has begun developing plans that are intended to guide the new EHRM program. For example, the department has developed a preliminary description of the organizations that are to be responsible for governing the EHRM program. Further, the VA Secretary announced in congressional testimony in November 2017 a key reporting responsibility for the program—stating that the Executive Director for the Electronic Health Records Modernization System will report directly to the department's Deputy Secretary. In addition, the department has developed a preliminary timeline for deploying its new electronic health record system to VA's medical facilities, and a 90-day schedule that depicts key program activities. The department also has begun documenting the EHRM program risks.
Beyond the aforementioned planning activities undertaken thus far, the Executive Director stated that the department intends to complete a full suite of planning and acquisition management documents to guide the program, including a life cycle cost estimate and an integrated master schedule to establish key milestones over the life of the project. To this end, the Executive Director told us that VA has awarded two program management contracts to support the development of these plans to MITRE Corporation and Booz Allen Hamilton. According to the Executive Director, VA also has begun reviewing the VistA Evolution Roadmap, which is the key plan that the department has used to guide VistA Evolution since 2014. This review is expected to result in an updated plan that is to prioritize any remaining VistA enhancements needed to support the transition from VistA Evolution to the new system. According to the Executive Director, the department intends to complete the development of its plans for EHRM within 90 days after award of the Cerner contract, which is anticipated to occur in December 2017. Further, beyond the development of plans, VA has begun to staff an organizational structure for the modernization initiative, with the Under Secretary of Health and the Assistant Secretary for Information and Technology (VA’s Chief Information Officer) designated as executive sponsors. It has also appointed a Chief Technology Officer from OI&T, and a Chief Medical Officer from VHA, both of whom are to report to the Executive Director. VA’s efforts to develop plans for EHRM and to staff an organization to manage the program encompass key aspects of project planning that are important to ensuring effective management of the department’s latest modernization initiative. However, the department remains early in its modernization planning efforts, many of which are dependent on the system acquisition contract award, which has not yet occurred. 
The department’s continued dedication to completing and effectively executing the planning activities that it has identified will be essential to helping minimize program risks and guide this latest electronic health record modernization initiative to a successful outcome—one which VA, for almost two decades, has yet to achieve. VA’s Progress toward Consolidating and Optimizing Data Centers and Addressing Other Key FITARA-Related Area Falls Short of Performance Targets Beyond managing its system modernization efforts, such as VistA, VA has to ensure the effective implementation of the IT acquisition requirements called for in FITARA. Pursuant to FITARA, in August 2016, the Federal CIO issued a memorandum that announced the Data Center Optimization Initiative (DCOI). According to OMB, this new initiative supersedes and builds on the results of FDCCI, and is also intended to improve the performance of federal data centers in areas such as facility utilization and power usage. Among other things, DCOI requires 24 federal departments and agencies, including VA, to develop plans and report on strategies (referred to as DCOI strategic plans) to consolidate inefficient infrastructure, optimize existing facilities, improve security posture, and achieve costs savings. Further, the memorandum establishes a set of five data center optimization metrics and performance targets intended to measure agency’s progress in the areas of (1) server utilization and automated monitoring, (2) energy metering, (3) power usage effectiveness, (4) facility utilization, and (5) virtualization. The guidance also indicates that OMB is to maintain a public dashboard that will display consolidation-related costs savings and optimization performance information for the agencies. 
However, in a series of reports that we issued from July 2011 through August 2017, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans, data center optimization, and OMB’s tracking and reporting on related cost savings. Further, we previously reported that VA’s progress toward closing data centers, and realizing the associated cost savings, lagged behind that of other covered agencies. More recently, VA reported a total inventory of 415 data centers, of which 39 had been closed as of August 2017. While the department anticipates another 10 data centers will be closed by the end of fiscal year 2018, these closures fall short of the targets set by OMB. Specifically, even if VA meets all of its planned targets for closure, it will only close about 9 percent of its tiered data centers and about 18.7 percent of its non-tiered data centers by the end of fiscal year 2018, which is short of the respective 25 and 60 percent targets set by OMB. Further, while VA has reported $23.61 million in data center-related cost savings and avoidances for 2012 through August 2017, the department does not expect to realize further savings from the additional 10 data center closures in the next year. In addition, in August 2017 we reported that agencies needed to address challenges in optimizing their data centers in order to achieve cost savings. Specifically, we noted that, according to the 24 agencies’ data center consolidation initiative strategic plans as of April 2017, most agencies were not planning to meet OMB’s optimization targets by the end of fiscal year 2018. As of February 2017, VA reported meeting one of the five data center optimization metrics related to power usage effectiveness. 
Also, the department’s data center optimization strategic plan indicates that the department plans to meet three of the five metrics by the end of fiscal year 2018. Further, while OMB directed agencies to replace manual collection and reporting of metrics with automated tools no later than fiscal year 2018, VA had only implemented automated tools at 6 percent of its data centers. VA’s CIO Has Certified Adequate Incremental Development for Its Major IT Investments for Fiscal Year 2017, but Needs to Update Related Policy OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing incremental development, as defined in the capital planning guidance issued by OMB. Later OMB guidance on the law’s implementation—issued in June 2015—directed agency CIOs to define processes and policies for their agencies which ensure that they certify that IT resources are adequately implementing incremental development. Between May 2014 and November 2017, we reported on agencies’ efforts to utilize incremental development practices for selected major investments. In November 2017, we noted that agencies reported that 62 percent of major IT software development investments were certified by the agency CIO as using adequate incremental development in fiscal year 2017, as required by FITARA. VA’s CIO certified the use of adequate incremental development for all 10 of its major IT investments. 
However, VA had not yet updated the department's policy and process for the CIO's certification of major IT investments' adequate use of incremental development, in accordance with OMB's guidance on the implementation of FITARA, as we recommended. The department stated that it plans to address our recommendation to establish a policy and that the policy is targeted for completion in 2017.

VA Has Made Progress in Developing and Using a Comprehensive Inventory of Software Licenses

Federal agencies engage in thousands of licensing agreements annually. Effective management of software licenses can help organizations avoid purchasing too many licenses that result in unused software. In addition, effective management can help avoid purchasing too few licenses, which results in noncompliance with license terms and causes the imposition of additional fees. Federal agencies are responsible for managing their IT investment portfolios, including the risks from their major information system initiatives, in order to maximize the value of these investments to the agency. OMB developed a policy that requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses.
We previously identified seven elements that a comprehensive software licensing policy should address: identify clear roles, responsibilities, and central oversight authority within the department for managing enterprise software license agreements and commercial software licenses; establish a comprehensive inventory (at least 80 percent of software license spending and/or enterprise licenses in the department) by identifying and collecting information about software license agreements using automated discovery and inventory tools; regularly track and maintain software licenses to assist the agency in implementing decisions throughout the software license management life cycle; analyze software usage and other data to make cost-effective decisions; provide training relevant to software license management; establish goals and objectives of the software license management program; and consider the software license management life-cycle phases (i.e., requisition, reception, deployment and maintenance, retirement, and disposal phases) to implement effective decision making and incorporate existing standards, processes, and metrics.

We previously made recommendations to VA to (1) develop an agency-wide comprehensive policy for the management of software licenses that includes guidance for using analysis to better inform investment decision making, (2) employ a centralized software license management approach that is coordinated and integrated with key personnel, (3) establish a comprehensive inventory of software licenses using automated tools, (4) track and maintain a comprehensive inventory of software licenses using automated tools and metrics, (5) analyze agency-wide software license data to identify opportunities to reduce costs and better inform investment decision making, and (6) provide software license management training to appropriate personnel.
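At its core, the comprehensive inventory described above is an aggregation of per-machine results from automated discovery tools into installed counts that can be reconciled against purchased entitlements, surfacing both unused licenses (potential savings) and shortfalls (compliance risk). A minimal, hypothetical sketch of that reconciliation; the product names and counts are illustrative, not drawn from VA's actual inventory or tooling:

```python
from collections import Counter

def reconcile_licenses(discovered, entitlements):
    """Compare installed-software counts from automated scans against purchased licenses.

    discovered   -- list of (hostname, product) pairs reported by discovery tools
    entitlements -- dict mapping product name to number of licenses purchased
    Returns {product: surplus}, where surplus > 0 means unused licenses
    (a cost-saving opportunity) and surplus < 0 means more installations
    than licenses (a compliance risk).
    """
    installed = Counter(product for _host, product in discovered)
    products = set(installed) | set(entitlements)
    return {p: entitlements.get(p, 0) - installed.get(p, 0) for p in products}

# Illustrative scan results from three hosts.
scans = [("host-a", "OfficeSuite"), ("host-b", "OfficeSuite"),
         ("host-b", "CADTool"), ("host-c", "OfficeSuite")]
purchased = {"OfficeSuite": 5, "CADTool": 0}
print(reconcile_licenses(scans, purchased))
# OfficeSuite: surplus of 2 unused licenses; CADTool: 1 install with no license.
```

Commercial software asset management tools perform this same reconciliation at scale, which is what makes an automated inventory useful both for cost analysis and for license compliance.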
Consistent with our recommendation, in July 2015, VA issued a comprehensive software licensing policy that addressed weaknesses we previously identified. The department also issued a directive that documents VA’s software license management policy and responsibilities for central management of agency-wide software licenses, consistent with our recommendations. By implementing our recommendations, VA should be better positioned to consistently and cost-effectively manage software throughout the agency. In August 2017, the department also provided documentation showing that it had generated a comprehensive inventory of software licenses using automated tools for the majority of agency software license spending or enterprise-wide licenses. This inventory can serve to reduce redundant applications and help identify other cost saving opportunities. Further, the department implemented a solution to analyze agency-wide software license data, including usage and costs. This solution should allow VA to identify cost saving opportunities and inform future investment decisions. In addition, the department has provided information indicating that appropriate personnel receive software license management training. In conclusion, VA has made extensive use of numerous contractors and has obligated more than $1 billion for contracts that supported two of four VistA modernization programs that the department has initiated. VA has recently begun the fourth modernization program in which it plans to replace VistA with the same commercially available electronic health record system that is used by DOD. However, the department’s latest modernization effort is in the early stages of planning and is dependent on the system acquisition contract award in December 2017. VA’s completion and effective execution of plans will be essential to guiding this latest electronic health record modernization initiative to a successful outcome. 
Beyond VistA, the department continues to make progress on key FITARA-related initiatives. Although the department has made progress in the area of software licensing, additional actions in the areas of data center consolidation and optimization, as well as incremental system development can better position VA to effectively manage its IT. We plan to continue to monitor the department’s progress on these important activities. Chairman Hurd, Ranking Member Kelly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. GAO Contact and Staff Acknowledgments If you or your staffs have any questions about this testimony, please contact David A. Powner at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Mark Bird (Assistant Director), Jacqueline Mai (Analyst in Charge), Justin Booth, Chris Businsky, Rebecca Eyler, Paris Hawkins, Valerie Hopkins, Brandon S. Pettis, Jennifer Stavros-Turner, Eric Trout, Christy Tyson, Eric Winter, and Charles Youman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The use of IT is crucial to helping VA effectively serve the nation's veterans and, each year, the department spends billions of dollars on its information systems and assets. However, VA has faced challenges spanning a number of critical initiatives related to modernizing its major systems. To improve all major federal agencies' acquisitions and hold them accountable for reducing duplication and achieving cost savings, in December 2014 Congress enacted federal IT acquisition reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act, or FITARA). GAO was asked to summarize its previous and ongoing work regarding VA's history of efforts to modernize VistA, including past use of contractors, and the department's recent effort to acquire a commercial electronic health record system to replace VistA. GAO was also asked to provide an update on VA's progress in key FITARA-related areas, including (1) data center consolidation and optimization, (2) incremental system development practices, and (3) software license management. VA generally agreed with the information upon which this statement is based. What GAO Found For nearly two decades, the Department of Veterans Affairs (VA) has undertaken multiple efforts to modernize its health information system—the Veterans Health Information Systems and Technology Architecture (known as VistA). Two of VA's most recent efforts included the Integrated Electronic Health Record (iEHR) program, a joint program with the Department of Defense (DOD) intended to replace separate systems used by VA and DOD with a single system; and the VistA Evolution program, which was to modernize VistA with additional capabilities and a better interface for all users. VA has relied extensively on assistance from contractors for these efforts. VA obligated over $1.1 billion for contracts with 138 contractors during fiscal years 2011 through 2016 for iEHR and VistA Evolution.
Contract data showed that the 15 key contractors that worked on both programs accounted for $741 million of the funding obligated for system development, project management, and operations and maintenance to support the two programs (see figure). VA recently announced that it intends to change its VistA modernization approach and acquire the same electronic health record system that DOD is implementing. With respect to key FITARA-related areas, the department has reported progress on consolidating and optimizing its data centers, although this progress has fallen short of targets set by the Office of Management and Budget. VA has also reported $23.61 million in data center-related cost savings, yet does not expect to realize further savings from additional closures. In addition, VA's Chief Information Officer (CIO) certified the use of adequate incremental development for 10 of the department's major IT investments; however, VA has not yet updated its policy and process for CIO certification as GAO recommended. Finally, VA has issued a software licensing policy and has generated an inventory of its software licenses to inform future investment decisions. What GAO Recommends GAO has made multiple recommendations to VA aimed at improving the department's IT management. VA has generally agreed with the recommendations and begun taking responsive actions.
gao_GAO-18-414
Background The Bureau’s address canvassing operation updates its address list and maps, which are the foundation of the decennial census. An accurate address list both identifies all households that are to receive a notice by mail requesting participation in the census (by Internet, phone, or mailed- in questionnaire) and serves as the control mechanism for following up with households that fail to respond to the initial request. Precise maps are critical for counting the population in the proper locations—the basis of congressional apportionment and redistricting. Our prior work has shown that developing an accurate address list is challenging—in part because people can reside in unconventional dwellings, such as converted garages, basements, and other forms of hidden housing. For example, as shown in figure 1, what appears to be a single-family house could contain an apartment, as suggested by its two doorbells. During address canvassing, the Bureau verifies that its master address list and maps are accurate to ensure the tabulation for all housing units and group quarters is correct. For the 2010 Census, the address canvassing operation mobilized almost 150,000 field workers to canvass almost every street in the United States and Puerto Rico to update the Bureau’s address list and map data—and in 2012 reported the cost at nearly $450 million. The cost of going door-to-door in 2010, along with the emerging availability of imagery data, led the Bureau to explore an approach for 2020 address canvassing that would allow for fewer boots on the ground. Traditionally, the Bureau went door-to-door to homes across the country to verify addresses. This “in-field address canvassing” is a labor-intensive and expensive operation. To achieve cost savings, in September 2014 the Bureau decided to use a reengineered approach for building its address list for the 2020 Census and not go door-to-door (or “in-field”) across the country, as it has in prior decennial censuses. 
Rather, some areas (known as “blocks”) would only need a review of their address and map information using computer imagery and third-party data sources— what the Bureau calls “in-office” address canvassing procedures. According to the Bureau’s address canvassing operational plan, in-office canvassing had two phases: During the first phase, known as “Interactive Review,” Bureau employees use current aerial imagery to determine if areas have housing changes, such as new residential developments or repurposed structures, or if the areas match what is in the Bureau’s master address file. The Bureau assesses the extent to which the number of housing units in the master address file is consistent with the number of units visible in the current imagery. If the housing shown in the imagery matches what is listed in the master address file, then those areas are considered to be resolved or stable and would not be canvassed in-field. During the second phase, known as “Active Block Resolution,” employees would try to resolve coverage concerns identified during the first phase and verify every housing unit by virtually canvassing the entire area. As part of this virtual canvass, the Bureau would compare what is found in imagery to the master address file data and other data sources in an attempt to resolve any discrepancies. If Bureau employees still could not reconcile the discrepancies, such as housing unit count or street locations with what is on the address list, then they would refer these blocks to in-field address canvassing. However, in March 2017, citing budget uncertainty the Bureau decided to discontinue the second phase of in-office review for the 2020 Census. According to the Bureau, in order to ensure that the operations implemented in the 2018 End-to-End Test were consistent with operations planned for the 2020 Census, the Bureau added the blocks originally resolved during the second phase of in-office review back into the in-field workload for the test. 
The cancellation of Active Block Resolution is expected to increase the national in-field canvassing workload by 5 percentage points (from 25 percent to 30 percent). During in-field address canvassing, listers use laptop computers to compare what they see on the ground to what is on the address list and map. Listers confirm, add, delete, or move addresses to their correct map positions. At each housing unit, listers are trained to speak with a knowledgeable resident to confirm or update address data, ask about hidden housing units, confirm the housing unit location on the map (known as the map spot), and collect a map spot using global positioning systems (GPS). If no one is available, listers are to use house numbers and street signs to verify the address data. The data are transmitted electronically to the Bureau. The Census Bureau expects that the End-to-End Test for address canvassing will identify areas for improvement and changes that need to be made for the 2020 Census. Our prior work has shown the importance of robust testing. Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. In February 2017, we added the 2020 Census to GAO's High-Risk List because operational and other issues are threatening the Bureau's ability to deliver a cost-effective enumeration. We reported on concerns about the Bureau's capacity to implement innovative census-taking methods, uncertainties surrounding critical information technology systems, and the quality of the Bureau's cost estimates.
Underlying these issues are challenges in such essential management functions as the Bureau's ability to: collect and use real-time indicators of cost, performance, and schedule; follow leading practices for cost estimation, scheduling, risk management, and IT acquisition, development, testing, and security; and cost-effectively deal with contingencies including, for example, fiscal constraints, potential changes in design, and natural disasters. The Listers Generally Followed Procedures, but the Bureau Experienced Some Issues Reassigning Work, Estimating Workload and Lister Productivity, and Managing to Staffing Goals The Bureau completed in-field address canvassing as scheduled by September 29, 2017, canvassing approximately 340,400 addresses. Most of the listers we observed generally followed procedures. For example, 15 of 18 listers knocked on doors, and 16 of 18 looked for hidden housing units, which is important for establishing that address lists and maps are accurate and for identifying hard-to-count populations. Those procedures include taking such steps as: comparing the housing units they see on the "ground" to the housing units on the address list; knocking on all doors so they could speak with a resident to confirm the address (even if the address is visible on the mailbox or house) and to confirm that there are no other living quarters, such as a basement apartment; looking for "hidden housing units"; looking for group quarters, such as group homes or dormitories; and confirming the location of the housing unit on a map with GPS coordinates collected on the doorstep. To the extent procedures were not followed, it generally occurred when listers did not go up to the door and speak with a resident or take a map spot on the doorstep. Failure to follow procedures could adversely affect a complete count, as addresses could be missed or a group quarter could be misclassified as a residential address.
After we alerted the Bureau to our observations, the Bureau agreed, moving forward, to emphasize the importance of following procedures during training for in-field address canvassing. Some Listers Duplicated Each Other's Work Due to a Lack of Operational Procedures for Reassigning Work Address canvassing has tight time frames, so work needs to be assigned efficiently. Sometimes this means the Bureau needs to reassign work from one lister to another. During address canvassing, the Bureau discovered that reassigned census blocks sometimes would appear in both the new and the original listers' work assignments. In some cases, this led to blocks being worked more than once, which decreased efficiency, increased costs, and could create confusion and credibility issues when two different listers visit a house. According to Bureau procedures, listers were instructed to connect to the Bureau's Mobile Case Management (MCM) system to download work assignments (address blocks) and to transmit their completed work at the beginning and end of the work day, but not during the work day. Thus, during the work day, they were unaware when unworked blocks had been reassigned to another lister. Bureau officials also told us that the Listing and Mapping Application (LiMA) software used to update the address file and maps was supposed to have the functionality to prevent blocks from being worked more than once, but this functionality was not developed because of budget cuts. For 2020, Bureau officials told us they plan to create operational procedures for reassigning work. According to Bureau officials, they plan to require supervisors to contact the original lister when work is reassigned. We have requested a copy of those procedures; however, the Bureau has not finalized them. Standards for Internal Control in the Federal Government (Standards for Internal Control) call for management to design control activities, such as policies and procedures, to achieve objectives.
Finalizing these procedures should help prevent blocks from being canvassed more than once. The Bureau Has Not Evaluated Workload, Productivity Rates, and Staffing Assumptions for Address Canvassing The Bureau conducts tests under census-like conditions, in part, to verify 2020 Census planning assumptions, such as workload, how many houses per hour a lister can verify (also known as a lister's productivity rate), and how many people the Bureau needs to hire for an operation. Moreover, one of the objectives of the test is to validate that the operations being tested are ready at the scale needed for the 2020 Census. For the 2018 End-to-End Test, the Bureau completed in-field address canvassing on time at two sites and early at one site, despite workload increases at all three test sites and hiring shortfalls at two sites. The Bureau credits this success to better than expected productivity. As the Bureau reviews the results of address canvassing, evaluating the factors that affected workload, productivity rates, and staffing and making adjustments to its estimates, if necessary, before the 2020 Census would help the Bureau ensure that address canvassing has the appropriate number of staff and equipment to complete the work in the required time frame. Workload For the 2020 Census, the Bureau estimates it will have to send 30 percent of addresses to the field for listers to verify. However, at the three test sites, the workload was higher than this estimate (see table 1). At one test site, the percent of addresses verified through in-field address canvassing was 76 percent, or 46 percentage points more than the Bureau's expected 2020 Census in-field address canvassing workload estimate of 30 percent. Bureau officials told us that the 30 percent in-field workload estimate is a national average and is not specific to any of the three test sites.
Prior to the test, officials said that the Bureau also knew that the West Virginia site was assigning new addresses to some of the test site's housing units due to a local government emergency 911 address conversion and that the in-field workload would be greater in West Virginia when compared to the other test sites. We requested documentation for the Bureau's original estimate that 30 percent of the 133.8 million expected addresses would be canvassed in-field for the 2020 Census. However, the Bureau was unable to provide us with documentation to support how it arrived at the 30 percent estimate. Instead, the Bureau provided us with a November 2017 methodology document that showed three in-field address canvassing workload scenarios, whereby between 41.9 and 45.1 percent of housing units would need to go to the field for address canvassing. The three scenarios consider a range of stability in the address file as well as different workload estimates for in-field follow-up. At 30 percent, the Bureau would need to canvass about 40.2 million addresses; however, at 41.9 and 45.1 percent, the Bureau would need to canvass between 56 million and 60.4 million addresses, respectively. According to Bureau officials, the Bureau is continuing to assess whether changes to its in-office address canvassing procedures would be able to reduce the in-field address canvassing workload to 30 percent while maintaining address quality. However, Bureau officials did not provide us with documentation to show how the in-field address canvassing workload would be reduced because the proposed changes were still being reviewed internally. Workload for address canvassing directly affects cost: the greater the workload, the more people, as well as laptop computers, needed to carry out the operation. We found that the 30 percent workload threshold is what is reflected in the December 2017 updated 2020 Census cost estimate that was used to support the fiscal year 2019 budget request.
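The workload scenarios above translate directly into address counts. A small sketch of that arithmetic (the 133.8 million address base and the percentage scenarios come from the report; the code itself is purely illustrative):

```python
# In-field address canvassing workload under the scenarios discussed above:
# 30 percent is the estimate reflected in the 2020 Census cost estimate,
# while the Bureau's November 2017 methodology document showed scenarios of
# 41.9 to 45.1 percent.
TOTAL_ADDRESSES = 133.8e6  # expected 2020 Census addresses

def in_field_workload(share):
    """Number of addresses that would go to in-field canvassing at a given share."""
    return TOTAL_ADDRESSES * share

for share in (0.30, 0.419, 0.451):
    millions = in_field_workload(share) / 1e6
    print(f"{share:.1%} of addresses -> {millions:.1f} million in-field")
```

Exact figures round slightly differently from the report's, which presents approximate totals.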
Thus, if the 30 percent threshold is not achieved, then the in-field canvassing workload will likely increase for the 2020 Census and the Bureau would be at risk of exceeding its proposed budget for the address canvassing operation. Standards for Internal Control call for organizations to use quality information to achieve their objectives. Thus, continuing to evaluate and finalize workload estimates for in-field address canvassing with the most current information will help ensure the Bureau is well-positioned to conduct address canvassing for the 2020 Census. For example, according to Bureau officials, preliminary workload estimates will need to be delivered by January 2019 for hiring purposes, and the final in-field workload numbers for address canvassing will need to be determined by June 2019 for the start of address canvassing, which is set to begin in August 2019. Moreover, by February 2019 the Bureau's schedule calls for it to determine how many laptops will be needed to conduct 2020 Census address canvassing. Lister Productivity At the test sites, listers were substantially more productive than the Bureau expected. The expected production rate is defined as the number of addresses expected to be completed per hour, and it affects the cost of the address canvassing operation. This rate includes time for actions other than actually updating addresses, such as travel time. In the 2010 Census, the rates reflected different geographic areas, and the country was subdivided into three areas: urban/suburban, rural, and very rural. According to Bureau officials, for the 2020 Census the Bureau will have variable production rates based on geography, similar to the design used in the 2010 Census. The Bureau told us it has not finalized the 2020 Census address canvassing production rates. Table 2 shows the expected and actual productivity rates (addresses per hour) for the in-field address canvassing operation at all three test sites.
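As discussed above, the production rate (addresses completed per hour) and the workload together drive how many listers, staff hours, and laptops an operation needs. A minimal sketch of that relationship, using illustrative numbers rather than the Bureau's actual planning figures:

```python
import math

# How workload and productivity translate into staffing needs. All numbers
# below are illustrative, not the Bureau's actual planning assumptions.
def listers_needed(addresses, addresses_per_hour, hours_per_lister):
    """Listers (and, roughly, laptops) needed to finish the workload on time."""
    total_hours = addresses / addresses_per_hour  # staff hours charged to the operation
    return math.ceil(total_hours / hours_per_lister)

# e.g., 40.2 million addresses at 8 addresses per hour, 250 hours per lister:
print(listers_needed(40.2e6, 8, 250))   # 20100

# Higher productivity (12 addresses per hour) shrinks the staffing requirement:
print(listers_needed(40.2e6, 12, 250))  # 13400
```

The sensitivity shown in the second call is why evaluating actual versus expected productivity from the test matters: a modest change in the rate shifts the staffing and laptop procurement totals substantially.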
To ensure address canvassing for the test was consistent with the 2020 Census, Bureau officials told us they included the blocks resolved during the now-discontinued second phase of in-office review into the in-field workload for the test. The Bureau attributed the greater productivity to this discontinued second phase. Bureau officials told us that they believe listers spent less time updating those blocks because they had already been resolved, and any necessary changes were already incorporated. Moreover, while benefiting from the second phase of in-office address canvassing may be one explanation for why listers were more productive, there could be other reasons as well, such as travel time and geography. Bureau officials told us that they are unable to evaluate the differences in expected versus actual productivity for blocks added to the workload as a result of the discontinued second phase because of limitations with the data. Standards for Internal Control require that organizations use quality information to achieve their objectives. Therefore, continuing to evaluate other factors from the 2018 End-to-End Test that may have increased or could potentially decrease productivity will be important for informing lister productivity rates for 2020, as productivity affects the number of listers needed to carry out the operation, the number of staff hours charged to the operation, and the number of laptops to be procured. Hiring For the 2018 End-to-End Test address canvassing operation, the Bureau hired fewer listers than it assumed it needed at two sites and hired more at the other site. In West Virginia, 60 percent of the required field staff was hired, and in Washington, 74.5 percent of the required field staff was hired. Nevertheless, the operation finished on schedule at both these sites. In contrast, in Rhode Island, the Bureau hired 112 percent of the required field staff and finished early.
According to Bureau officials, both the West Virginia and Washington state test sites started hiring field staff later than expected because of uncertainty surrounding whether the Bureau would have sufficient funding to open all three test sites for the 2018 End-to-End Test. The decision, when made, was to open all three sites for the address canvassing operation only, and it came late; Bureau officials told us that once they were behind in hiring, they were never able to catch up because of low unemployment rates and the short duration of the operation. According to Bureau officials, their approach to hiring for the 2018 End-to-End Test was similar to that used for the 2010 and 2000 Censuses. In both censuses, the Bureau's goal was to recruit and hire more workers than it needed because of immutable deadlines and attrition. After the 2010 Census, we reported that the Bureau had over-recruited; conversely, for the 2000 Census, the Bureau had recruited in the midst of one of the tightest labor markets in three decades. Thus, we recommended, and the Bureau agreed, to evaluate current economic factors that are associated with and predictive of employee interest in census work, such as national and regional unemployment levels, and use these available data to determine the potential temporary workforce pool and adjust its recruiting approach. The Bureau implemented this recommendation and used unemployment and 2010 Census data to determine a base recruiting goal at both the Los Angeles, California, and Houston, Texas, 2016 census test sites. Specifically, the recruiting goal for Los Angeles was reduced by 30 percent. Bureau officials told us that the Bureau continues to gather staffing data from the 2018 End-to-End Test that will be important to consider looking forward to 2020.
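The adjustment the Bureau made for the 2016 test, reducing the Los Angeles recruiting goal by 30 percent based on unemployment data, can be sketched as a fractional adjustment to a base goal. The function and figures below are illustrative, not the Bureau's actual model:

```python
# Adjusting a base recruiting goal using labor-market conditions, in the
# spirit of the 2016 test adjustment described above. The base goal and the
# adjustment factor here are hypothetical examples.
def adjusted_recruiting_goal(base_goal, adjustment):
    """Apply a fractional adjustment; e.g., adjustment=-0.30 cuts the goal 30 percent."""
    if not -1.0 <= adjustment <= 1.0:
        raise ValueError("adjustment must be a fraction between -1 and 1")
    return round(base_goal * (1 + adjustment))

# A 30 percent reduction of a hypothetical 10,000-worker base goal:
print(adjusted_recruiting_goal(10000, -0.30))  # 7000

# A tight labor market might instead argue for raising the goal:
print(adjusted_recruiting_goal(10000, 0.10))   # 11000
```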
Although address canvassing generally finished on schedule even while short-staffed, Bureau officials told us they are carefully monitoring recruiting and hiring data to ensure they have sufficient staff for the test's next census field operation, non-response follow-up, when census workers go door-to-door to follow up with housing units that have not responded. Non-response follow-up is set to begin in May 2018. According to test data as of March 2018, the Bureau is short of its recruiting goal for this operation, which is being conducted in Providence County, Rhode Island. The Bureau's goal is to recruit 5,300 census workers; as of March 2018, it had recruited only 2,732 qualified applicants to fill 1,166 spots for training and deploy 1,049 census workers to conduct non-response follow-up. Bureau officials told us they believe that low unemployment is making it difficult to meet recruiting goals in Providence County, Rhode Island, but they are confident they will be able to hire sufficient staff without having to increase pay rates. Recruiting and retaining sufficient staff to carry out operations as labor-intensive as address canvassing and non-response follow-up for the 2020 Census is a huge undertaking with implications for cost and accuracy. Therefore, striking the right staffing balance for the 2020 Census is important for ensuring deadlines are met and costs are controlled. Resolving Challenges from the Address Canvassing Test Will Better Position the Bureau for the 2020 Census The Bureau Does Not Have Procedures to Ensure All Collected Address Canvassing Data Are Retained Bureau officials told us that during the test, 11 out of 330 laptop computers did not properly transmit address and map data collected for 25 blocks. The lister-collected address file and map data are supposed to be electronically transmitted from the listers' laptops to the Bureau's data processing center in Jeffersonville, Indiana.
The data are encrypted and remain on the laptop until the laptops are returned to the Bureau, where the encrypted data are deleted. Prior to learning that not all data had properly transmitted off the laptops, data on seven of the laptops were deleted. Data on the remaining four laptops were still available. In Providence, Rhode Island, where the full test will take place, the Bureau recanvassed blocks where data were lost to ensure that the address and map information for non-response follow-up was correct. Recanvassing blocks increases costs and can lead to credibility problems for the Bureau when listers visit a home twice. Going into address canvassing for the End-to-End Test, Bureau officials said they knew there was a problem with the LiMA software used to update the Bureau's address lists and maps. Specifically, address and map updates would not always transfer when a lister transmitted their completed work assignments from the laptop to headquarters. Other census surveys using LiMA had also encountered the same software problem. Moreover, listers were not aware that data had not transmitted because there was no system-generated warning. Bureau officials are working to fix the LiMA software problem, but told us that the problem has been persistent across other census surveys that use LiMA and they are not certain it will be fixed. Bureau officials told us that prior to the start of address canvassing they created an alert report to notify Bureau staff managing the operation at headquarters if data were not properly transmitted. When transmission problems were reported, staff were supposed to remotely retrieve the data that were not transmitted. This workaround was designed to safeguard the data but, according to officials, was not used.
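The workaround described above amounts to reconciling what a lister completed against what the processing center received. A minimal sketch of such a one-to-one check, using hypothetical block identifiers and function names:

```python
# A one-to-one reconciliation between work completed on a laptop and data
# received at the processing center, in the spirit of the alert report the
# Bureau created for the test. Block identifiers here are hypothetical.
def untransmitted_blocks(completed_on_laptop, received_at_center):
    """Return block IDs completed in the field but never received centrally."""
    return sorted(set(completed_on_laptop) - set(received_at_center))

completed = ["RI-0101", "RI-0102", "RI-0103"]
received = ["RI-0101", "RI-0103"]

missing = untransmitted_blocks(completed, received)
if missing:
    # In the Bureau's design, this condition would trigger an alert so staff
    # could remotely retrieve the data before it is deleted from the laptop.
    print(f"ALERT: {len(missing)} block(s) not transmitted: {missing}")
```

A check of this kind only protects the data if someone acts on its output; as the report notes, the Bureau's workaround existed for the test but was not used.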
Bureau officials told us that they do not know whether the workaround was not used because the alert reports were not viewed by responsible staff or because the alert report to notify the Bureau staff managing the operation was not triggered. Bureau officials told us they recognize the importance of following procedures to monitor alert reports, and acknowledge that the loss of data on seven of the laptops may have been avoided had the procedures requiring that alert reports be triggered and monitored been followed; however, officials did not know why the procedures were not followed. For 2020, if the software problem is not resolved, officials said the Bureau plans to create two new alert reports to monitor the transmission of data. One report would be triggered when the problem occurs, and a second report would capture a one-to-one match between data on the laptop and data transmitted to the data center so that discrepancies would be immediately obvious. While these new reports should help ensure that Bureau staff are alerted when data have not properly transmitted, the Bureau has not determined and addressed why the procedures requiring that alert reports be triggered and then reviewed by Bureau staff did not work as intended. Standards for Internal Control require that organizations safeguard data and follow policies and procedures to achieve their objectives. Thus, either fixing the LiMA software problem or, if the software problem cannot be fixed, determining and addressing why the procedures requiring that alert reports be triggered and monitored were not followed would position the Bureau to help prevent future data losses. More Useful and Accurate Monitoring Data for Field Supervisors Would Strengthen Management of Operations To effectively manage address canvassing, the Bureau needs to be able to monitor the operation's progress in near real time.
Operational issues, such as listers not working assigned hours or falling behind schedule, need to be resolved quickly because of the tight time frames of the address canvassing and subsequent operations. During the address canvassing test, the Bureau encountered several challenges that hindered its efforts to efficiently monitor lister activities as well as the progress of the address canvassing operation. System Alerts Were Not Consistently Used by Supervisors The Bureau provides data-driven tools for census field supervisors (CFS) to manage listers, including system alerts that identify issues that require the supervisor to follow up with a lister. For the address canvassing operation, the system could generate 14 action codes that covered a variety of operational issues, such as unusually high or low productivity (which may be a sign of fraud or failure to follow procedures), and administrative issues, such as compliance with overtime and completion of expense reports and time cards. During the operation, over 8,250 alerts were sent to CFSs, or about 13 alerts per day per CFS. Each alert requires the CFS to take action and then record how the alert was resolved. CFSs told us and the Bureau during debriefing sessions that they believed many of the administrative alerts were erroneous, and they dismissed them. For example, during our site visit, one CFS showed us an alert that incorrectly identified that a timecard had not been completed. The CFS then showed us that the lister's timecard had indeed been properly completed and submitted. CFSs we spoke to said that they often dismissed alerts related to expense reports and timecards and did not pay attention to them or manage them. Bureau officials reported that one CFS was fired for not using the alerts to properly manage the operation. To assist supervisors, these alerts need to be reliable and properly used. Bureau officials said that they examined alerts for errors after we told them about our observation.
They reported that they did not find any errors in the alerts. They believe that CFSs may not fully understand that the alerts stay active until they are marked as resolved by the CFS. For example, if a CFS gets an alert that a lister has not completed a timecard, the alert will remain active until the CFS resolves it by stating the timecard was completed. The Bureau's current CFS manual does not address that, by the time a CFS sees an alert, a lister may have already taken action to resolve it. Because this was a recurring situation, CFSs told us they had a difficult time managing the alerts. Standards for Internal Control call for an agency to use quality information to achieve objectives. Bureau officials acknowledge that it is a problem that some CFSs view the alerts as erroneous and told us they plan to address the importance of alerts in training. We spoke to Bureau officials about making the alerts more useful to CFSs, such as by differentiating between critical and noncritical alerts and streamlining alerts, perhaps by combining some of them. Bureau officials told us they would monitor the alerts during the 2018 End-to-End Test's non-response follow-up operation and make adjustments if appropriate. However, while the Bureau told us it will monitor alerts for the non-response follow-up operation, it does not have a plan for how it will examine and make alerts more useful. Ensuring alerts are properly followed up on is critical to the oversight and management of an operation. If CFSs view the alerts as unreliable, they could miss key indicators of fraud, such as unusually high or low productivity or an unusually high or low number of miles driven. Moreover, monitoring overtime alerts and the submission of daily time cards and expense reports is also important to ensure that overtime is appropriately approved before it is worked and that listers get paid on time.
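Two behaviors discussed above, alerts remaining active until a supervisor marks them resolved and the suggestion to differentiate critical from noncritical alerts, can be sketched with a minimal alert model. The action codes and severity groupings below are hypothetical, not the Bureau's actual 14 codes:

```python
# A minimal alert model: alerts stay active until explicitly resolved, and
# critical alerts (possible fraud indicators) are separated from
# administrative ones. Codes and severity groupings are hypothetical.
CRITICAL = {"HIGH_PRODUCTIVITY", "LOW_PRODUCTIVITY", "HIGH_MILEAGE"}

class Alert:
    def __init__(self, code):
        self.code = code
        self.resolved = False  # remains active until the supervisor resolves it

    @property
    def critical(self):
        return self.code in CRITICAL

    def resolve(self):
        self.resolved = True

alerts = [Alert("HIGH_PRODUCTIVITY"), Alert("TIMECARD_MISSING")]
alerts[1].resolve()  # the timecard was actually submitted

# A supervisor view that surfaces only unresolved critical alerts:
active_critical = [a.code for a in alerts if a.critical and not a.resolved]
print(active_critical)  # ['HIGH_PRODUCTIVITY']
```

Separating the views this way is one plausible response to the problem the report describes: administrative alerts that supervisors consider noise would no longer bury the fraud indicators.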
The Bureau’s Management Dashboard Did Not Always Display Accurate Information

Another tool the Bureau uses to monitor operations is its Unified Tracking System (UTS), a management dashboard that combines data from a variety of Census systems, bringing the data to one place where users can run or create reports. It was designed to track metrics such as the number and percentage of blocks assigned and blocks completed, as well as the actual expenditures of an operation compared to the budgeted expenditures. However, information in UTS was not always accurate during address canvassing. For example, UTS did not always report the correct number of addresses assigned and completed by site. As a result, Bureau managers reported they did not rely on UTS and instead used data from the source systems that fed into it. Bureau officials agreed that inaccurate data are a problem and that this workaround was inefficient, as users had to take extra time to go to multiple systems to get the correct data. Bureau officials reported problems importing information from the feeder systems into UTS because of data mismatches. They said that address canvassing event codes were not processed sequentially, as they should have been, which led to inaccurate reporting. Bureau officials told us that they did not specify that the codes needed to be processed in chronological order as part of the requirements for UTS. Bureau officials said UTS passed the requisite readiness reviews and tests. However, Bureau officials also acknowledged that some of these problems could have been caught by exception testing, which was not done prior to production. To resolve this issue for 2020, Bureau officials stated they are developing new requirements for UTS to automatically consider the chronological order of event codes. The Bureau told us it is working on these UTS requirements and will provide us with documentation when they are complete.
They also said the Bureau plans to implement a process that compares field management reports with UTS reports to help ensure that the reports have the same definitions and are reporting accurate information. Standards for Internal Control call for an organization’s data to be complete and accurate and to be processed into quality information to achieve its objectives. Thus, finalizing UTS requirements for the address canvassing reporting should help increase efficiency for the 2020 Census by avoiding time-consuming workarounds.

The Bureau Does Not Have Documented Procedures to Address Broadband Internet Service Coverage Gaps

The Bureau has taken significant steps to use technology to reduce census costs. These steps include using electronic systems to transmit listers’ assignments and address and map data. However, during the address canvassing test, several listers and CFSs at the three test sites experienced problems with Internet connections, primarily during training. The West Virginia site, which was more rural than the other sites, experienced the most problems with Internet connectivity. All six West Virginia CFSs reported Internet connectivity problems during the operation. As a workaround, CFSs told us that a couple of their listers transmitted their work assignments from libraries where they could access the Internet. Bureau officials stated that the laptops in the 2018 End-to-End Test only used two broadband Internet service providers, which may have contributed to some of the Internet access issues. Bureau officials added that despite the reported Internet connectivity issues, the 2018 End-to-End Test for address canvassing finished on schedule and without any major problems. While this might be true for the test, we have previously reported that minor problems can become big challenges when the census scales up to the entire nation.
Therefore, it is important that these issues are resolved before August 2019, when in-field address canvassing for the 2020 Census is set to begin. The Bureau is analyzing the cellular network coverage across all 2020 Census areas using coverage maps and other methods to determine which carrier is appropriate (including a backup carrier) for geographic areas where network coverage is limited. According to Bureau officials, they anticipate identifying the cellular carriers for each of its 248 area census offices by the summer of 2018. The officials said they are considering both national and regional carriers to provide service in some geographic areas because the best service provider in a certain geographic area may not be one of the national providers, but a regional provider. In those cases, listers and other staff in those areas will receive devices with the regional carrier. According to Bureau officials, for the 2020 Census, the ability to access multiple carriers should provide field staff with better connectivity around the country. We also found that there was no guidance for listers and CFSs on what to do if they experienced Internet connectivity problems and were unable to access the Internet. Bureau officials told us that staff in the field can use different methods to access the Internet, such as home wireless networks or mobile hotspots at libraries or coffee shops, to transmit data. However, the Bureau did not provide such instructions to listers. In addition, the Bureau does not define what constitutes a secure public Internet connection. Ensuring data are safeguarded is important because census data are confidential. Bureau officials told us that the Bureau plans to provide instructions to field staff on what to do if they are unable to access census systems and what constitutes a secure Internet connection for the next 2018 End-to-End Test field operation, nonresponse follow-up.
However, the Bureau has not finalized or documented these instructions. Standards for Internal Control call for management to design control activities, such as providing instructions to employees, to achieve objectives. Finalizing these instructions to field staff will help ensure listers have complete information on how to handle problems with Internet connectivity and that data are securely transmitted.

The Bureau Has Not Identified Alternative Sites for Listers to Take Online Training When Access to the Internet is Unavailable

Some listers had difficulty accessing the Internet to take online training for address canvassing. This is the first decennial census for which the Bureau is using online training; in previous decennials, training was instructor-led in a classroom. According to the Bureau, in addition to the Bureau-provided laptop, listers also needed a personal home computer or laptop and Internet access at their home in order to complete the training. However, while the Bureau reported that listers had access to a personal computer to complete the training, we found some listers did not have access to the Internet at their home and were forced to find workarounds to access the training. According to American Community Survey data from 2015, among all households, 77 percent had a broadband Internet subscription. Bureau officials told us they are aware that not all households have access to the Internet and that the Bureau’s field division is working on back-up plans for accessing online training. Specifically, Bureau officials told us that for 2020 they plan to identify areas of the country that could have connectivity issues and to identify alternative locations, such as libraries or community centers, where Internet connections are available to ensure that all staff have access to training. However, they have not finalized those plans to identify locations for training sites.
Standards for Internal Control call for management to design control activities, such as having plans in place, to achieve objectives. Finalizing these plans to identify alternative training locations will help ensure listers have a place to access training.

Conclusions

The Bureau’s re-engineered approach for address canvassing shows promise for controlling costs and maintaining accuracy. However, the address canvassing operation in the 2018 End-to-End Test identified the need to reexamine assumptions and make some procedural and technological improvements. For example, at a time when plans for in-field address canvassing should be almost finalized, the Bureau is in the process of evaluating workload and productivity assumptions to ensure sufficient staff are hired and that enough laptop computers are procured. Moreover, Bureau officials have not finalized (1) procedures for reassigning work from one lister to another to prevent the unnecessary duplication of work assignments, (2) instructions for using the Internet when connectivity is a problem to ensure listers have access to training and the secure transmission of data to and from the laptops, and (3) plans for alternate training locations. To ensure address and map data are not lost during transmission, Bureau officials will also need to either (1) fix the problem with the LiMA software used to update the address and map files or (2) determine and address why procedures for triggering and monitoring alert reports were not followed. Finally, the Bureau has made progress in using data-driven technology to manage address canvassing operations. However, ensuring data used by supervisors to oversee and monitor operations are both useful and accurate will help field supervisors take appropriate action to address supervisor alerts and will help managers monitor the real-time progress of the address canvassing operation. With little time remaining, it will be important to resolve these issues.
Making these improvements will better ensure that address canvassing for the actual enumeration, beginning in August 2019, fully functions as planned and achieves desired results.

Recommendations for Executive Action

We are making the following seven recommendations to the Department of Commerce and the Census Bureau:

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau continues to evaluate and finalize workload estimates for in-field address canvassing, evaluates the factors that affected productivity rates during the 2018 End-to-End Test, and, if necessary, makes changes to workload and productivity assumptions before the 2020 Census in-field address canvassing operation to help ensure that assumptions that affect staffing and the number of laptops to be procured are accurate. (Recommendation 1)

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau finalizes procedures for reassigning blocks to prevent the duplication of work. (Recommendation 2)

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau finalizes backup instructions for the secure transmission of data when the Bureau’s contracted mobile carriers are unavailable. (Recommendation 3)

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau finalizes plans for alternate training locations in areas where Internet access is a barrier to completing training. (Recommendation 4)

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau takes action to fix the software problem that prevented the successful transmission of data or, if that cannot be fixed, determines and addresses why procedures for triggering and monitoring alert reports were not followed. (Recommendation 5)

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau develops a plan to examine how to make CFS alerts more useful so that CFSs take appropriate action, including how to handle alerts that a CFS determines are no longer valid because of timing differences. (Recommendation 6)

The Secretary of Commerce should ensure that the Director of the U.S. Census Bureau finalizes UTS requirements for address canvassing reporting to ensure that the data used by census managers who are responsible for monitoring real-time progress of address canvassing are accurate before the 2020 Census. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Commerce. In its written comments, reproduced in appendix I, the Department of Commerce agreed with our recommendations. The Census Bureau also provided technical comments that we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we are sending copies of this report to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Acting Director of the U.S. Census Bureau, and interested congressional committees. The report also will be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Commerce

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Lisa Pearson, Assistant Director; Kate Wulff, Analyst-in-Charge; Mark Abraham; Devin Braun; Karen Cassidy; Robert Gebhart; Richard Hung; Kirsten Lauber; Krista Loose; Ty Mitchell; Kayla Robinson; Kate Sharkey; Stewart Small; Jon Ticehurst; and Timothy Wexler made key contributions to this report.
Why GAO Did This Study

The success of the decennial census depends in large part on the Bureau's ability to locate every household in the United States. To accomplish this monumental task, the Bureau must maintain accurate address and map information for every location where a person could reside. For the 2018 End-to-End Test, census workers known as listers went door-to-door to verify and update address lists and associated maps in selected areas of three test sites—Bluefield-Beckley-Oak Hill, West Virginia; Pierce County, Washington; and Providence County, Rhode Island. GAO was asked to review in-field address canvassing during the End-to-End Test. This report determines whether key address listing activities functioned as planned during the End-to-End Test and identifies any lessons learned that could inform pending decisions for the 2020 Census. To address these objectives, GAO reviewed key documents, including test plans and training manuals, as well as workload, productivity, and hiring data. At the three test sites, GAO observed listers conducting address canvassing.

What GAO Found

The Census Bureau (Bureau) recently completed in-field address canvassing for the 2018 End-to-End Test. GAO found that field staff known as listers generally followed procedures when identifying and updating the address file; however, some address blocks were worked twice by different listers because the Bureau did not have procedures for reassigning work from one lister to another while listers work offline. Bureau officials told GAO that they plan to develop procedures to avoid duplication, but these procedures have not been finalized. Duplicating work decreases efficiency and increases costs. GAO also found differences between actual and projected data for workload, lister productivity, and hiring. For the 2020 Census, the Bureau estimates it will have to verify 30 percent of addresses in the field.
However, at the test sites, the actual workload ranged from 37 to 76 percent of addresses. Bureau officials told GAO the 30 percent was a nationwide average and not site specific; however, the Bureau could not provide documentation to support the 30 percent workload estimate. At all three test sites, listers were significantly more productive than expected, possibly because a design change provided better quality address and map data in the field, according to the Bureau. Hiring, however, lagged behind Bureau goals. For example, at the West Virginia site hiring was only at 60 percent of its goal. Bureau officials attributed the shortfall to a late start and low unemployment rates. Workload and productivity affect the cost of address canvassing. The Bureau has taken some steps to evaluate factors affecting its estimates, but continuing to do so would help the Bureau refine its assumptions to better manage the operation's cost and hiring. Listers used laptops to connect to the Internet and download assignments. They worked offline and went door-to-door to update the address file, then reconnected to the Internet to transmit their completed assignments. Bureau officials told GAO that during the test 11 out of 330 laptops did not properly transmit address and map data collected for 25 blocks. Data were deleted on 7 laptops. Because the Bureau had known there was a problem with software used to transmit address data, it created an alert report to notify Bureau staff if data were not properly transmitted. However, Bureau officials said that either responsible staff did not follow procedures to look at the alert reports or the reports were not triggered. The Bureau is working to fix the software problem and develop new alert reports, but has not yet determined and addressed why these procedures were not followed. The Bureau's data management reporting system did not always provide accurate information because of a software issue.
The system was supposed to pull data from several systems to create a set of real-time cost and progress reports for managers to use. Because the data were not accurate, Bureau staff had to rely on multiple systems to manage address canvassing. The Bureau agreed not only that inaccurate data are problematic but also that creating workarounds is inefficient. The Bureau is developing new requirements to ensure data are accurate, but these requirements have not been finalized.

What GAO Recommends

GAO is making seven recommendations to the Department of Commerce and Bureau, including to: (1) finalize procedures for reassigning work, (2) continue to evaluate workload and productivity data, (3) fix the software problem or determine and address why procedures were not followed, and (4) finalize report requirements to ensure data are accurate. The Department of Commerce agreed with GAO's recommendations, and the Bureau provided technical comments that were incorporated, as appropriate.
Background

Drug manufacturers seeking to develop and receive approval to market an orphan drug go through two separate FDA processes. The drug manufacturer may first apply for orphan designation, in which FDA determines whether the drug is eligible and meets the criteria for designation. The manufacturer may then apply to FDA for approval to market the orphan drug.

Orphan Designation Eligibility and FDA's Process for Granting the Designation

There are a variety of circumstances under which a manufacturer's drug is eligible for orphan designation. A drug is eligible for orphan designation when it is intended to treat a disease that affects fewer than 200,000 people in the United States. A drug is also eligible for orphan designation when it is intended to treat a disease that affects 200,000 or more people in the United States and there is no reasonable expectation of recovering the cost of drug development and marketing from U.S. sales. In addition, a drug that is intended to treat a specific population of a non-rare disease (known as an orphan subset) is eligible for orphan designation when a property of the drug (e.g., toxicity profile, mechanism of action, or prior clinical experience) limits its use to this subset of the population. FDA's Office of Orphan Products Development (OOPD) administers the orphan drug program and evaluates orphan designation applications. When a drug manufacturer submits a designation application, OOPD receives and assigns it to a reviewer based on factors such as prior experience related to a particular rare disease and workload across OOPD reviewers. The drug manufacturer's application is required to include such items as a description of the rare disease, documentation of the number of people affected by the disease in the United States (the population estimate), and a scientific rationale explaining why the drug may effectively treat the disease.
The manufacturer can submit an orphan designation application at any point prior to submitting a marketing application. When making an orphan designation decision, OOPD guidance requires reviewers to evaluate the manufacturer's application and record information about the drug and disease on a standard review template. OOPD reviewers are also expected to independently verify certain information included in the application. For example, OOPD reviewers may review independent sources to verify the population estimate provided by the manufacturer, including comparing the population estimate against prior related orphan designations. Once the OOPD reviewer's decision is recorded on the standard review template, it undergoes a secondary review that has typically been completed by the Director of the Orphan Drug Designation Program. This secondary review is intended to ensure the quality of the application review and the consistency of the review across all related designation applications. There are three possible outcomes from the designation review: (1) the orphan designation is granted, (2) the application is pending with the manufacturer due to OOPD finding it deficient, or (3) the orphan designation is denied. OOPD sends the drug manufacturer a decision letter detailing the outcome of its review. If the application is pending or denied, the decision letter describes OOPD's concerns with granting the orphan designation (e.g., insufficient evidence to support its scientific rationale) and the manufacturer may address these concerns either in an amendment to the original application (for pending status) or as a new application (for denied status). (See fig. 1.)

FDA's Marketing Approval Process

FDA's marketing approval process is the same for all drugs, regardless of orphan status. (See fig. 2.)
Once a manufacturer has assessed the safety and efficacy of a new drug through preclinical testing and clinical trials, it may apply to FDA for approval to market the drug in the United States. To do so, a drug manufacturer submits its research in a new drug application (NDA) or biologic license application (BLA) to FDA, which then reviews and approves the drug for marketing if it is shown to be safe and effective for its intended use. The two FDA centers responsible for reviewing applications to market drugs in the United States are the Center for Biologics Evaluation and Research (CBER) and the Center for Drug Evaluation and Research (CDER). Upon completing its review of a marketing application, FDA will send an action letter with its determination to the drug manufacturer. The time elapsed from the date FDA receives the application to the date it issues an action letter informing the drug manufacturer of the agency’s decision is defined as one review cycle. If FDA does not approve the marketing application and the drug manufacturer resubmits the application, a new review cycle begins. When FDA approves a drug manufacturer’s marketing application, it approves the drug to treat one or more specific uses, known as indications. The approved indication is based on the clinical trial data provided in the manufacturer’s marketing application and is typically narrower than the orphan designation, which is based on early drug development data for the drug’s intended use in the rare disease. For example, one drug was granted orphan designation for the treatment of cystic fibrosis (the rare disease), while the drug’s marketing approval was for the treatment of cystic fibrosis in patients 12 years and older who have a certain genetic mutation (the indication). The orphan drug marketing exclusivity incentive (a period of protection from competition) only applies to the drug’s approved indication. 
OOPD determines orphan drug marketing exclusivity after receiving notification of the drug's marketing approval from CBER and CDER. Because orphan drugs are often developed to treat patients with unmet medical needs, they may be eligible for one or more of FDA's expedited programs. FDA's four expedited programs—accelerated approval, breakthrough therapy designation, fast track designation, and priority review—are intended to facilitate and expedite the development and review of new drugs to address unmet medical needs in the treatment of a serious disease. Depending on the type of expedited program, manufacturers of new drugs may receive a variety of benefits, such as additional opportunities to meet with and obtain advice from FDA officials during drug development or a shorter FDA review time goal for the marketing application.

FDA Implemented Its Modernization Plan to Address Growing Demand for Orphan Designations, and Has Recently Met Timeliness Goals

In June 2017, FDA issued its Orphan Drug Modernization Plan and has implemented a number of steps under the plan to address the demand for orphan designations. According to OOPD data, the number of new designation applications received grew from 185 in 2008 to 527 in 2017 (an increase of 185 percent), while the number of designations granted also grew during the same period. (See fig. 3.) Prior to implementing the modernization plan, OOPD had amassed a backlog of 138 applications that were pending review for more than 120 days. The modernization plan therefore established two goals: (1) eliminating the backlog of designation applications within 90 days (by September 25, 2017), and (2) ensuring that new designation applications are reviewed within 90 days of receipt. To accomplish its first goal, the modernization plan outlined seven actions FDA planned to take to temporarily increase OOPD resources for reviewing designation applications.
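The growth figure cited above is straightforward to check. The short sketch below is illustrative only, an editorial verification using the application counts reported in this section rather than anything drawn from FDA's systems:

```python
# Application counts reported in this section (OOPD data).
applications_2008 = 185
applications_2017 = 527

# Percentage increase from 2008 to 2017.
percent_increase = (applications_2017 - applications_2008) / applications_2008 * 100
print(f"{percent_increase:.0f} percent increase")  # prints "185 percent increase"
```

The exact value is about 184.9 percent, which rounds to the 185 percent reported.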
For example, the agency established an experienced team of senior OOPD reviewers to focus solely on the backlog of designation applications. In addition, OOPD initially enlisted temporary assistance from CBER and CDER reviewers who expressed interest in helping clear the backlog. FDA officials told us OOPD also subsequently received reviewer assistance from the Office of Medical Products and Tobacco. OOPD trained these additional reviewers on the orphan designation review process and criteria for granting orphan status. As a result of these efforts, FDA cleared the application backlog by August 28, 2017, nearly a month ahead of its goal. (See table 1 for the seven actions FDA took as part of its modernization plan to clear the designation application backlog.) To accomplish FDA’s second goal of reviewing new designation applications within 90 days of receipt, the modernization plan outlined eight steps the agency planned to take to improve the efficiency of its application review process. For example, OOPD implemented a standard review template in October 2017 that it had developed under the modernization plan’s first goal to address the backlog of applications. This template outlines information that reviewers are supposed to record, as applicable, from each application and evaluate when making a designation decision—namely, the (1) background information, (2) clinical superiority analysis, (3) orphan subset analysis, (4) population estimate, and (5) scientific rationale that the drug may effectively treat the disease. (See app. I for more information about what is recorded in OOPD’s review template.) The review template also includes the designation recommendation, as well as the secondary reviewer’s concurrence with the designation determination. FDA officials reported that before implementing this review template, OOPD reviewers documented less-structured narrative information about each application on a prior form. 
In addition, OOPD introduced online training for manufacturers on the information to include in a designation application and the common issues OOPD has encountered when reviewing an application. According to officials, this training is intended to enhance the consistency and quality of designation applications, which may ultimately reduce OOPD requests for additional information from manufacturers. (See table 2 for the eight steps the agency took to improve the timeliness of its designation application review process.) In July 2017, OOPD began using the new internal tracking report to monitor adherence to its 90-day timeliness goal. As of March 2018, FDA officials reported that OOPD management has received these tracking reports on a daily basis, which identify the number of days that have elapsed for each application pending review, among other things. According to these tracking reports, OOPD has generally met its 90-day timeliness goal for reviewing designation applications since mid-September 2017 and has completed most application reviews within 60 days of receipt. For example, as of July 20, 2018, OOPD had 35 applications pending review for 0 to 30 days; 31 applications pending review for 31 to 60 days; 9 applications pending review for 61 to 90 days; and no applications pending review for more than 90 days.

FDA Uses Consistent Criteria to Grant Orphan Designation, but Reviews Do Not Include Complete Information

FDA Generally Applies Consistent Criteria in Reviewing Applications for Orphan Designation, but Did Not Ensure that All Required Information was Appropriately Recorded or Used

OOPD applies two consistent criteria (i.e., two particular criteria that all designation applications must meet) when determining whether to grant a drug orphan status: (1) the disease that the drug is intended to treat affects fewer than 200,000 people in the United States, and (2) there is adequate scientific rationale that the drug may effectively treat the disease.
For circumstances involving orphan subsets of a non-rare disease or clinical superiority, additional criteria are required for orphan designation. According to OOPD data, of the 3,690 orphan designation applications received from 2008 to 2017, OOPD determined that the majority of them met these criteria and granted them orphan status. Specifically, approximately 71 percent of applications were granted orphan designation as of April 2018. The remaining designation applications were placed in a pending status awaiting the manufacturer's response to OOPD concerns (21 percent), denied orphan designation (5 percent), or withdrawn (2 percent). (See table 3.) In addition, our analysis of 148 OOPD review templates completed for new designation applications received from October to December 2017 provided further detail on OOPD's designation determinations since implementing its Orphan Drug Modernization Plan. We found that for this time period, 87 designation applications (59 percent) were granted orphan status, 57 designation applications (39 percent) were placed in pending status awaiting further information from the manufacturer, and 4 designation applications (3 percent) were denied orphan status. The most common reason OOPD did not grant orphan designation was concern with the adequacy of the manufacturer's scientific rationale, which occurred in 43 of the 61 pending or denied review templates. OOPD reviewers noted various concerns with the scientific rationale provided in these designation applications, including that the manufacturer did not provide sufficient or adequate data to support its scientific rationale, or that the manufacturer did not provide data from the strongest available model for testing the drug.
Of the five review template sections where reviewers are required to record information, we found that OOPD does not ensure that all required information is consistently recorded in the background information section and evaluated when making designation decisions. OOPD instructs reviewers to document background information, including elements of the regulatory history of the drug (e.g., U.S. and foreign marketing history), and previous orphan designations for both the drug and the disease. Our analysis found that 102 of 148 OOPD review templates were missing one or more elements of the regulatory history of a drug. (See table 4.) In addition, we found that 19 of 148 review templates did not capture all prior orphan designations for the drug and disease. In one case, the OOPD reviewer did not record any prior orphan designation for the disease in the review template and placed the designation application in pending status due to concerns with the manufacturer’s population estimate. However, the disease that was the subject of the application had 36 related orphan designations at the time of the review, 7 of which had been granted in 2017. According to FDA officials, although the background information required in the review template may not directly affect a designation decision, it provides important context that is critical to ensuring a complete review of a designation application. For example, FDA officials told us that in cases where the designation application is for a disease with little published information available, it may help to know the drug’s U.S. marketing history to identify whether CBER or CDER has experience with the disease. Additionally, the prior orphan designation history can help the OOPD reviewer identify previously accepted methodologies to estimate the population for a disease. 
Despite requiring its reviewers to record background information for each designation application, OOPD’s guidance does not provide instructions on how to use this information when evaluating the applications. Internal control standards for the federal government specify that agencies should record relevant, reliable, and timely information, and process that information into quality data that enables staff to carry out their responsibilities. Without instructions on how to use the background information required in its review templates, OOPD reviewers may not consistently use all of the information needed to conduct a complete evaluation of a designation application. Additionally, OOPD instructs its reviewers to consider evidence found in independent sources to verify the population estimate provided in a designation application. However, in 23 of 148 OOPD review templates, reviewers did not include the results of any such independent verification in their evaluation of the manufacturer’s population estimate. Internal control standards state that agencies should conduct checks of their recorded data to ensure its accuracy and completeness, but we found that OOPD does not fully conduct such data checks. Without ensuring that its reviewers conduct and record the results of independent verification of population estimates, OOPD cannot be assured that quality information is consistently informing its designation determinations. For the 148 templates we reviewed, we found that OOPD granted orphan designation to 26 applications missing required information. Specifically, we found that OOPD granted designation to 11 applications where the reviewer did not record prior orphan designation history, to 13 applications where the reviewer did not document independent verification of the manufacturer’s population estimate, and to 2 applications where the reviewer did neither. 
In cases where the background information was incomplete or there was no documentation of independent verification of the manufacturer’s population estimate, there also was no evidence that the secondary reviewer verified the completeness of these sections of the review templates.

Most Orphan Designation Applications Had a Population Estimate of Fewer than 100,000 and Over Half of the Applications Target One of Four Therapeutic Areas

Approximately 71 percent of orphan designation applications received by FDA from 2008 to 2017 were for drugs intended to treat diseases affecting 100,000 or fewer people. In addition, half of the applications received during this time frame were for drugs intended to treat populations of 50,000 or fewer people. (See fig. 4.) For applications that OOPD granted orphan designation, the population estimates for the diseases they were intended to treat ranged from 0 to 199,966 people. Of 3,491 orphan designation applications OOPD received from 2008 to 2017, over half were for the therapeutic areas of oncology (30 percent), neurology (13 percent), hematology (7 percent), and gastroenterology and liver (6 percent). Thirty-seven other therapeutic areas accounted for the remaining 44 percent of applications, with each therapeutic area accounting for 5 percent or fewer of designation applications received during this time frame. Some of these other therapeutic areas included pulmonary, immunology, cardiology, and dermatology. (See fig. 5.) Additionally, our analysis of 148 OOPD review templates from October to December 2017 found that 29 applications (20 percent) requested orphan status based on an orphan subset claim, 7 of which were granted orphan designation; and 7 applications (5 percent) requested orphan status based on a clinical superiority claim, 2 of which were granted orphan designation.
FDA’s Orphan Drug Marketing Approvals Increased from 2008 to 2017, Were Focused in Two Therapeutic Areas, and Typically Required about 9 Months for Agency Review

FDA approved 351 orphan drugs for marketing from 2008 to 2017. Orphan drug marketing approvals have increased over this period, from 17 in 2008 to 77 in 2017, and have accounted for an increasing proportion of all FDA marketing approvals. Orphan drug marketing approvals also vary by certain characteristics, but were typically in one of two therapeutic areas and required about 9 months for FDA review, among other commonalities.

Therapeutic area. From 2008 to 2017, 53.3 percent of orphan drug marketing approvals were in one of two therapeutic areas that were also common for granted designations: oncology (42.5 percent) and hematology (10.8 percent). There were 27 different therapeutic areas overall, with 7 of those areas having 10 or more approved orphan drugs. (See app. II for FDA’s orphan drug marketing approvals from 2008 to 2017 by therapeutic area.)

Number of indications. Of the 351 orphan drug marketing approvals from 2008 to 2017, there were 252 unique drugs, because drugs can be approved for more than one orphan indication. For example, the oncology drug Velcade received FDA approval in 2008 as a first-line therapy for multiple myeloma, and received approval for a second indication in 2014 for treatment of mantle cell lymphoma if the patient has not received at least one prior therapy. (See app. II.) The majority of drugs had one orphan indication (77.4 percent) or two orphan indications (15.9 percent). However, several drugs (6.7 percent) were approved to treat three or more orphan indications. Two oncology drugs had the most approved orphan indications: Imbruvica (10 orphan indications) and Avastin (9 orphan indications).

New drug or new indication for previously approved drug.
The majority (61.5 percent) of orphan drug marketing approvals from 2008 to 2017 have been for a new drug not previously approved for any use, while the remainder (38.5 percent) have been for a new indication for a drug previously approved to treat a rare or non-rare disease. (See fig. 6.) Of the new orphan drugs that received marketing approval, the majority have been for novel uses—new molecular entities or new therapeutic biologics that are often innovative and serve previously unmet medical needs, or otherwise significantly help to advance patient care and public health.

FDA review time. For orphan drug marketing approvals from 2008 to 2017, the median time from FDA receiving a marketing application to approval was about 9 months, and ranged from 75 days to about 17 years. FDA averaged about 1.2 review cycles for these drugs, with the number of cycles ranging from one to four reviews. Two neurology drugs each had the largest number of reviews (four).

Expedited programs. Approximately 71 percent of orphan drug marketing approvals from 2008 to 2017 benefitted from at least one type of FDA’s four primary expedited programs (accelerated approval, breakthrough therapy designation, fast track designation, or priority review). Most orphan drug approvals in each year received priority review, while less than half received accelerated approval, breakthrough therapy designation, or fast track designation in the year the drug was approved. (See fig. 7.) Very few (six) orphan drug approvals were granted all four of these expedited programs in the year approved.

FDA Issued Guidance and Offered Training to Address Ongoing Rare Disease Drug Development Challenges

FDA Developed Guidance and Training to Better Inform Its Reviewers and the Public about Rare Disease Drug Development Challenges

To address rare disease drug development challenges, FDA has established guidance for internal and public use, and offered training to its reviewers.
FDA’s guidance and training on rare diseases include topics related to more general drug development issues, as well as the agency’s marketing approval process as it applies to orphan drugs. In general, FDA’s review centers—CBER and CDER—are responsible for establishing guidance on general rare disease drug development issues. For example, FDA published draft guidance for industry in August 2015 on common issues in rare disease drug development. The guidance discusses important aspects of drug development, such as the need for an adequate understanding of the natural history of the disease and the drug’s proposed mechanism of action, and the standard of evidence to establish safety and effectiveness. CBER published additional draft guidance in July 2018 on rare disease drug development specific to gene therapy in order to help manufacturers consider issues such as limited study population size, safety issues, and outcomes. FDA has also conducted studies to understand rare disease drug development challenges. In March 2011, FDA issued a report to Congress on the strengths and weaknesses of its regulatory process with respect to rare and neglected tropical diseases. In that report, a group of expert FDA officials found that its regulations allowed experienced reviewers to use flexibility and scientific judgment in determining the safety and efficacy of rare disease drugs. However, the group also noted areas for improvement, such as the need to develop training for FDA reviewers and to increase communication efforts with stakeholders, including industry and advocacy organizations. One other key area the group identified was the need to analyze the agency’s orphan drug marketing approvals to further understand the factors helping or hindering drug development.
To do so, FDA analyzed a subset of orphan drug approvals and published two studies:

FDA’s February 2012 publication on rare disease drug approvals between 2006 and 2011 found that substantial proportions of marketing approvals were for innovative drugs, and most clinical studies were highly unique in terms of the study design, controls, and outcome measures used. FDA concluded that developing defined policy and consistency around such diverse drugs and unique clinical studies would be difficult.

FDA’s May 2012 publication on marketing applications between 2006 and 2010 concluded that, due to the high approval rates for applications targeting rare diseases in its study, increased efforts in the agency’s review process would be unlikely to substantially increase the number of new rare disease drugs.

FDA’s patient engagement programs have also focused on rare disease drug development. As of February 2016, the agency reported that nearly half of patient-focused drug development meetings—meetings to obtain the patient perspective on specific diseases and their treatments—have been focused on rare diseases. In addition, four of six patient advocacy groups we interviewed said that they used this type of meeting or another structured meeting to provide FDA input on their rare disease. One patient advocacy group told us that its meeting with FDA helped lead to the issuance of guidance on drug development for Duchenne muscular dystrophy. As part of its efforts to better inform reviewers about the agency’s regulatory framework and drug development challenges with respect to rare diseases, FDA has developed a training course and holds an annual all-day meeting for reviewers. (See table 5.) In its rare disease training course, FDA describes its authority to be flexible in reviewing marketing applications for rare disease drugs.
Multiple studies found that FDA has regularly used this flexibility in approving rare disease therapies; for example, by allowing marketing approval based on one adequate and well-controlled study, rather than requiring two.

Stakeholders and Research Identified Ongoing Rare Disease Drug Development Challenges, while Opinions on the Orphan Drug Act Incentives Varied

Stakeholders we interviewed, including industry experts and patient advocacy groups, and research we reviewed identified general rare disease drug development challenges, as well as more specific concerns pertaining to the ODA incentives and pricing. However, opinions on some of the concerns attributed to the ODA incentives varied among stakeholders.

Barriers to rare disease drug development. The two barriers to rare disease drug development most commonly cited among stakeholders we interviewed were (1) the need for more basic scientific research (e.g., understanding patient experiences and progression of symptoms, known as a disease’s natural history), and (2) the difficulty in recruiting small populations for clinical trials. One drug manufacturer explained that, when a disease affects a small population, it is hard to identify and recruit participants, because they may be geographically dispersed or have to travel long distances to participate in the trial. Identifying these participants and enrolling them into a clinical trial is therefore both labor- and resource-intensive. A number of studies conducted by FDA and others identified similar challenges, as well as other rare disease drug development issues. For example, a 2010 study by the National Academies of Sciences, Engineering, and Medicine noted that researchers still lack a basic understanding of the mechanisms that underlie many rare diseases. Another drug development challenge identified in the study is attracting trained investigators to study rare diseases.
To address some of these challenges, OOPD has a number of grant programs focused on rare disease drug development, including one that funds studies that track the natural history of a disease over time to identify demographic, genetic, environmental, and other variables that may lead to drug development. In addition, FDA’s fiscal year 2019 budget justification includes a request for funds to develop clinical trial networks to create an understanding of the natural history and clinical outcomes of rare diseases.

Significance of ODA incentives in fostering drug development. Although many stakeholders we spoke with categorized the ODA’s incentives as significant to rare disease drug development, two stakeholder groups we spoke with—industry experts and drug manufacturers—largely categorized the incentives as less important than did other stakeholders. For example, two of four drug manufacturers we interviewed told us that their company’s drug development decisions are based on the disease areas it wants to target and not due to ODA incentives. In addition, several stakeholders noted non-ODA drivers of orphan drug growth, including the ability to command high prices and advances in scientific discovery for some rare diseases. Several studies also noted limitations of the ODA incentives, including the structure of the orphan drug tax credit, the decreasing impact of the marketing exclusivity incentive in protecting orphan drugs from competition, and the ability of the incentives to target “truly” rare conditions that would not otherwise have obtained sufficient investment. For example, the Congressional Research Service reported in December 2016 that the benefits of the orphan drug tax credit are limited to companies with positive tax liabilities.
As a result, the Congressional Research Service concluded that the typical small startup company investing in the development of an orphan drug may be unable to take advantage of the tax credit during its first few years of operation when its expenses exceed its revenue and cash flow may be a problem.

Certain circumstances under which drug manufacturers may obtain ODA incentives. Several stakeholders we spoke with were critical of how drug manufacturers may obtain ODA incentives, such as for drugs that were already approved to treat another disease or for multiple orphan designations for the same drug. For example, one industry expert argued that granting multiple orphan designations for the same drug subverts the purpose of the ODA to support development of drugs that may not otherwise be profitable, as a drug manufacturer can make a return on investment from the drug from multiple patient groups rather than just one. In contrast, many patient advocacy groups we spoke with noted that drug manufacturers’ ability to obtain ODA incentives under certain circumstances, such as multiple orphan designations for the same drug, is needed for further investment in drug development. In particular, they noted that this provides an incentive for manufacturers to demonstrate their drugs are safe and effective for individuals who have a rare disease (particularly for FDA-approved drugs with an unapproved use—known as off-label use) and account for any differences within rare diseases. A number of studies raised similar concerns about these and other issues, including off-label use of orphan drugs. Specifically, one study noted that, due to increasing investment in precision medicine, manufacturers may develop drugs treating a particular genetic subset of a non-rare disease. These subsets may qualify for ODA incentives, even though they may not face the same development challenges as “true” rare diseases.
For example, three orphan drugs were approved as treatments for a subset of non-small cell lung cancers that have a specific gene mutation. According to the study, these drugs can also be used off-label for diseases other than the non-small cell lung cancer subset for which they were originally approved. FDA has taken steps in recent years to address certain circumstances under which drug manufacturers may obtain orphan designation. For example, the agency recently issued guidance stating that it no longer plans to grant orphan designation to pediatric subsets of non-rare diseases. The agency attributed its decision, in part, to a loophole that could result in a drug receiving an orphan designation for a pediatric subset being exempt from requirements under the Pediatric Research Equity Act to study drug safety and effectiveness in pediatric subpopulations. FDA also held a workshop in May 2018 to seek input on appropriate orphan designation for certain oncology treatments to stay current with evolving knowledge.

Orphan drug pricing. Stakeholders we interviewed and research we identified also raised concerns about the high prices drug manufacturers can charge for orphan drugs when receiving ODA incentives. Several stakeholders we spoke with noted that it was difficult to discuss the ODA without addressing concerns with how orphan drugs are priced. For example, one patient advocacy group told us that it may be appropriate for a drug to receive multiple orphan designations, but that the drug manufacturer should revise the price of its drug to reflect the number of orphan designations. Several studies have also pointed to high orphan drug prices as a public health challenge in terms of access and affordability, particularly when orphan drug development may be less costly than non-orphan drugs due to smaller and fewer efficacy and safety trials, shorter FDA review time, higher marketing approval success rates, and lower marketing costs.
One study found an inverse relationship between the price of orphan drugs and their volume of use (i.e., the more expensive the orphan drug, the fewer patients who use the drug), and noted that over the past 20 years spending on medicine in the U.S. market has shifted increasingly toward drugs that treat relatively few people, such as those with rare diseases.

Conclusions

With significant unmet need for most rare diseases, the ODA provides manufacturers with a variety of incentives if they develop drugs that meet orphan designation criteria. To ensure that drug manufacturers’ claims in their orphan designation applications are accurate, FDA must conduct thorough and consistent evaluations. FDA took several steps beginning in June 2017 to improve the consistency and efficiency of these evaluations, including introducing a standard review template and guidance for completing it. However, we found that FDA does not always ensure that all information is consistently recorded in its review templates and evaluated when making designation determinations, which are critical steps needed to understand the full context of a drug’s intended use in the rare disease. FDA has a number of options it could take to ensure that reviewers obtain all necessary information and use it to inform orphan designation determinations. For example, we found that FDA’s guidance was not always clear in instructing reviewers how they should use the information they record. Clarifying these requirements in guidance could help reviewers make use of this information, including the secondary reviewers who ensure the consistency and quality of designation reviews. While FDA action to improve its designation reviews will not address the broader rare disease drug development challenges identified by stakeholders we interviewed and research we analyzed, it could help FDA ensure the consistency of its review process, particularly as demand for orphan designations continues to grow.
Recommendation for Executive Action

We are making the following recommendation to FDA: The Commissioner of FDA should ensure that information from orphan drug designation applications is consistently recorded in OOPD review templates and evaluated by OOPD reviewers when making an orphan designation decision. (Recommendation 1)

Agency Comments

We provided a draft of this report to the Department of Health and Human Services (HHS) for comment. In its written comments, reproduced in appendix III, the agency concurred with our recommendation. HHS also provided technical comments, which we incorporated as appropriate. In its response, HHS stated that it would consider our recommendation as part of FDA’s ongoing efforts to evaluate and revise the designation review template, and to train reviewers. Regarding the background information in the review template, HHS also noted that many drugs requesting orphan designation do not have relevant regulatory history, particularly adverse actions, as these drugs are early in drug development at the time of requesting orphan designation. However, HHS agreed with the importance of consistently documenting and utilizing background information, and stated that FDA will continue to apply consistent criteria to its review decisions. We are sending copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Appendix I: Information Recorded in OOPD’s Standard Designation Review Template

In October 2017, the Food and Drug Administration’s Office of Orphan Products Development (OOPD) introduced a standard review template, along with guidance for how to complete it, to aid its reviewers in evaluating orphan designation applications. OOPD guidance instructs its reviewers to record information about the drug and disease on the standard review template, as well as the results of independent verification done for certain information included in the application. The template is then used with the designation application to determine whether to grant orphan designation to a drug. (See table 6 for the information recorded in OOPD review templates.)

Appendix II: Orphan Drug Marketing Approvals from 2008 to 2017

The Food and Drug Administration (FDA) approved 351 orphan drugs for marketing from 2008 to 2017 in 27 different therapeutic areas. Forty-two percent (149) of orphan drug marketing approvals were in oncology, with six other therapeutic areas having 10 or more approved orphan drugs. (See table 7 for information on orphan drug marketing approvals from 2008 to 2017 by therapeutic area.) Additionally, the 351 orphan drug marketing approvals were for 252 unique drugs, because drugs can be approved for more than one orphan indication. The majority of drugs had one orphan indication (77.4 percent) or two orphan indications (15.9 percent). However, several drugs (6.7 percent) were approved to treat three or more orphan indications.

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Marcia Crosse (Director), Robert Copeland (Assistant Director), E. Jane Whipple (Analyst-in-Charge), and Brienne Tierney made key contributions to this report. Also contributing were Kaitlin Farquharson, Alison Granger, Drew Long, and Vikki Porter.
Why GAO Did This Study

The ODA provides incentives, including tax credits and exclusive marketing rights, for manufacturers to develop drugs to treat rare diseases, which are typically defined as affecting fewer than 200,000 people in the United States. Approximately 7,000 rare diseases affect an estimated 30 million people in the United States, and only 5 percent of rare diseases have FDA-approved treatments. GAO was asked to examine FDA's orphan drug processes. In this report, GAO examines, among other things, (1) the actions FDA has taken to address the growing demand for orphan designations; (2) the extent to which FDA has used consistent criteria and complete information in reviewing orphan designation applications; and (3) the steps FDA has taken to address rare disease drug development challenges. GAO analyzed FDA documents and data, as well as all designation review templates FDA completed as of March 2018 for applications received from October to December 2017. GAO interviewed agency officials, as well as stakeholders, including drug manufacturers, industry experts, and patient advocacy groups.

What GAO Found

The Food and Drug Administration's (FDA) Office of Orphan Products Development is responsible for reviewing drug manufacturer applications for orphan designation. Drugs granted this designation treat rare diseases and may receive various incentives under the Orphan Drug Act (ODA). As the number of orphan designation applications received and granted has grown, FDA outlined several process changes in its June 2017 modernization plan to improve designation review timeliness and consistency. In evaluating designation applications, FDA reviewers generally apply two consistent criteria—(1) the size of the rare disease population, and (2) the scientific rationale that the drug may effectively treat the disease. To inform their evaluation, reviewers must record certain background information in a standard review template, such as the drug's U.S.
marketing history. Officials told us this information provides important context, such as whether FDA has experience with a little known disease, critical to ensuring a complete designation application review. However, GAO's analysis of 148 designation review templates found that FDA does not consistently record or evaluate background information when making designation decisions. For example, 48 of 148 review templates GAO analyzed were missing information on the drug's U.S. marketing history. As such, FDA cannot be sure that reviewers are conducting complete evaluations that include all critical information needed for assessing its criteria. Stakeholders GAO interviewed and research GAO reviewed identified a number of rare disease drug development challenges, such as the difficulty in recruiting small populations for clinical trials, with differing opinions about the ODA incentives. For example, several stakeholders were critical of manufacturers obtaining multiple orphan designations—and ODA incentives—for the same drug when the drug may otherwise be profitable from treating multiple patient groups. However, many patient advocacy groups noted that granting ODA incentives in these circumstances is needed to encourage drug manufacturers to study the safety and efficacy of drugs in rare disease populations.

What GAO Recommends

FDA should ensure that all required information for reviews of orphan designation applications is consistently recorded and evaluated. The agency concurred with our recommendation.
CBP Has Made Progress Deploying Surveillance Technology along the Southwest Border, but Has Not Fully Assessed Effectiveness

On multiple occasions since 2011, we have reported on the progress the Border Patrol has made deploying technologies along the southwest border. Figure 1 shows the land-based surveillance technology systems used by the Border Patrol. In November 2017, we reported on the progress the Border Patrol made deploying technology along the southwest border in accordance with its 2011 Arizona Technology Plan and 2014 Southwest Border Technology Plan. For example, we reported that, according to officials, the Border Patrol had completed deployments of all planned Remote Video Surveillance Systems (RVSS), Mobile Surveillance Capability systems, and Unattended Ground Sensors, as well as 15 of 53 Integrated Fixed Tower systems to Arizona. The Border Patrol had also completed deployments of select technologies to Texas and California, including deploying 32 Mobile Surveillance Capability systems. In addition, the Border Patrol had efforts underway to deploy other technology programs, but at the time of our report, some of those programs had not yet begun deployment or were not yet under contract. For example, we reported that, according to the Border Patrol officials responsible for the RVSS program, the Border Patrol had begun planning the designs of the command and control centers and towers for the Rio Grande Valley sector in Texas. Further, we reported that the Border Patrol had not yet initiated deployments of RVSS to Texas because, according to Border Patrol officials, the program had only recently completed contract negotiations for procuring those systems.
Additionally, the Border Patrol initially planned to award the contract to procure and deploy Mobile Video Surveillance System units to Texas in 2014, but did not award the contract until 2015 because of bid and size protests, and the vendor that was awarded the contract did not begin work until March 2016. Our November 2017 report includes more detailed information about the deployment status of surveillance technology along the southwest border as of October 2017. We also reported in November 2017 that the Border Patrol had made progress identifying performance metrics for the technologies deployed along the southwest border, but additional actions are needed to fully implement our prior recommendations in this area. For example, in November 2011, we found that CBP did not have the information needed to fully support and implement the Arizona Technology Plan and recommended that CBP (1) determine the mission benefits to be derived from implementation of the Arizona Technology Plan and (2) develop and apply key attributes for metrics to assess program implementation. CBP concurred with our recommendations and has implemented one of them. Specifically, in March 2014, we reported that CBP had identified mission benefits of its surveillance technologies to be deployed along the southwest border, such as improved situational awareness and agent safety. However, the agency had not developed key attributes for performance metrics for all surveillance technologies to be deployed. Further, we reported in March 2014 that CBP did not capture complete data on the contributions of these technologies. When used in combination with other relevant performance metrics or indicators, these data could be used to better determine the impact of CBP's surveillance technologies on CBP's border security efforts and inform resource allocation decisions.
Therefore, we recommended that CBP (1) require data on technology contributions to apprehensions or seizures to be tracked and recorded within its database and (2) subsequently analyze available data on apprehensions and technological assists—in combination with other relevant performance metrics or indicators, as appropriate—to determine the contribution of surveillance technologies. CBP concurred with our recommendations and has implemented one of them. Specifically, in June 2014, the Border Patrol issued guidance informing agents that the asset assist data field—which records assisting technology or other assets (such as canine teams)—in its database had become a mandatory data field. While the Border Patrol has taken action to collect data on technology, it has not taken additional steps to determine the contribution of surveillance technologies to CBP's border security efforts. In April 2017, we reported that the Border Patrol had provided us a case study that assessed technology assist data, along with other measures, to determine the contributions of surveillance technologies to its mission. We reported that this was a helpful step in developing and applying performance metrics; however, the case study was limited to one border location and the analysis was limited to select technologies. In November 2017, we reported that Border Patrol officials demonstrated the agency's new Tracking, Sign Cutting, and Modeling (TSM) system, which they said is intended to establish a connection between agents' actions (such as identification of a subject through the use of a camera) and results (such as an apprehension) and to allow for more comprehensive analysis of the contributions of surveillance technologies to the Border Patrol's mission. One official said that data from the TSM will have the potential to provide decision makers with performance indicators, such as changes in apprehensions or traffic before and after technology deployments.
However, at the time of our review, TSM was still early in its use and officials confirmed that it was not yet used to support such analytic efforts. We continue to believe that it is important for the Border Patrol to assess technologies’ contributions to border security and will continue to monitor the progress of the TSM and other Border Patrol efforts to meet our 2011 and 2014 recommendations.

CBP Is Planning to Construct New Physical Barriers, but Has Not Yet Assessed the Impact of Existing Fencing

Fencing Is Intended to Assist Agents in Performing Their Duties, but Its Contributions to Border Security Operations Have Not Been Assessed

We have reported on the significant investments CBP has made in tactical infrastructure along the southwest border. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (IIRIRA), as amended, provides that the Secretary of Homeland Security shall take actions, as necessary, to install physical barriers and roads in the vicinity of the border to deter illegal crossings in areas of high illegal entry. The Secure Fence Act of 2006, in amending IIRIRA, required DHS to construct at least two layers of reinforced fencing as well as physical barriers, roads, lighting, cameras, and sensors on certain segments of the southwest border. From fiscal years 2005 through 2015, CBP increased the total miles of primary border fencing on the southwest border from 119 miles to 654 miles—including 354 miles of primary pedestrian fencing and 300 miles of primary vehicle fencing. In addition, CBP has deployed additional layers of pedestrian fencing behind the primary border fencing, including 37 miles of secondary fencing. From fiscal years 2007 through 2015, CBP spent approximately $2.4 billion on tactical infrastructure on the southwest border—and about 95 percent, or around $2.3 billion, was spent on constructing pedestrian and vehicle fencing.
CBP officials reported it will need to spend additional amounts to sustain these investments over their lifetimes. In 2009, CBP estimated that maintaining fencing would cost more than $1 billion over 20 years. CBP used various fencing designs to construct the 654 miles of primary pedestrian and vehicle border fencing. Figure 2 shows examples of existing pedestrian fencing deployed along the border. In February 2017, we reported that border fencing had benefited border security operations in various ways, according to the Border Patrol. For example, according to officials, border fencing improved agent safety, helped reduce vehicle incursions, and supported Border Patrol agents’ ability to respond to illicit cross-border activities by slowing the progress of illegal entrants. However, we also found that, despite its investments over the years, CBP could not measure the contribution of fencing to border security operations along the southwest border because it had not developed metrics for this assessment. We reported that CBP collected data that could help provide insight into how border fencing contributes to border security operations. For example, we found that CBP collected data on the location of illegal entries that could provide insight into where these illegal activities occurred in relation to the location of various designs of pedestrian and vehicle fencing. We reported that CBP could potentially use these data to compare the occurrence and location of illegal entries before and after fence construction, as well as to help determine the extent to which border fencing contributes to diverting illegal entrants into more rural and remote environments, and border fencing’s impact, if any, on apprehension rates over time. 
Therefore, we recommended in February 2017 that the Border Patrol develop metrics to assess the contributions of pedestrian and vehicle fencing to border security along the southwest border using the data the Border Patrol already collects and apply this information, as appropriate, when making investment and resource allocation decisions. The agency concurred with our recommendation. As of December 2017, officials reported that CBP plans to establish initial metrics by March 2018 and finalize them in January 2019.

CBP Faces Challenges in Sustaining Tactical Infrastructure and Has Not Provided Guidance on Its Process for Identifying and Deploying Tactical Infrastructure

In February 2017, we also reported that CBP was taking a number of steps to sustain tactical infrastructure along the southwest border; however, it continued to face certain challenges in maintaining this infrastructure. For example, CBP had funding allocated for tactical infrastructure sustainment requirements, but had not prioritized its requirements to make the best use of available funding, since CBP also required contractors to address urgent repair requirements. According to Border Patrol officials, CBP classifies breaches to fencing, grates, or gates as urgent and requiring immediate repair because breaches increase illegal entrants’ ability to enter the country unimpeded. At the time of our February 2017 review, the majority of urgent tactical infrastructure repairs on the southwest border were fence breaches, according to Border Patrol officials. From fiscal years 2010 through 2015, CBP recorded a total of 9,287 breaches in pedestrian fencing, and repair costs averaged $784 per breach. While contractors provide routine maintenance and address urgent repairs on tactical infrastructure, certain tactical infrastructure assets used by the Border Patrol—such as border fencing—become degraded beyond repair and must be replaced.
For example, in February 2017 we reported that CBP had provided routine maintenance and repair services to the primary legacy pedestrian fencing in Sunland Park, New Mexico. However, significant weather events had eroded the foundation of the fencing, according to the Border Patrol officials in the El Paso sector, and in 2015 CBP began to replace 1.4 miles of primary pedestrian fence in this area. We also reported on several additional CBP projects to replace degraded, legacy pedestrian fencing with more modern, bollard style fencing. For example, in fiscal year 2016, CBP began removing and replacing an estimated 7.5 miles of legacy primary pedestrian fencing with modern bollard style fencing within the Tucson sector. In addition, from fiscal years 2011 through 2016, CBP completed four fence replacement projects that replaced 14.1 miles of primary pedestrian legacy fencing in the Tucson and Yuma sectors at a total cost of approximately $68.26 million and an average cost of $4.84 million per mile of replacement fencing. We plan to provide information on additional fence replacement projects in a forthcoming report. In 2014, the Border Patrol began implementing the Requirements Management Process that is designed to facilitate planning for funding and deploying tactical infrastructure and other requirements, according to Border Patrol officials. At the time of our February 2017 review, Border Patrol headquarters and sector officials told us that the Border Patrol lacked adequate guidance for identifying, funding, and deploying tactical infrastructure needs as part of this process. In addition, officials reported experiencing some confusion about their roles and responsibilities in this process. We reported that developing guidance on this process would provide more reasonable assurance that the process is consistently followed across the Border Patrol. 
We therefore recommended that the Border Patrol develop and implement written guidance to include roles and responsibilities for the steps within its requirements process for identifying, funding, and deploying tactical infrastructure assets for border security operations. The agency concurred with this recommendation and stated that it planned to update the Requirements Management Process and, as part of that update, planned to add communication and training methods and tools to better implement the process. As of December 2017, DHS plans to complete these efforts by September 2019.

CBP Has Tested Barrier Prototypes and Plans to Construct New Barriers in San Diego and Rio Grande Valley Sectors

In response to the January 2017 Executive Order, CBP established the Border Wall System Program to replace and add to existing barriers along the southwest border. In April 2017, DHS leadership authorized CBP to procure barrier prototypes, which are intended to help refine requirements and inform new or updated design standards for the border wall system. CBP subsequently awarded eight contracts with a total value of $5 million for the construction, development, and testing of the prototypes. From October to December 2017, CBP tested eight prototypes—four constructed from concrete and four from other materials—and evaluated them in five areas: breachability, scalability, constructability, design, and aesthetics. CBP officials said the prototype evaluation results are expected by March 2018. CBP has selected the San Diego and Rio Grande Valley sectors for the first two segments of the border wall system. In the San Diego sector, CBP plans to replace 14 miles of existing primary and secondary barriers. The primary barriers will be rebuilt to existing design standards, but the secondary barriers will be rebuilt to new design standards once established. In the Rio Grande Valley sector, CBP plans to extend an existing barrier by 60 miles using existing design standards.
CBP intends to prioritize construction of new or replacement physical barriers based on threat levels, land ownership, and geography, among other things. We have ongoing work reviewing the Border Wall System Program, and we plan to report on the results of that work later this year.

The Border Patrol Has Continued to Face Staffing Challenges

In November 2017 we reported that, in fiscal years 2011 through 2016, the Border Patrol had a statutorily established minimum staffing level of 21,370 full-time equivalent agent positions, but the Border Patrol has faced challenges in staffing to that level. Border Patrol headquarters, with input from the sectors, determines how many authorized agent positions are allocated to each of the sectors. According to Border Patrol officials, these decisions take into account the relative needs of the sectors, based on threats, intelligence, and the flow of illegal activity. Each sector’s leadership determines how many of the authorized agent positions will be allocated to each station within their sector. At the end of fiscal year 2017, the Border Patrol reported it had over 19,400 agents on board nationwide, and that over 16,600 of the agents were staffed to sectors along the southwest border. As mentioned earlier, the January 2017 executive order called for the hiring of 5,000 additional Border Patrol agents, subject to available appropriations, and as of November 2017 we reported that the Border Patrol planned to have 26,370 agents by the end of fiscal year 2021. The Acting Commissioner of CBP reported in a February 2017 memo to the Deputy Secretary for Homeland Security that from fiscal year 2013 to fiscal year 2016, the Border Patrol hired an average of 523 agents per year while experiencing a loss of an average of 904 agents per year. The memo cited challenges such as competing with other federal, state, and local law enforcement organizations for applicants.
In particular, the memo noted that CBP faces hiring and retention challenges compared to DHS’s U.S. Immigration and Customs Enforcement (which is also planning to hire additional law enforcement personnel) because CBP’s hiring process requires applicants to take a polygraph examination, Border Patrol agents are deployed to less desirable duty locations, and Border Patrol agents generally receive lower compensation. In November 2017, we reported that the availability of agents is one key factor that affects the Border Patrol’s deployment strategy. In particular, officials from all nine southwest border sectors cited current staffing levels and the availability of agents as a challenge for optimal deployment. We reported that, as of May 2017, the Border Patrol had 17,971 authorized agent positions in southwest border sectors, but only 16,522 of those positions were filled—a deficit of 1,449 agents—and eight of the nine southwest border sectors had fewer agents than the number of authorized positions. As a result of these staffing shortages, resources were constrained and station officials had to make decisions about how to prioritize activities for deployment given the number of agents available. We also reported in November 2017 that within sectors, some stations may be comparatively more understaffed than others because of recruitment and retention challenges, according to officials. Generally, sector officials said that the recruitment and retention challenges associated with particular stations were related to quality of life factors in the area near the station—for example, agents may not want to live with their families in an area without a hospital, with low-performing schools, or with relatively long commutes from their homes to their duty station. This can affect retention of existing agents, but it may also affect whether a new agent accepts a position in that location. 
For example, officials in one sector said that new agent assignments are not based solely on agency need, but rather also take into consideration agent preferences. These officials added that there is the potential that new agents may decline offers for stations that are perceived as undesirable, or they may resign their position earlier than they otherwise would to pursue employment in a more desirable location. We have ongoing work reviewing CBP’s efforts to recruit, hire, and retain its law enforcement officers, including Border Patrol agents. Chairwoman McSally, Ranking Member Vela, and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have.

GAO Contact and Staff Acknowledgments

For questions about this statement, please contact Rebecca Gambler at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are Jeanette Henriquez (Assistant Director), Leslie Sarapu (Analyst-in-Charge), Ashley Davis, Alana Finley, Tom Lombardi, Marycella Mierez, and Claire Peachey.

Related GAO Products

Southwest Border Security: Border Patrol Is Deploying Surveillance Technologies but Needs to Improve Data Quality and Assess Effectiveness. GAO-18-119. Washington, D.C.: November 30, 2017.

Border Patrol: Issues Related to Agent Deployment Strategy and Immigration Checkpoints. GAO-18-50. Washington, D.C.: November 8, 2017.

2017 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-17-491SP. Washington, D.C.: April 26, 2017.

Homeland Security Acquisitions: Earlier Requirements Definition and Clear Documentation of Key Decisions Could Facilitate Ongoing Progress. GAO-17-346SP. Washington, D.C.: April 6, 2017.
Southwest Border Security: Additional Actions Needed to Better Assess Fencing’s Contributions to Operations and Provide Guidance for Identifying Capability Gaps. GAO-17-331. Washington, D.C.: February 16, 2017.

Southwest Border Security: Additional Actions Needed to Better Assess Fencing’s Contributions to Operations and Provide Guidance for Identifying Capability Gaps. GAO-17-167SU. Washington, D.C.: December 22, 2016.

Border Security: DHS Surveillance Technology, Unmanned Aerial Systems and Other Assets. GAO-16-671T. Washington, D.C.: May 24, 2016.

2016 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-16-375SP. Washington, D.C.: April 13, 2016.

Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: March 31, 2016.

Southwest Border Security: Additional Actions Needed to Assess Resource Deployment and Progress. GAO-16-465T. Washington, D.C.: March 1, 2016.

Border Security: Progress and Challenges in DHS’s Efforts to Implement and Assess Infrastructure and Technology. GAO-15-595T. Washington, D.C.: May 13, 2015.

Homeland Security Acquisitions: Addressing Gaps in Oversight and Information is Key to Improving Program Outcomes. GAO-15-541T. Washington, D.C.: April 22, 2015.

Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: April 22, 2015.

2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-411T. Washington, D.C.: March 12, 2014.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness.
GAO-14-368. Washington, D.C.: March 3, 2014.

Border Security: Progress and Challenges in DHS Implementation and Assessment Efforts. GAO-13-653T. Washington, D.C.: June 27, 2013.

Border Security: DHS’s Progress and Challenges in Securing U.S. Borders. GAO-13-414T. Washington, D.C.: March 14, 2013.

Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs. GAO-13-25. Washington, D.C.: December 10, 2012.

U.S. Customs and Border Protection’s Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011.

Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.

Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011.

Border Security: DHS Progress and Challenges in Securing the U.S. Southwest and Northern Borders. GAO-11-508T. Washington, D.C.: March 30, 2011.

Border Security: Preliminary Observations on the Status of Key Southwest Border Technology Programs. GAO-11-448T. Washington, D.C.: March 15, 2011.

Secure Border Initiative: DHS Needs to Strengthen Management and Oversight of Its Prime Contractor. GAO-11-6. Washington, D.C.: October 18, 2010.

U.S. Customs and Border Protection’s Border Security Fencing, Infrastructure and Technology Fiscal Year 2010 Expenditure Plan. GAO-10-877R. Washington, D.C.: July 30, 2010.

Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.

Secure Border Initiative: DHS Needs to Reconsider Its Proposed Investment in Key Technology Program. GAO-10-340. Washington, D.C.: May 5, 2010.

Secure Border Initiative: DHS Has Faced Challenges Deploying Technology and Fencing Along the Southwest Border. GAO-10-651T. Washington, D.C.: May 4, 2010.
Secure Border Initiative: Testing and Problem Resolution Challenges Put Delivery of Technology Program at Risk. GAO-10-511T. Washington, D.C.: March 18, 2010.

Secure Border Initiative: DHS Needs to Address Testing and Performance Limitations That Place Key Technology Program at Risk. GAO-10-158. Washington, D.C.: January 29, 2010.

Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-1013T. Washington, D.C.: September 17, 2009.

Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: September 9, 2009.

Border Patrol: Checkpoints Contribute to Border Patrol’s Mission, but More Consistent Data Collection and Performance Measurement Could Improve Effectiveness. GAO-09-824. Washington, D.C.: August 31, 2009.

Customs and Border Protection’s Secure Border Initiative Fiscal Year 2009 Expenditure Plan. GAO-09-274R. Washington, D.C.: April 30, 2009.

Secure Border Initiative Fence Construction Costs. GAO-09-244R. Washington, D.C.: January 29, 2009.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

DHS has employed a variety of technology, tactical infrastructure, and personnel assets to help secure the nearly 2,000-mile-long southwest border. Since 2009, GAO has issued over 35 products on the progress and challenges DHS has faced in using technology, infrastructure, and other resources to secure the border. GAO has made over 50 recommendations to help improve DHS's efforts, and DHS has implemented more than half of them. This statement addresses (1) DHS efforts to deploy and measure the effectiveness of surveillance technologies, (2) DHS efforts to maintain and assess the effectiveness of existing tactical infrastructure and to deploy new physical barriers, and (3) staffing challenges the Border Patrol has faced. This statement is based on three GAO reports issued in 2017, selected updates conducted in 2017, and ongoing work related to DHS acquisitions and the construction of physical barriers. For ongoing work GAO analyzed DHS and CBP documents, interviewed officials within DHS, and visited border areas in California.

What GAO Found

The U.S. Border Patrol, within the Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP), has made progress deploying surveillance technology—a mix of radars, sensors, and cameras—along the southwest U.S. border. As of October 2017, the Border Patrol had completed the planned deployment of select technologies to several states along the southwest border. The Border Patrol has also made progress toward assessing performance of surveillance technologies, but additional actions are needed to fully implement GAO's 2011 and 2014 recommendations in this area. For example, the Border Patrol has not yet used available data to determine the contribution of surveillance technologies to border security efforts. CBP spent about $2.3 billion to deploy fencing from fiscal years 2007 through 2015 and constructed 654 miles of fencing by 2015.
The Border Patrol has reported that border fencing supports agents' ability to respond to illicit cross-border activities by slowing the progress of illegal entrants. GAO reported in February 2017 that CBP was taking a number of steps in sustaining tactical infrastructure—such as fencing, roads, and lighting—along the southwest border. However, CBP has not developed metrics that systematically use data it collects to assess the contributions of border fencing to its mission, as GAO has recommended. CBP concurred with the recommendation and plans to develop metrics by January 2019. Further, CBP established the Border Wall System Program in response to a January 2017 executive order that called for the immediate construction of a southwest border wall. This program is intended to replace and add to existing barriers along the southwest border. In April 2017, DHS leadership gave CBP approval to procure barrier prototypes, which are intended to help inform new design standards for the border wall system.

Figure: Physical Barriers in San Diego, California, April 2016

The Border Patrol has faced challenges in achieving a staffing level of 21,370 agents, the statutorily established minimum from fiscal years 2011 through 2016. As of September 2017, the Border Patrol reported it had about 19,400 agents. GAO reported in November 2017 that Border Patrol officials cited staffing shortages as a challenge for optimal deployment. As a result, officials had to make decisions about how to prioritize activities for deployment given the number of agents available.

What GAO Recommends

In recent reports, GAO made or reiterated recommendations for DHS to, among other things, assess the contributions of technology and fencing to border security. DHS generally agreed, and has actions planned or underway to address these recommendations.
Background

Duty-Free Stores in the United States

portion was then being smuggled back into the United States without payment of U.S. taxes. Congress legislated on duty-free stores through the Omnibus Foreign Trade and Competitiveness Act of 1988, which required duty-free stores located in airports to restrict the sale of duty-free merchandise to any one individual to "personal use quantities," which is defined as "quantities only suitable for uses other than resale." During consideration of the legislative language that was enacted in 1988, a senator introduced an amendment to permit duty-free stores located along the border to continue to sell goods in wholesale quantities. In introducing the amendment, the senator observed that a large part of the sales by the border stores along the U.S.-Mexico border were in wholesale quantities and that restricting the stores' ability to sell in such quantities would adversely affect the stores' business and the regional economy. Congress adopted the law with the amendment and applied the concept of "personal use quantities" only to airport duty-free stores. This requirement does not apply to land border stores.

Footnotes: 19 U.S.C. § 1555(b)(8)(D). Pub. L. No. 100-418, § 1908(b), 102 Stat. 1107, 1315 (codified as amended at 19 U.S.C. § 1555(b)). The act does not identify any quantity amount with respect to personal use but defines personal use quantities as quantities that are only suitable for uses other than resale and includes reasonable quantities for household or family consumption as well as for gifts to others. Discussion of this topic occurred as part of congressional consideration of Senate Bill 1420, the Omnibus Trade and Competitiveness Act of 1987. One senator highlighted the potential for economic harm to communities adjacent to the U.S.-Mexico border if the provision precluding duty-free sales in wholesale quantities were applied to land border stores. Congress passed the Omnibus Foreign Trade and Competitiveness Act of 1987, but the President vetoed the bill, and a vote to override the veto failed in the Senate. (See H.R. 3, 100th Cong. (1987). S. 1420 was incorporated into H.R. 3.) The provision limiting the concept of "personal-use quantities" to airports was in the Omnibus Foreign Trade and Competitiveness Act of 1988, which Congress passed and the President signed. Pub. L. No. 100-418, § 1908(b), 102 Stat. 1107, 1316.

Agencies with Roles Related to Duty-Free Cigarette Exports

At Commerce, Census is responsible for collecting, compiling, and publishing export trade statistics. AES is the primary instrument for collecting export trade data. Census takes steps to ensure compliance by AES filers, including duty-free store operators, through training and follow-up on unusual transactions, according to Census officials. At DHS, CBP and ICE are the components with roles related to duty-free cigarette exports. CBP is responsible for oversight of duty-free stores, including the requirements for establishment of stores and ensuring stores' compliance with various requirements for operations and lawful sales. Duty-free stores are regulated as a type of bonded warehouse. CBP port directors ensure that duty-free stores establish operating procedures. According to CBP officials, the agency is also responsible for enforcing the Foreign Trade Regulations, including consideration of enforcement action when an AES filer submits incorrect information regarding a shipment of merchandise being exported. Such enforcement action may include the issuance of penalties or the seizure of the merchandise intended for export. ICE enforces U.S. laws related to tobacco smuggling for cases in which it has investigative jurisdiction, including related offenses such as money laundering. According to agency officials, ICE also coordinates with CBP on enforcement efforts, such as seizures of merchandise due to violations of U.S. laws or customs regulations.
At DOJ, ATF investigates trafficking in cigarettes that have illegally entered U.S. commerce and enforces federal antitobacco smuggling laws under Title 18 of the U.S. Code, particularly the Prevent All Cigarette Trafficking Act and Contraband Cigarette Trafficking Act (CCTA). As we previously reported, by enforcing the CCTA, ATF seeks to reduce illegal cigarette trafficking, divest criminal and terrorist organizations of money derived from this activity, and significantly reduce tax revenue losses to the affected states. At Treasury, TTB is responsible for administering and enforcing the federal tax laws relating to tobacco products. Federal law requires that every person, prior to commencing business as a manufacturer or importer of tobacco products or establishing a TTB-regulated export warehouse for the storage of nontax-paid tobacco products pending export, obtain a permit from TTB. According to TTB officials, among the regulations that TTB enforces are those governing the export of tax-exempt tobacco products, under which only tobacco product manufacturers and export warehouse proprietors may remove tobacco products for export without payment of tax. TTB officials also stated that a manufacturer of tobacco products or an export warehouse proprietor is relieved of the liability for tax on tobacco products upon providing evidence satisfactory to TTB of exportation or proper delivery, including delivery to a customs bonded warehouse operating as a duty-free facility. TTB may audit TTB permit holders to confirm such deliveries or exports.

Footnote: See Prevent All Cigarette Trafficking Act of 2009, Pub. L. No. 111-154, 124 Stat. 1087 (2010) and Contraband Cigarette Trafficking Act, Pub. L. No. 95-575, 92 Stat. 2463 (1978) (codified as amended at 18 U.S.C. §§ 2341-2346). The Contraband Cigarette Trafficking Act makes it unlawful (a felony) for any person to ship, transport, receive, possess, sell, distribute, or purchase contraband cigarettes. Contraband cigarettes are cigarettes in a quantity of more than 10,000 sticks (currently, 50 cartons) that bear no evidence of applicable state or local cigarette tax payment in the state or locality in which the cigarettes are found, if such state or local government requires a stamp or other indicia to be placed on the packages or other containers of cigarettes to evidence payment of cigarette taxes and which are in the possession of any person other than specified persons, including permit holders under the Internal Revenue Code, common carriers transporting cigarettes with proper bills of lading, or individuals licensed by the state where the cigarettes are found.

Purchasers and Exporters of Duty-Free Cigarettes at the Southwest Border Are Subject to U.S. and Mexican Requirements; Agencies Identified Schemes and Practices That Facilitate Illicit Trade

Duty-free stores may sell tax-exempt cigarettes in any quantity to passengers departing the United States for Mexico at a port on the land border; agencies have identified schemes and practices associated with duty-free sales that are used to evade U.S. and Mexican taxes. U.S. regulations require duty-free stores to have procedures to provide reasonable assurance that duty-free merchandise sold will be exported and require the exporter to report information on the export of commercial cargo, which CBP considers to be transactions valued at more than $2,500. Census data indicate that about 18,500 such transactions occurred from 2010 through 2015 at duty-free stores on the southwest border. According to information from CBP and a Mexican customs official, Mexican requirements dictate that, depending on place of residence, some adult travelers to Mexico can bring in one carton of cigarettes tax-exempt, and some residents can bring in an additional three cartons if they pay taxes on them.
Bringing in any quantity above four cartons would require an individual to register as an importer with the Mexican government, according to the same Mexican official. U.S. agencies identified three schemes used to evade cigarette-related taxes and other legal requirements in the United States and Mexico: (1) diverting cigarettes from the store directly into U.S. commerce; (2) smuggling duty-free cigarettes into Mexico through U.S. ports of entry by concealing them, while potentially also bribing Mexican customs officials to evade payment of Mexican taxes; and (3) smuggling duty-free cigarettes back into the United States after first smuggling them into Mexico.

U.S. Regulations Require Duty-Free Stores on the Southwest Border to Provide Reasonable Assurance of Export of Cigarettes and Report Transactions Over $2,500

Duty-Free Stores Sell Tax-Exempt Cigarettes to Passengers Departing the United States

Cigarettes manufactured in the United States and labeled for export may be shipped, without payment of federal or state tax, to duty-free stores for export and consumption beyond the jurisdiction of U.S. internal revenue laws. In addition to U.S.-manufactured cigarettes, duty-free stores can sell cigarettes imported from overseas. Duty-free cigarettes, which are cigarettes labeled for export, are considered to be in violation of U.S. law if sold for domestic consumption in the United States. According to CBP officials, in the duty-free retail environment, the individual purchasing the merchandise is the exporter. Cigarettes sold at duty-free stores are generally distributed to duty-free retail outlets from warehouses maintained by the duty-free operator. Figure 2 outlines potential steps in the lawful export of duty-free cigarettes, according to U.S. and Mexican agency officials.
Tobacco products manufactured in the United States and labeled for exportation may not be sold or held for sale for domestic consumption in the United States unless such articles are removed from their export packaging and repackaged by the original manufacturer into new packaging that does not contain an export label. 26 U.S.C. § 5754(a)(1)(C). TTB regulates export warehouses.

Duty-Free Stores at the Southwest Border Must Provide Reasonable Assurance of Export of Cigarettes to Mexico

CBP requires duty-free stores to have procedures designed to provide reasonable assurance that duty-free merchandise is exported. For duty-free stores along the southwest border, such procedures are designed to ensure export by pedestrians and passengers in vehicles crossing into Mexico. The four operating procedures for duty-free stores that we reviewed require that the stores ensure that individuals and their merchandise depart the United States for Mexico under escort or observation. Figure 3 shows the procedures at a duty-free store in Laredo, Texas, that is located at the border. This duty-free store sells cigarettes from a drive-through window; at the time of purchase, a store employee puts a numbered red cone on the roof of the vehicle. A private security guard employed by the duty-free store removes the red cone at the border crossing to verify that the vehicle exits the United States. In other ports, duty-free stores may be located farther from the U.S. border crossing, and the procedures designed to assure export of duty-free goods could entail having a store employee in a van or other vehicle escort purchasers to the crossing.
According to the procedures of one duty-free store we visited, refusal by a pedestrian customer to cross into Mexico should typically result in that customer returning to the store and being given a refund for the duty-free goods purchased; if the customer refuses to return to the store for a refund and does not cross into Mexico, that individual is not allowed to purchase in the facility again. In the case of a customer in a vehicle, the store should notify CBP, and that customer should not be allowed to purchase in that facility again.

U.S. laws and a customs regulation stipulate that duty-free stores shall establish procedures to provide reasonable assurance that duty-free merchandise sold by the store is exported. 19 U.S.C. § 1555 and 19 C.F.R. § 19.36. Customs regulations further specify conditions for delivery of such items at land border locations, meaning an exit point from which individuals depart to a contiguous country by vehicle or on foot by bridge, tunnel, highway, walkway, or by ferry across a boundary lake or river. 19 C.F.R. § 19.39.

Duty-Free Cigarette Transactions Valued at More Than $2,500 Are Subject to Reporting; Operators of Duty-Free Stores Reported About 18,500 Such Transactions in 2010–2015

For every duty-free store transaction in which the value of the goods is more than $2,500, the Foreign Trade Regulations generally require that the U.S. principal party in interest (USPPI) or its agent file electronic export information through AES. (In this report, we use "exporter" to refer to the USPPI.) According to CBP officials, this requirement extends to purchases of duty-free cigarettes. The export information includes 28 mandatory data elements, such as the value, quantity, name of exporter, name of the person receiving the shipment, and method of transportation.
AES data from Census showed a total of 18,504 such transactions from 2010 through 2015 from duty-free stores on the southwest border, with almost 70 percent exported from Texas (see fig. 4 and table 1). The number of duty-free cigarette transactions valued at over $2,500 peaked in 2012 at 4,685 and declined to a level about 45 percent lower in 2014 and 2015. According to CBP officials in Laredo and the San Diego area, while it is not possible to determine the exact cause, duty-free stores may have reported greater numbers of these transactions in 2012 due to an increase in enforcement actions at those ports that encouraged greater compliance with export data filing requirements. These officials said stores may have reported fewer transactions valued at over $2,500 in subsequent years due to CBP's continued enforcement actions. CBP officials said that they have no way of systematically knowing the full scale of exports that occur through transactions valued at under $2,500; those transactions are not captured in data that are required to be reported to the U.S. government.

Mexican Customs Regulations Limit the Amount of Cigarettes Individuals Crossing the Southwest Border May Bring with Them

According to information provided by officials from CBP and the Mexican customs agency, Mexican residents above the age of 18 are allowed to bring up to four cartons of cigarettes into Mexico, depending on where they live. Specifically, officials from CBP and the Mexican customs agency provided the following details:

The Mexican customs agency allows each adult who is a resident of the interior of Mexico (not living in towns adjacent to the border) crossing from the United States to bring up to four cartons of cigarettes into Mexico; the first would be exempt from Mexican taxes, and the remaining three would be taxed at a 573-percent rate.
Mexican border-zone residents, defined as those who live in towns along the U.S.-Mexico border such as Ciudad Juarez and Tijuana, are subject to different rules and are not permitted to bring cigarettes into Mexico.

A Mexican customs official said that bringing in any quantity of cigarettes above these amounts would require an individual to register as an importer with the Mexican government, including both the customs agency and health ministry, and obtain a health authorization in advance of the importation. This official also said that commercial cigarettes are charged a 67-percent import duty, a 16-percent value-added tax, and other special duties or taxes that may be applicable.

U.S. Agencies Identified Three Schemes and Related Purchasing Practices by Which Duty-Free Cigarette Traffickers Evade Taxes in the United States and Mexico

For purposes of this report, we use "divert" and "diversion" to refer to the unlawful introduction of duty-free cigarettes into U.S. commerce using a scheme that does not involve the crossing of the southwest border. We use "smuggle" and "smuggling" to refer to the surreptitious exporting or importing of duty-free cigarettes that involves the crossing of an international border.

Diverting Duty-Free Cigarettes Directly into the United States

In this scheme, cigarettes are purchased from a land border duty-free store and diverted into the United States without payment of applicable taxes. According to ICE officials, individuals diverting cigarettes use methods that include bribing a duty-free store official to allow a vehicle to stay in the United States, without informing CBP, instead of observing its crossing into Mexico. CBP and ICE officials also reported instances of individuals loading cigarettes into a car after the duty-free store had closed.
CBP officers in the San Diego area also identified the following deceptive practices in the course of a 2010 operation, some of which were carried out with the complicity of store employees who took actions such as escorting vehicles using unapproved exit routes, allowing purchasers of large quantities to leave the store unescorted, and assisting purchasers in their efforts to conceal goods in the door panels and engine compartments of their vehicles.

In April 2013, ICE received information of a pending large purchase of cigarettes from a duty-free store in Nogales, Arizona. ICE agents were surveilling the store when they observed an individual loading cigarettes into a van and leaving without an escort from the store. The van made a U-turn just before reaching the crossing into Mexico. ICE seized 840 cartons of cigarettes purchased from the duty-free store after pursuing the van, in which the purchaser drove north away from the border into the United States instead of traveling across the border into Mexico.

Smuggling Duty-Free Cigarettes into Mexico across the Land Border, Contrary to That Country's Laws

In this scheme, according to CBP and ICE officials, individuals legally purchase cigarettes from duty-free stores in the United States and smuggle them into Mexico through U.S. ports of entry by concealing these goods in their vehicles or on their person. The individuals may attempt to bribe Mexican customs officials to evade payment of Mexican taxes, according to CBP and ICE officials. CBP and ICE officials reported that they observed individuals in the parking lots of duty-free stores near the port of San Diego loading cigarettes into concealed compartments in personal vehicles to smuggle them into Mexico. An ICE officer in California told us that smugglers had posted Internet advertisements to recruit runners to move cigarettes across the border from the United States.
ICE officials provided data that they obtained from the government of Mexico on cigarette seizures its officials conducted from 2012 through 2015 at numerous locations along the border, including entry points in Mexico directly opposite El Paso, Texas, and San Diego, California, as well as in other parts of Mexico. The data indicate that the Mexican government seized 1.2 million cartons of cigarettes in 2012; the number of cartons seized steadily decreased to about 320,000 cartons in 2015. At least one of the brands among those seized is associated with the operator of multiple duty-free stores on the southwest border. (See fig. 6 for photographs of duty-free cigarettes concealed in vehicles and discovered by Mexican customs officials.)

CBP officials in Laredo told us that they had conducted joint operations with Mexican officials at the passenger crossings but that countersurveillance by smugglers often limited their effectiveness. Typically, a short time after initiating an operation, they would observe that smugglers had ceased activities temporarily and that every vehicle CBP officers examined contained only one or two cartons of cigarettes, an amount that, according to the CBP officials, complies with Mexican import restrictions.

Smuggling Duty-Free Cigarettes Back into the United States after First Smuggling Them into Mexico

In this scheme, duty-free cigarettes that are smuggled into Mexico are brought back across the border and introduced into U.S. commerce without declaring the goods to CBP upon reentry, thus avoiding relevant U.S. taxes. Smugglers might bring these goods back into the United States in small amounts, to avoid detection, and take them to rented storage facilities along the border, according to CBP officials at the port of San Diego. The smuggled cigarettes are bundled into larger quantities and subsequently transported for sale at locations in the interior of the United States.
During our fieldwork at the port of San Ysidro, CBP officials identified warehouses where such cigarettes had been stored in the past.

Traffickers Can Facilitate Diversion and Smuggling by Avoiding the $2,500 Threshold for Reporting Transactions or by Moving to Another Port

According to agency officials, traffickers engaged in diversion and smuggling schemes minimize their visibility to the U.S. government by dividing a large purchase of duty-free cigarettes into smaller ones to avoid the AES reporting threshold of $2,500. Such structured transactions can be carried out by individual buyers or by multiple individuals making purchases on behalf of the holder of an account at a duty-free store. As part of a 2012 enforcement operation, CBP officials reviewed receipts for cigarette sales from three duty-free stores in San Ysidro and identified six people who made multiple purchases during the same day at one of the stores. One of these six individuals made 14 consecutive purchases of cigarettes valued at $200 each and then a final purchase of $100, for a total of $2,900, which, as a single transaction, would have exceeded the $2,500 threshold for reporting such exports. In addition, CBP officials in Laredo described a 2010 scenario in which U.S. citizens moved $100,000 worth of tobacco products into Mexico over the course of a month by making repeated crossings on foot with under $2,500 in merchandise each time, so that no reporting was required. Further, CBP officials at the port of San Diego said that, following a series of CBP operations related to duty-free stores from 2010 through 2012, they reviewed the stores' sales records and noticed a decrease in high-value sales. An ICE official said, however, that cigarette smuggling operations may have moved eastward in response to CBP operations in California.
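The same-day structuring pattern CBP identified (for example, 14 purchases of $200 followed by one of $100, a $2,900 total that stayed under the $2,500 AES threshold on every individual receipt) can be flagged by aggregating receipts per buyer per day. This is an illustrative sketch, not CBP's actual review method; the record layout is an assumption.

```python
from collections import defaultdict

REPORTING_THRESHOLD = 2500  # AES filing is required above this value

def flag_structuring(receipts):
    """Sum same-day purchases per buyer and flag (buyer, date) pairs
    whose total exceeds the AES reporting threshold.
    Hypothetical receipt layout: (buyer_id, date, amount)."""
    totals = defaultdict(float)
    for buyer, date, amount in receipts:
        totals[(buyer, date)] += amount
    return [key for key, total in totals.items()
            if total > REPORTING_THRESHOLD]

# The scenario reported by CBP: 14 purchases of $200 plus one of $100
receipts = [("buyer-1", "2012-05-01", 200.0)] * 14
receipts.append(("buyer-1", "2012-05-01", 100.0))
flagged = flag_structuring(receipts)  # → [("buyer-1", "2012-05-01")]
```

The same aggregation could be run per account rather than per buyer to catch the multiple-purchaser variant described above.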
In addition, according to these CBP officials, a 2010 operation discovered multiple store operators maintaining two sets of accounts to link upfront cash outlays to multiple purchases.

Smuggling of Duty-Free Cigarettes across the Southwest Border Is Reportedly Linked to Organized Crime and Supplies the Illicit Tobacco Market in Mexico; U.S. Efforts to Counter This Illicit Activity Face Challenges

U.S. agency officials said that some smuggling of duty-free cigarettes across the southwest border has links to organized crime, supplies the illicit tobacco market in Mexico, and poses oversight challenges. ICE officials told us that transnational criminal organizations use smuggled, duty-free cigarettes to launder money and generate revenue. Furthermore, a Mexican customs official noted that relatively inexpensive cigarettes manufactured in the United States, which cannot legally be sold in the United States or in Mexico, are routinely sold for export from duty-free stores on the southwest border; such cigarettes are then smuggled across to supply Mexico's illicit tobacco market. One brand of such cigarettes has been cited in recent studies as a significant part of the illicit tobacco trade in Mexico. U.S. officials reported that their efforts to counter the illicit movement of duty-free cigarettes face challenges related to the purchaser's ability to buy duty-free cigarettes in unlimited quantities and to use passenger, not commercial, crossings from the United States into Mexico. According to U.S. officials, while U.S. agencies do not have the authority to seize exports that violate Mexico's laws related to these cigarettes, U.S. officials reported working with Mexican officials on activities to enforce the customs laws and regulations of both countries.

Criminal Organizations Reportedly Use Duty-Free Cigarettes to Launder Money and Generate Revenue

The term "black market" refers to trade in goods or commodities in violation of laws and regulations.
method of generating funds. In addition to U.S.-manufactured cigarettes, foreign cigarettes are also smuggled into Mexico. According to ICE officials, transnational criminal organizations launder money by depositing illicit funds into client accounts at duty-free stores along the southwest border. They then make withdrawals from these accounts, just as they would from a bank account, to purchase duty-free tobacco and alcohol. According to ICE officials, transnational criminal organizations purchase in quantities such that some duty-free stores give them substantial discounts on the stores' in-house cigarette brands. Subsequently, these goods are smuggled either by concealment or through bribery of Mexican customs officials, according to ICE officials. According to an official from the Mexican customs agency, some drug cartels add their own product identification codes onto packs of cigarettes from duty-free stores for sale in areas that they control.

ICE defines trade-based money laundering as the use of trade to legitimize, conceal, transfer, and convert large quantities of illicit cash into less conspicuous assets. ICE officials in San Diego explained that, in Southern California, criminals use other commodities more frequently than cigarettes for trade-based money laundering.

Certain Duty-Free Cigarettes from the United States Comprise a Large Share of the Illicit Mexican Market

According to a public health warning issued by a federal commission of the Mexico health secretariat, this particular brand of U.S.-made cigarettes for duty-free sale is among those cigarettes "which can be counterfeit, adulterated, and even made with unknown ingredients, increasing the possibility that they contain potentially toxic non-tobacco chemicals."

the illicitly trafficked cigarettes that the Mexican government confiscated at various locations in the country from 2012 through 2015.
In addition, in 2013, the Mexican customs agency executed a number of seizures of this brand of duty-free cigarettes that were undeclared at ports of entry on the U.S.-Mexico border (see fig. 7). This brand of cigarettes has been cited in recent studies as a significant part of the illicit tobacco trade in Mexico. ICE officials provided a November 2015 report issued by the National Cyber-Forensics & Training Alliance, a public-private partnership, which stated that this U.S.-made brand of cigarettes was recognized as the largest illegal brand being sold in Mexico. The report also stated that this brand of cigarettes was being diverted into Mexico through various duty-free stores in Laredo, Texas, and San Diego, California. Another study reported that, as of June 2014, 64 percent of the inflow of tobacco into Mexico from the United States consisted of this brand of cigarettes manufactured and trademarked in the United States and sold at duty-free stores on the southwest border. The study also noted that this brand of cigarettes accounted for about 13 percent of the overall illicit cigarette market in Mexico.

National Cyber-Forensics & Training Alliance (NCFTA), Southern Border Illicit Tobacco Activity (Pittsburgh, Penn.: November 2015). The NCFTA is funded by private sector entities, including tobacco firms. ICE has a partnership agreement with the NCFTA and assigns agents there through the National Intellectual Property Rights Coordination Center that it leads.

Mexico and that they did not have an obligation to know since the company is not the exporter of the cigarettes.

U.S. Officials' Efforts to Counter Illicit Trade in Duty-Free Cigarettes Face Challenges

Agencies Cite the Ability to Buy Unlimited Quantities of Duty-Free Cigarettes at the U.S. Land Border as a Factor That Facilitates Smuggling

CBP and ICE officials in Laredo said that the ability to buy unlimited quantities of duty-free cigarettes at the land border facilitates the clandestine smuggling of large shipments of these goods into Mexico. CBP officials acknowledge that duty-free stores on the southwest border are functioning as wholesale suppliers of cigarettes. During congressional consideration of duty-free store legislation, a senator raised the issue of the potential for economic harm to communities adjacent to the U.S.-Mexico border if a provision precluding duty-free sales in wholesale quantities were applied to land border stores. Congress later enacted the Omnibus Foreign Trade and Competitiveness Act of 1988, which required duty-free stores located in airports to restrict the sale of duty-free merchandise to any one individual to "personal use quantities," a requirement that does not apply to land border stores. According to CBP officers in San Diego, duty-free store representatives told them in 2010 that the stores at the port of San Ysidro were some of the most profitable in the country and that merchandise sold in wholesale quantities was an important part of their business.

Use of Passenger Crossings to Export Large Quantities of Duty-Free Cigarettes Creates Oversight Challenges; CBP Officials Said They Are Reviewing Proposed Options at One Port

U.S. officials said that the ability to use passenger crossings to export wholesale quantities of duty-free cigarettes enables these goods to enter Mexico with less scrutiny and oversight than if they were processed through a commercial crossing. U.S. ports on the land border may have multiple crossings, some designated for passenger traffic and others for commercial traffic.
CBP officials said that duty-free cigarettes are treated as noncommercial goods that exit via passenger crossings and, therefore, are not subject to the same requirements and potential for CBP oversight as commercial exports. Requirements for commercial cargo leaving the United States include submission of electronic information to CBP in advance of departure. CBP and ICE officials in Laredo noted that CBP does not define what constitutes a commercial export, enabling the use of passenger crossings by purchasers of "commercial-type" quantities. CBP officials in Laredo and San Diego said that individuals purchasing large quantities of duty-free cigarettes would likely be less able to evade Mexican taxes if the goods were to exit from a commercial crossing. Officials said that CBP-enforced regulations also do not provide a definition of what would constitute a commercial quantity and that the agency has not adopted its own definition or guidelines, in part because commercial transactions can have different quantities and varying price points. CBP officials said that they view commercial exports to be merchandise for business resale or for profit, rather than for individual use, such as for personal or household consumption.

In the San Diego area, which has one of the highest concentrations of duty-free stores among ports on the southwest border and has multiple crossings into Mexico, CBP took steps to address the challenge of large quantities of duty-free cigarettes moving through passenger crossings. In 2010, CBP in San Diego prepared a draft notice for members of the area trade community, including duty-free stores, announcing that the Port Director had decided more controls were necessary to ensure the export of duty-free merchandise purchased for resale.
The draft notice identified four scenarios that would meet the definition of a commercial purchase and identified appropriate exit procedures for any commercial purchases, to include exit from a commercial (or cargo) export facility instead of from the passenger crossing. In July 2017, CBP officials indicated that no change in exit procedures for duty-free tobacco products had taken place; previously, they had stated that CBP had not issued the notice because it was still undergoing review. Officials at CBP headquarters in Washington, D.C., informed us that the agency was planning to engage with port officials in San Diego to plan appropriate next steps in assessing the type of crossing through which duty-free cigarettes should be exiting.

U.S. Agency Officials Report That They Are Not Authorized to Seize Exports That Violate Mexico's Laws but Have Taken Steps to Work with Mexico on Enforcement

CBP officials said the agency does not have the authority to seize goods that are being smuggled into Mexico contrary to that country's laws. Officials at CBP headquarters said that enforcing Mexican laws is not the responsibility of U.S. agencies, but officials at two different ports of entry also described efforts to work with Mexican counterparts on activities related to enforcing the customs laws and regulations of both countries. In addition, CBP in Laredo instructed duty-free store operators to discourage customers from concealing duty-free items by including procedures about this in their employee manuals. We reviewed the procedures manual for one of these operators and found that it directed employees to inform customers that they were not allowed to hide or conceal duty-free items. CBP and ICE officials told us they are able to take some actions in concert with their Mexican counterparts related to coordination and information sharing at both the border and headquarters levels.
Specifically, CBP officials in Laredo told us that they conduct joint enforcement operations with Mexican officials to inspect passenger vehicles as they exit the United States and enter Mexico. ICE and CBP officials in Laredo also said that the issue of cigarette smuggling has been raised at bilateral security cooperation meetings that are routinely held with Mexican customs and law enforcement counterparts. Additionally, according to officials there, ICE's National Intellectual Property Rights Coordination Center, under terms of the U.S.-Mexico Customs Mutual Assistance Agreement, has obtained information from the Mexican customs agency on that country's seizures of cigarettes nationwide to advance related investigations in the United States. An ICE official said that the agency has also worked concurrently with its counterparts in Mexico to advance an investigation related to the smuggling of cigarettes from U.S. bonded warehouses that were destined for duty-free stores but were being smuggled directly into Mexico and possibly diverted back into the United States. According to the ICE official, ICE has continued to keep Mexico abreast of developments through its attaché in Mexico City.

Selected Export Data Reported by Duty-Free Stores Show Irregularities, Which CBP Has Taken Some Steps to Address

Multiple Duty-Free Stores Are Filing Some Noncompliant Information on Cigarette Exports

According to CBP, in many cases duty-free stores on the southwest border are filing some noncompliant information that they are required to report on cigarette exports valued at more than $2,500. Our analysis of export data from Census also showed that many transactions include some noncompliant information.
Specifically, we identified the following three compliance issues:

According to CBP, in most instances, the duty-free store should identify the purchaser of the cigarettes as the exporter and, subsequently, report the purchaser's name and also provide a unique numerical identifier for the purchaser, such as a passport or border crossing card number. In our analysis of reported data for 18,504 transactions involving cigarettes at duty-free stores on the southwest border from 2010 through 2015, we found that 99 percent of these transactions indicate that the duty-free store operator sold the merchandise to an individual purchaser but identified itself as the exporter through use of its Internal Revenue Service employer identification number (EIN). According to CBP officials, these transactions pose potential compliance concerns.

Duty-free stores on the southwest border owned by one operator commonly used the operator's postdeparture filing privilege for cigarette transactions while also reporting them as routed export transactions. However, the Foreign Trade Regulations specify that postdeparture filings cannot be made for routed export transactions. This duty-free store operator incorrectly used its postdeparture filing privilege and marked transactions as routed exports in 16,384 of the 16,387 transactions it reported during 2010 through 2015.

In response to our inquiries, CBP reviewed AES filings for this duty-free store operator and found additional compliance concerns related to filings showing Otay Mesa, California, as the port of exit. Specifically, according to CBP, the duty-free store operator was filing information indicating that the cigarettes were leaving the country through the port of Otay Mesa, although CBP officials had previously observed the sales leaving through the port of San Ysidro, California.
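Two of the compliance issues described above lend themselves to simple record-level checks: a store EIN in the exporter field of a sale to an individual purchaser, and a filing marked both postdeparture and routed. The sketch below uses hypothetical field names for AES filing records; it is not CBP's or Census's actual validation logic.

```python
def compliance_issues(record):
    """Return compliance flags for one AES filing record, based on the
    issues described in this report. Field names are hypothetical."""
    issues = []
    # An individual purchaser should be identified as the exporter with a
    # personal identifier (e.g., passport number), not the store's EIN.
    if record.get("exporter_id_type") == "EIN" and record.get("sold_to_individual"):
        issues.append("store EIN used where purchaser is the exporter")
    # Postdeparture filings cannot be made for routed export transactions.
    if record.get("postdeparture") and record.get("routed_export"):
        issues.append("postdeparture filing marked as routed export")
    return issues

record = {
    "exporter_id_type": "EIN",
    "sold_to_individual": True,
    "postdeparture": True,
    "routed_export": True,
}
flags = compliance_issues(record)  # → both flags raised for this filing
```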
CBP Has Provided Information to One Duty-Free Store Operator Reporting Inaccurate Data on Cigarette Transactions but Has Not Taken Steps to Finalize Any Guidance

CBP has acted to address its compliance concerns with one duty-free store operator, but other possible actions remain, including the issuance of final instructions and guidance to all operators on the border and the public. According to CBP, one of the ways it fosters adherence to rules and regulations in the trade community is through "informed compliance," the idea that, in order to maximize voluntary compliance with trade laws and regulations, the trade community needs to be clearly and completely informed of its legal obligations. We have previously found that information programs are a key part of CBP's informed compliance strategy at both headquarters and the ports. For example, CBP issues directives, handbooks, and a series of "informed compliance publications" that provide guidance on various trade-related matters.

In 2012, CBP informed the duty-free store operator with the largest number of AES transactions we reviewed that its transactions incorrectly identified its stores as the exporter when in fact the purchaser was the exporter. Regulations state that knowingly failing to file or knowingly submitting false or misleading export information through AES is a violation subject to penalties. CBP is authorized to enforce the Foreign Trade Regulations, which include regulations on reporting through AES. With regard to the compliance issue that CBP raised in 2012, CBP did not take action until after April 2014, when a CBP assessment of export transactions found that the problem with the operator's cigarette export filings continued.
In August 2014, CBP issued a penalty to the duty-free store operator, and the operator requested that CBP give it time to arrive at an agreement with the agency and remove the penalty, noting that a change to current practices might have adverse consequences on its business and further emphasizing that its practices had been widely known for years. According to CBP officials, due to the operator’s confusion over correct procedure, the penalty was canceled, and officials decided to take steps to ensure proper filing of AES through informed compliance. In October 2015, CBP provided the operator with interim instructions on how to comply with its requirements under the Foreign Trade Regulations. Those instructions included scenarios illustrating both compliant and noncompliant export data filings for transactions involving cigarettes. CBP officials also told us that a planned meeting with the duty-free store operator to finalize instructions never took place and that CBP never provided final instructions to that operator. According to CBP officials, this duty-free store operator continues to identify itself as the exporter and to use its postdeparture filing privilege. CBP officials said that duty-free stores assert that they are working to be compliant, but it is challenging for them in part because the cigarette purchasers are often unaware of their role and do not have accounts established to file the electronic export data. Additionally, one CBP official said that purchasers may be reluctant to provide a verifiable numeric identifier, such as a passport number or border crossing card, if they are involved in smuggling operations. Furthermore, CBP and ICE officials said that employees working at land border duty-free stores may not be fully trained and aware of proper filing procedures. 
In response to our inquiries, Census re-sent the 2015 interim instructions to the duty-free store operator in March 2017, after confirming that the operator was still using its postdeparture privilege when it should not. CBP officials indicated in July 2017 that they plan to conduct outreach to duty-free stores on the southwest border and provide guidance to the ports there to ensure proper data submission and appropriate use of postdeparture filing. CBP headquarters officials informed us that they had recently held initial discussions on this topic with agency officials in Laredo, but they had not issued any further information to the duty-free store operators and to the public; they said further discussions were planned. CBP officials did not identify instances of providing similar information to, or having discussions with, the other duty-free store operators. The Foreign Trade Regulations state that the filer of export information in AES is responsible for transmitting accurate data as known at the time of filing. An ICE official said that properly completed export data with purchasers’ verifiable identification numbers would allow ICE to corroborate that information against other databases, such as the Automated Targeting System (ATS), during an investigation. ATS compares traveler, cargo, and conveyance information against law enforcement, intelligence, and other enforcement data to assess risk. In addition, ICE sought data from Mexico, such as names and dates of birth of individuals arrested in connection with cigarette seizures in that country, to keep that information on file in the event the individuals were associated with cases in the United States. Agency officials said that verifiable identification information, such as the type that is collected in AES filings, would further help ICE corroborate and identify individuals participating in the illicit trade of duty-free cigarettes. 
CBP officials said that accurate data on the identity of the exporter would benefit law enforcement and intelligence operations. Without accurate data, including correct and complete information on the exporter, agencies may lack the information they need to enhance their enforcement and intelligence efforts related to the illicit trade of duty-free cigarettes on the southwest border.

Conclusions

Unlike duty-free stores at U.S. airports, duty-free stores associated with U.S. land borders may sell tax-exempt cigarettes in any quantity. Since Congress legislatively adopted this policy in 1988, changes on both the U.S. and Mexican sides of the southwest border have affected this trade. Agencies have cited a number of schemes used by individuals to divert these products into Mexico and into U.S. commerce, despite efforts by CBP to enforce relevant regulations and procedures. Agencies have noted that, as smuggling has become potentially more lucrative, an existing linkage may grow stronger between cigarette smuggling and organized crime on the southwest border, where they believe that criminal organizations have created distribution networks to illicitly move cigarettes in both countries. CBP officials also state that the agency does not have the authority to seize goods that are being smuggled into Mexico contrary to that country's laws. CBP has made efforts to utilize available data collected on transactions valued at over $2,500 to evaluate duty-free store compliance with regulations. However, despite various outreach and enforcement efforts, agency officials said that inaccurate filings by one large operator—comprising nearly 89 percent of the transactions we reviewed—continue, and other store operators are still potentially out of compliance. Until steps are taken to ensure that duty-free store operators and exporters fully comply with reporting requirements, U.S.
agencies will lack the accurate, complete information that can help them conduct their enforcement and intelligence efforts.

Recommendation for Agency Action

The Commissioner of the U.S. Customs and Border Protection should take steps to strengthen compliance with export reporting requirements related to duty-free cigarette sales on the southwest border, such as issuing guidance to all duty-free store operators. (Recommendation 1)

Agency Comments

We provided a draft of this product to Commerce, DHS, DOJ, and Treasury for comment. DHS provided substantive comments that are reproduced in appendix III. Commerce and DHS also provided technical comments, which we incorporated as appropriate. DOJ and the Treasury provided no comments. In its comments on our draft report, DHS concurred with our recommendation. DHS stated that CBP's Office of Field Operations will issue guidance and engage field personnel to strengthen compliance with export requirements. In addition, DHS stated that ports would be instructed to provide guidance to all duty-free store operators on correct filing procedures for electronic export information (EEI), including use of the correct port of export and identifying the party responsible for filing the EEI. DHS gave an estimated completion date for these actions of October 31, 2017. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Commerce, the Secretary of Homeland Security, the Attorney General, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3149 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) requirements that govern the lawful sale and export of cigarettes from duty-free stores on the southwest border and schemes for illicit trade in such cigarettes that agencies identified, (2) U.S. agency observations about these duty-free cigarette exports and efforts to counter illicit trade, and (3) the extent to which selected cigarette transaction data submitted by duty-free stores indicate compliance issues with reporting requirements. To obtain background information on duty-free stores, we reviewed documents related to the legislative history of duty-free stores, including those from the Congressional Record and U.S. laws and customs regulations. To describe relevant agency roles related to duty-free cigarette exports, we reviewed documents from the agencies and utilized information from interviews with their officials. To address the first two objectives, we collected and analyzed information through several methods. We reviewed relevant federal laws and regulations. We also interviewed officials from the Department of Commerce's U.S. Census Bureau (Census); the Department of Homeland Security's U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE); the Department of Justice's Bureau of Alcohol, Tobacco, Firearms, and Explosives; the Department of the Treasury's Alcohol and Tobacco Tax and Trade Bureau; and tax-collection officials from the state of California. We also interviewed representatives from private sector tobacco and duty-free firms. We conducted field work in California in the areas around San Diego, including Otay Mesa and San Ysidro, and Los Angeles.
We selected these locations based on the presence of duty-free stores or reports of cigarettes being diverted from duty-free stores into the United States, supplemented by insights from agency officials. We also used information gathered from field work in Laredo, Texas, that we conducted under a related review. We spoke with U.S. agency officials in Nogales, Arizona, and in the Washington, D.C., area. Lastly, we spoke with and obtained data from an official from the Mexican customs agency, the Tax and Customs Administration Service. To describe how cigarettes are sold and exported from duty-free stores on the southwest border, we also reviewed relevant U.S. laws and customs regulations and collected information from U.S. and Mexican officials on allowances and requirements for duty-free cigarettes imported into Mexico. In addition, to describe the views that agency officials have expressed with regard to cigarette exports from duty-free stores on the southwest border, we reviewed CBP documents that described operating procedures at the ports of Laredo, Texas, and San Diego, California; a draft port information notice from the port of San Diego; and reports from the private sector and a public-private partnership, the National Cyber-Forensics & Training Alliance, on the illicit tobacco market in Mexico. We also analyzed data on seizures from the Mexican Tax and Customs Administration Service and information from interviews with officials from CBP, ICE, and the Mexican government. We also analyzed Automated Export System (AES) data from Census for 2010 through 2015 on recorded transactions at the duty-free stores CBP identified as being adjacent to the U.S.-Mexico border, also referred to as the southwest border, spanning Texas, New Mexico, Arizona, and California.
We determined that value and quantity data for those transactions were not reliable for the purposes of this report; we based our assessment on a review of related documentation and on interviews with Census officials about the agency's procedures to ensure the quality of the data and with CBP officials to discuss relevant aspects of how transaction data might be entered in AES. According to Census officials, it is not possible to identify from AES whether or not an export came from a duty-free store, as such information is not required when filers submit electronic export information. We used an alternative method to identify the AES data associated with transactions at duty-free stores on the southwest border: We obtained the employer identification numbers (EIN) for those duty-free stores from CBP, which identified 88 duty-free stores on the southwest border; in some cases, stores owned by the same proprietor used the same EIN. We obtained 54 EINs covering the 88 border stores. In one instance, a single EIN applied to 7 duty-free stores. Census provided us with the export transactions recorded in AES that corresponded to the 54 EINs provided by CBP. Census protects the confidential data contained in export transaction records it receives from firms but may disclose the data to other government agencies if the agency determines it is in the national interest to do so. For each transaction record, we requested the data for 24 of the 28 mandatory fields in AES for which exporters must provide information. In addition, we asked Census to filter the information by several fields to include country of destination (Mexico) and the Harmonized Tariff Schedule codes associated with cigarettes. Census identified 19,101 transaction records in response to our request.
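The record-selection and screening steps in this methodology can be sketched as follows. The field names, EIN values, and records below are invented for illustration; the real AES schema differs, and the EIN list came from CBP rather than from the data itself.

```python
# Hypothetical AES transaction records (invented fields and values)
records = [
    {"ein": "11-001", "dest": "MX", "hts": "24022090", "year": 2009, "value": 3000},
    {"ein": "11-001", "dest": "MX", "hts": "24022090", "year": 2012, "value": 5000},
    {"ein": "22-002", "dest": "MX", "hts": "24021000", "year": 2013, "value": 2400},
    {"ein": "33-003", "dest": "CA", "hts": "24022090", "year": 2014, "value": 9000},
]

border_store_eins = {"11-001", "22-002"}  # per the report, 54 EINs covered 88 stores

# Step 1: border-store EINs, Mexico-bound, cigarette tariff codes (HTS 2402)
selected = [r for r in records
            if r["ein"] in border_store_eins
            and r["dest"] == "MX"
            and r["hts"].startswith("2402")]

# Step 2: drop records outside the review parameters
# (entries before 2010, and values of $2,500 or less)
in_scope = [r for r in selected if r["year"] >= 2010 and r["value"] > 2500]
print(len(selected), len(in_scope))
```

With these toy records, step 1 keeps three transactions and step 2 keeps one, mirroring how the 19,101 records Census identified were narrowed to 18,504.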
After removing those records that fell outside of our parameters (e.g., entries from 2009 and entries for which the value was $2,500 or less), 18,504 export transaction records remained. To identify the schemes related to the illicit trade in duty-free cigarettes, we reviewed court documentation from criminal cases at the state and federal levels. We also reviewed Federal Register notices for historical references to cases of smuggling in addition to interviewing officials from the U.S. and Mexican governments. To evaluate the extent to which duty-free cigarette export data presented potential compliance issues with reporting requirements, we reviewed such data from AES and compared select data elements to reporting requirements as stated in the Foreign Trade Regulations. We also reviewed summaries of events that CBP provided relating to a specific penalty issued by the port of Laredo to a duty-free store operator for failure to comply with AES reporting requirements. We examined a document Census provided to us that was submitted to that agency and CBP from the operator’s lawyers as well as the interim document provided to that operator by CBP and Census. We also analyzed a subset of our data concurrently with agency officials to evaluate the compliance of the specific transaction records we received from Census with a requirement in the Foreign Trade Regulations. Additionally, we reviewed documents from the Commercial Customs Operations Advisory Committee to contextualize one of the largest duty-free store operator’s use of its postdeparture filing privilege. We conducted this performance audit from November 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Assessment of Duty-Free Cigarette Export Data Reliability

Background

We analyzed U.S. Census Bureau (Census) transaction-level data from the Automated Export System (AES) on sales of duty-free cigarettes purchased at stores located on the U.S. southwest border from 2010 through 2015. Census collects electronic export information in AES to report trade statistics, including the export of duty-free cigarettes. Stores that are principal parties to a sale of duty-free cigarettes for export generally self-report the transaction through AES. Some duty-free operators integrate point-of-sales systems to AES for automatic entry, and some enter the data manually or into software programs that are approved by Census, according to U.S. Customs and Border Protection (CBP) officials. Self-reported data captured in AES include transaction-related variables such as date of export, port of export, value, quantity, weight, method of transportation, and ultimate consignee. Census then uses the AES data to compile and publish export trade statistics. CBP and Census share responsibility for monitoring compliance with trade law, including the data reporting requirements that duty-free stores must meet. According to CBP officials, CBP officers regularly review duty-free store operators' inventory control and recordkeeping systems during unannounced spot checks and compliance assessments. However, according to these officials, CBP's compliance reviews of inventory control systems do not generally include an examination of how store operators report data in AES. AES is built to include automated electronic checks of stores' AES submissions as the data are entered; these data-entry validation checks produce alerts when required information is invalid or missing.
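Data-entry validation of the kind described, which flags filings with missing or invalid required information, can be sketched as follows. The field names and rules here are hypothetical illustrations, not AES's actual edit checks.

```python
# Hypothetical set of required fields for an export filing
REQUIRED_FIELDS = ["exporter_id", "port_of_export", "destination", "value_usd"]

def validate_filing(filing: dict) -> list:
    """Return a list of alerts for missing or invalid required fields."""
    alerts = []
    for field in REQUIRED_FIELDS:
        if filing.get(field) in ("", None):
            alerts.append(f"missing required field: {field}")
    value = filing.get("value_usd")
    if isinstance(value, (int, float)) and value <= 0:
        alerts.append("invalid value: must be positive")
    return alerts

# A filing missing its port of export and carrying a negative value
print(validate_filing({"exporter_id": "11-001", "destination": "MX", "value_usd": -5}))
```

A filing that supplies all required fields with valid values would produce an empty alert list and pass the entry check.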
Census also sometimes sends staff to meet with companies that have a high rate of submission errors, such as reporting shipments late. If they identify problems with the accuracy of the information that store operators are filing in AES, CBP and Census can take appropriate steps to enforce compliance with the law. CBP is responsible for the enforcement of the Foreign Trade Regulations. When data are incorrectly entered in AES, CBP can take enforcement action, including issuing penalties or seizing merchandise, according to CBP officials. Census can also respond to noncompliant reporting of electronic export information by operators by revoking special privileges granted to some, such as permission to file export information after a shipment has been exported, among other actions. In compiling and analyzing AES data, Census makes corrections to some data that appear erroneous, but CBP officials said that Census does not flag or report the data corrections it makes to CBP. Census officials stated that, while they reach out to some filers to suggest corrective action, the scale of the trade data program and the number of transactions processed every month precludes comprehensive outreach.

Duty-Free Cigarette Data Collected through AES Are Not Reliable for Analysis of Value and Quantity of Exports or Associated Trends

Evaluating Unprocessed, Transaction-Level AES Data on Duty-Free Cigarettes

Our testing found that the unprocessed transaction-level AES data on duty-free cigarettes for 2010 through 2015 are not reliable for use in describing the value and quantity of duty-free cigarettes, and associated trends, exported from the southwest border. For that time period, we received data on 18,504 transactions of duty-free cigarettes that had a reported value of $2,500 or above, in keeping with AES reporting requirements.
To examine the data on value and quantity, we evaluated the reasonableness of the ratio of these variables, the unit price (value divided by quantity), and the consistency and stability of reported prices. We found that many of these transactions' reported unit prices are far below reasonable price levels. For example, 2.3 percent of transactions in these unprocessed data are associated with a unit price of under $4.42 per 1,000 cigarette sticks—the cost of tobacco on commodity markets as of calendar year 2015, which excludes necessary costs of cigarettes such as paper costs and manufacturing costs. However, these transactions with extremely low unit prices account for more than 98 percent of the quantity of trade in duty-free cigarettes as reported in the AES data we obtained. Moreover, 39 percent of the reported transactions (accounting for more than 99.6 percent of the total reported quantity sold) were associated with unit prices lower than what we conservatively estimate to be the price at which duty-free stores could procure cigarettes from manufacturers, as discussed in the section below. We also found high levels of reported price variation in the data, with reported median unit sales prices frequently doubling or halving from year to year, even within the same port location.

Evaluating Census's Data-Processing Methodology and Assumptions

Census is responsible for collecting, compiling, and publishing AES trade data for duty-free cigarettes, and Census officials said that they clean and correct (process) these data by changing value entries to equal a "price adjustment factor" when the unit price of transactions falls outside of an expected range, as explained below. For cigarette exports as of February 2017, including those transactions exempt from taxes and duties, these officials said that this range includes a minimum of $11 per 1,000 cigarette sticks, a price adjustment factor of $40 per 1,000 sticks, and a maximum of $75 per 1,000 sticks.
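The unit-price reasonableness screen can be sketched as follows, using the figures from this appendix: the $4.42 per 1,000 sticks commodity cost of the raw tobacco alone, and Census's expected range of $11 to $75 per 1,000 sticks. The function names and thresholds-as-code are our illustration, not Census's implementation.

```python
def unit_price(value_usd, quantity_sticks):
    """Price per 1,000 cigarette sticks."""
    return value_usd / (quantity_sticks / 1000.0)

def screen(value_usd, quantity_sticks, floor=11.0, ceiling=75.0):
    """Classify a transaction's implied price against the report's benchmarks."""
    price = unit_price(value_usd, quantity_sticks)
    if price < 4.42:
        return "below raw-tobacco commodity cost"
    if price < floor or price > ceiling:
        return "outside Census's expected range"
    return "within expected range"

print(screen(2600, 1_000_000))  # $2.60 per 1,000 sticks
print(screen(4000, 100_000))    # $40.00 per 1,000 sticks
```

The first example, $2,600 for a million sticks, implies a price below the cost of the tobacco itself; the second falls inside the expected range.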
According to these officials, Census sets its price range and adjustment factor by examining the data and identifying outlier levels based on judgment. Census officials stated that they updated this expected price range in February 2017. Census officials stated that price adjustment factors are not updated on a fixed schedule and do not automatically adjust for inflation. Instead, Census may choose to update factors when it believes there have been significant changes in an industry’s trade patterns. According to these officials, prior to February 2017, the price range for cigarettes was last updated in 2007. From 2007 through January 2017, the price adjustment factor for cigarettes was $11.46—about one-fourth of its current value—with a minimum of $8.87 and a maximum of $27.39. Census’s current price range for cigarettes is not appropriate for cleaning data to analyze trends in duty-free cigarette exports because it may significantly underestimate a reasonable expected price range for cigarettes. Approximately 39 percent of the observations in the unprocessed, duty-free cigarette data are associated with sales prices below Census’s minimum price or above Census’s maximum price. We estimated minimum and maximum expected prices for cigarettes that are substantially greater than Census’s current price adjustment factor range for cigarettes. To estimate a minimum expected price for cigarettes, we examined commodity prices, production costs, and revenue data from a large, publicly traded cigarette manufacturer. We found that the manufacturing cost of cigarettes exceeded Census’s estimated minimum sales price by 30 percent, $14.26 per 1,000 cigarette sticks instead of $11. Thus, even if the manufacturer sold its cigarettes directly to a duty-free store, and neither the manufacturer nor the duty-free store made a profit, we would still expect a price greater than Census’s lower bound. 
This expected minimum retail price increases significantly if we account for cigarette manufacturers’ revenue. Using revenue data from the public accounting statements of the same manufacturer, and again conservatively assuming direct sales to a duty-free store that itself sells for no profit, we would expect to see a price of $43.65 per 1,000 cigarette sticks, which is nearly 300 percent greater than Census’s lower bound of $11 per 1,000 cigarette sticks and about 9 percent larger than Census’s current price adjustment factor of $40 per 1,000 sticks. To estimate a maximum expected price for cigarettes, we examined the price of a premium cigarette brand listed for sale on a duty-free store’s website. We found that this price was 163 percent higher than the upper bound in Census’s price range, $197.50 per 1,000 cigarette sticks instead of $75 per 1,000 sticks. For any observed prices in trade data outside of this expected range for a given tariff code, Census officials said that they attempt to correct these observations by adjusting the reported quantity such that the reported price is equal to the price adjustment factor—$40 per 1,000 cigarette sticks. For example, if a reported sale is $80 per 1,000 cigarette sticks, Census will adjust the reported quantity to 2,000 sticks while leaving the reported value unchanged, so the reported price (value divided by quantity) becomes $40 for each unit of 1,000 sticks. Census officials stated that this data cleaning process is sufficient for their use in producing aggregated trade statistics because of the volume of transactions they must review and the ease with which Census analysts can apply this method to clean trade data. Census’s process of correcting missing or outlying data (unreliable data) with its price adjustment factor is not appropriate for our use because it would significantly alter the relationships among subgroups within our data, distorting trends that we otherwise would intend to analyze. 
For example, in a hypothetical dataset where the average sales price is $40 per 1,000 cigarette sticks across exports from the United States, Census's replacement of missing and outlying data using a price adjustment factor of $40 would not change this overall average. But if one state in the data has an average sales price lower than the national average, reflecting lower costs of doing business, replacing that state's missing or outlying data with the same price adjustment factor used for higher-cost states would increase its reported average sales price. The distinction between high-price states and low-price states would thus become less clear. Moreover, we cannot determine the appropriateness of Census's decision to preserve reported value and adjust reported quantity when processing data to manage the relationship between value, quantity, and price. This is because we cannot determine whether the unprocessed value or the unprocessed quantity data are reliable. Applying our minimum expected price for cigarettes, discussed above, excludes many transactions in the unprocessed data, indicating problems with value, quantity, or both. Census officials stated that they believe the value data are more reliable than the quantity data and so change the reported quantity data when processing the data, though they also stated that this is a general assumption without specific insight as to whether or why this method may be valid for cigarettes. While CBP officials stated that high-level postaudit checks can be used to ensure that a store's AES system is working properly, they said that these checks are rare, and the inventory control system compliance review does not otherwise provide assurance that data self-reported into AES are reliable. CBP officials stated that they were not confident about which transaction data in AES were more reliable: value or quantity.
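Census's correction rule, as described in this appendix, can be sketched under the assumption that it amounts to: keep the reported value and reset the quantity so the implied price equals the adjustment factor whenever the implied price falls outside the expected range. The constants are Census's post-February 2017 figures, per 1,000 sticks.

```python
MIN_PRICE, ADJ_FACTOR, MAX_PRICE = 11.0, 40.0, 75.0  # dollars per 1,000 sticks

def census_adjust(value_usd, quantity_sticks):
    """Return the (possibly corrected) quantity; reported value is preserved."""
    price = value_usd / (quantity_sticks / 1000.0)
    if MIN_PRICE <= price <= MAX_PRICE:
        return quantity_sticks                      # in range: leave record alone
    return int(value_usd / ADJ_FACTOR * 1000)       # value kept, quantity reset

# The appendix's example: a reported $80 per 1,000 sticks is reset so the
# implied price becomes $40 per 1,000 sticks
print(census_adjust(80, 1000))   # quantity reset to 2000 sticks
print(census_adjust(40, 1000))   # in range: unchanged
```

As the sketch shows, every out-of-range record is forced to the same implied price, which is why this processing flattens the price variation that a trend analysis would need.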
Because neither agency’s control process provides strong assurance that either the value or the quantity data are reliable for our use, we cannot appropriately use value, quantity, or price as a benchmark to correct the other variables. Evaluating Alternative Data-Processing Techniques Alternative methods for determining appropriate replacement values for outlying data, referred to as imputation, would not make the duty-free cigarette data reliable for our intended use. For example, stochastic regression imputation replaces a missing or excluded variable value within an observation by drawing randomly from within the error distribution of a best-fit model. Correctly specifying such a model allows data processing to occur while preserving the dataset’s overall average values, correlations, and variation. However, identifying the observations that require correction remains a challenge. As discussed above, we can estimate the approximate manufacturer’s sales price for cigarettes. In the absence of additional proprietary data, we are unable to determine a price range that accounts for retail store costs and profit. Without this information, and given that the duty-free cigarette data include significant and questionable variation of reported prices even within our estimated price band, it is not possible to identify which observations require correction or deletion with appropriate levels of confidence. Lacking a clear basis for finding either the value data or quantity data reliable, we also cannot appropriately determine how to manage the relationship between value and quantity if we were to impute replacement price levels for these observations. 
Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Emil Friberg (Assistant Director), Farhanaz Kermalli (Analyst-in-Charge), Giff Howland, David Dayton, Neil Doherty, Andrew Kurtzman, and Grace Lui made key contributions to this report. Pedro Almoguera, Ming Chen, Jill Lacey, and Mary Moutsos provided technical assistance.
Why GAO Did This Study

Since the 1970s, U.S. agencies have recognized that high-volume cigarette sales at duty-free stores near the U.S.–Mexico land border, although lawful, could be related to illicit activity. In 1988, U.S. law limited the quantity of duty-free tobacco products an individual can purchase at stores located in airports, restricting the sale of tobacco products to quantities consistent with personal use. This requirement, however, does not apply to land border duty-free stores. GAO was asked to review information on sales of cigarettes at duty-free stores along the southwest border. CBP identified 88 such stores and warehouses. This report describes (1) requirements that govern the lawful sale and export of cigarettes from duty-free stores on the southwest border and schemes for illicit trade in such cigarettes, (2) U.S. agency observations about these exports and efforts to counter illicit trade, and (3) the extent to which selected cigarette transaction data submitted by duty-free stores indicate compliance issues. GAO analyzed Census data on these exports; reviewed CBP, ICE, and Department of the Treasury documents; and interviewed agency officials in Washington, D.C., and in several ports along the southwest border, including Laredo, Texas, and the San Diego, California, area.

What GAO Found

Duty-free stores at the southwest border may sell tax-exempt cigarettes in any quantity to passengers departing the United States for Mexico; agencies have identified schemes associated with duty-free cigarette sales used to evade U.S. and Mexican taxes. U.S. Customs and Border Protection (CBP), an agency within the Department of Homeland Security (DHS), regulates duty-free stores. U.S. regulations require the stores to have procedures to provide reasonable assurance of export of cigarettes and the exporter to report export information on transactions valued at over $2,500. U.S.
Census Bureau (Census) data show that about 18,500 such transactions involving cigarettes occurred from 2010 to 2015. According to information from U.S. and Mexican officials, the Mexican government limits the amount of duty-free cigarettes that can be brought into Mexico (see figure). U.S. agencies identified three schemes to evade U.S. and Mexican cigarette-related tax and other laws: (1) diversion from a duty-free store into U.S. commerce; (2) smuggling into Mexico through U.S. ports; and (3) smuggling back into the United States after export to Mexico. U.S. agency officials said that some smuggling of duty-free cigarettes across the southwest border has links to organized crime, supplies the illicit tobacco market in Mexico, and poses oversight and enforcement challenges. U.S. Immigration and Customs Enforcement (ICE) officials said they have identified links between the smuggling of large quantities of duty-free cigarettes and transnational criminal organizations that use the smuggled cigarettes to launder money and generate revenue. Inexpensive cigarettes made in the United States are part of the trade in duty-free cigarettes along the southwest border, including brands that a Mexican official stated are prohibited for sale in Mexico. U.S. officials reported that their efforts to counter the illicit trade in duty-free cigarettes face challenges, primarily due to the ability to buy unlimited quantities of duty-free cigarettes at the land border. According to CBP, in many cases, duty-free stores on the southwest border are filing noncompliant information that they are required to report on cigarette exports valued at more than $2,500. For example, officials had compliance concerns with filings in which stores identify themselves, and not the purchaser, as the exporter. CBP and Census have met with representatives of one of the largest operators of duty-free stores on the southwest border to clarify regulatory requirements. 
However, CBP officials said that this duty-free store operator continues to make incorrect filings. CBP has not issued guidance to all operators to clarify the correct procedure. Without accurate export data, agencies may lack the information they need to enhance their enforcement and intelligence efforts.

What GAO Recommends

CBP should take steps to strengthen compliance with export reporting requirements for duty-free cigarette sales on the southwest border, such as issuing guidance to all duty-free store operators. DHS agreed and noted CBP plans to address the recommendation.
gao_GAO-19-167
Background

SNAP is intended to help low-income households obtain a more nutritious diet by providing them with benefits to purchase food from authorized retailers nationwide. SNAP is jointly administered by FNS and the states. FNS pays the full cost of SNAP benefits and shares the costs of administering the program with the states. FNS is responsible for promulgating SNAP program regulations, ensuring that state officials administer the program in compliance with program rules, and authorizing and monitoring stores from which recipients may purchase food. States are responsible for determining applicant eligibility, calculating the amount of their benefits, issuing the benefits on EBT cards—which can be used like debit cards to purchase food from authorized retailers—and investigating possible program violations by recipients. Participation in SNAP has generally increased among recipients and retailers in recent years. Specifically, participation in SNAP increased from about 26 million recipients in fiscal year 2007 to 42 million in fiscal year 2017, leading to a corresponding increase in the amount of SNAP benefits redeemed. The number of stores FNS authorized to participate in SNAP also increased, from about 162,000 nationwide in fiscal year 2007 to more than 250,000 in fiscal year 2017. Although there was particular growth in the number of small grocery and convenience stores, as well as "other" stores (which include independent drug stores, general merchandise stores like dollar stores, and farmers' markets), the majority of SNAP benefits were redeemed at large grocery stores and supermarkets in each year (see fig. 1).

Retailer Trafficking

According to FNS, most SNAP benefits are used for the intended purpose; however, as we have reported in prior work, FNS has faced challenges addressing trafficking—one type of program fraud. In general, trafficking occurs when retailers exchange recipients' SNAP benefits for cash, often taking a fraudulent profit.
For example, a retailer might charge $100 to a recipient’s SNAP EBT card and give the recipient $50 in cash instead of $100 in food. The federal government reimburses the retailer $100, which results in a fraudulent $50 profit to the retailer. While this type of trafficking is a direct exchange of SNAP benefits for money, trafficking also can be done indirectly. For example, a retailer might give a recipient $50 in cash for the use of $100 in benefits on that recipient’s EBT card. The retailer could then use the EBT card to purchase $100 in products at another SNAP retailer (see fig. 2). In this instance, the retailer would profit because they paid $50 for $100 worth of products, and the retailer might also increase their profit by reselling the products at a higher price in their own store.

Retailer Management and Oversight

Among other things, FNS is responsible for authorizing and monitoring retailers who participate in SNAP to ensure program integrity. In order to participate in SNAP, a retailer applies to FNS and demonstrates that they meet program requirements, such as those on the amount and types of food that authorized stores must carry. FNS verifies a retailer’s compliance with these requirements, for example, through an on-site inspection of the store. If the retailer meets requirements, FNS generally authorizes it to participate for a period of 5 years. FNS then monitors retailers’ participation by analyzing data on SNAP transactions and conducting undercover investigations, among other activities. If FNS suspects a retailer is trafficking, it generally must notify the USDA OIG—which is responsible for investigating allegations of fraud and abuse in all of USDA’s programs, including SNAP—before opening a case. The OIG may choose to open its own investigation of the retailer for possible criminal prosecution, or allow FNS to pursue the case. If FNS determines that a retailer has engaged in trafficking, FNS sanctions the store.
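Looping back to the trafficking examples above (see fig. 2), the dollar arithmetic can be sketched in a few lines. This is purely an illustration of the report’s examples; the function names and the resale-markup parameter are invented, not anything FNS uses.

```python
# Illustrative sketch of the trafficking arithmetic from the report's examples.
# Function names and the resale_markup parameter are invented for clarity.

def direct_trafficking_profit(benefits_charged, cash_paid):
    """Retailer charges a recipient's EBT card and pays out cash instead of
    food; the government reimburses the full amount charged."""
    reimbursement = benefits_charged
    return reimbursement - cash_paid

def indirect_trafficking_profit(benefits_used, cash_paid, resale_markup=0.0):
    """Retailer pays cash for the use of a recipient's benefits, buys products
    at another SNAP retailer, and may resell them at a markup in their own
    store."""
    products_value = benefits_used
    return (products_value - cash_paid) + products_value * resale_markup

# Report's direct example: $100 charged, $50 paid in cash -> $50 profit.
print(direct_trafficking_profit(100, 50))    # 50

# Report's indirect example: $50 cash for $100 in benefits -> $50 profit,
# more if the products are later resold at a markup.
print(indirect_trafficking_profit(100, 50))  # 50
```

Either way, the retailer’s gain is the difference between what the government reimburses and what the recipient receives.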
Generally, stores found to have engaged in trafficking are permanently disqualified from SNAP, but in limited circumstances, the owner may instead receive a civil monetary penalty. Retailers who do not agree with the sanction assessed by FNS can file a written request to have FNS’s Administrative Review Branch review the decision, and, if not satisfied, file a complaint in the appropriate U.S. District Court. In 2013, FNS consolidated its retailer management functions, including those for authorizing stores and analyzing SNAP transaction data, into a single national structure known as the Retailer Operations Division (see fig. 3).

Estimating Retailer Trafficking

Since 1995, FNS has published periodic reports estimating the extent of trafficking in SNAP as part of its efforts to monitor program integrity. These trafficking estimates are the most commonly cited measure of SNAP fraud, including in the news media and congressional testimony. FNS estimates retailer trafficking by adjusting a sample of stores known or suspected of trafficking to reflect the total population of SNAP-authorized stores. For each report, FNS uses 3 years of data on stores and SNAP transactions to estimate the amount and percentage of benefits that were trafficked and the percentage of stores engaged in trafficking (see fig. 4). For example, the most recent report—published in September 2017—analyzes data from 2012 through 2014.

FNS Estimates Suggest Retailer Trafficking Has Increased in Recent Years, but the Estimates Have Limitations

FNS Estimates Indicate an Increase in Retailer Trafficking, but the Actual Extent of Trafficking Is Uncertain

FNS’s data indicate an increase in the estimated rate of retailer trafficking in recent years. FNS reported in March 2011 that approximately $330 million in SNAP benefits (or 1 percent of all benefits redeemed) were trafficked annually from 2006 through 2008, and that approximately 8.2 percent of all authorized stores engaged in trafficking.
In its most recent report from September 2017, FNS reported that approximately $1 billion in SNAP benefits (or 1.5 percent) were trafficked annually from 2012 through 2014, and that approximately 11.8 percent of all authorized stores engaged in trafficking. Although FNS produces the trafficking estimates with accepted statistical methods, its reports do not clearly convey the level of uncertainty introduced by the approach used to calculate the estimates. Throughout each report, FNS presents its estimates as precise numbers. However, uncertainty is introduced when extrapolating from a smaller sample—in this case, an investigative sample that solely includes stores known to have trafficked or suspected of trafficking—to the full population of SNAP-authorized stores because the extent to which the sample reflects the broader population of stores is unknown (see sidebar). According to the Office of Management and Budget’s (OMB) statistical standards for federal agencies, possible variation in estimates should be noted, such as by reporting the range of each estimate. While FNS discusses some limitations of its trafficking estimates in the body of each report, only the report’s appendices include information that can be used to assess the level of uncertainty around the estimates. Using information contained in these appendices, we found widely varying trafficking estimates. For example, although FNS reported that approximately $1 billion in SNAP benefits were trafficked annually from 2012 through 2014, information in the report’s appendices indicates that the amount trafficked could have ranged from about $960 million to $4.7 billion. In other words, the total value of SNAP benefits that were trafficked each year from 2012 through 2014 could have been approximately $40 million less or $3 billion more than FNS reported (see fig. 5).
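As a back-of-the-envelope check, the spread described above can be computed from the report’s rounded public figures. This is only a sketch: because the published point estimate and bounds are rounded, the deltas it produces approximate, rather than reproduce, the amounts quoted in the text.

```python
# Rounded figures from the September 2017 report, in millions of dollars per
# year (2012-2014): the point estimate and the range derived from the report's
# appendices. Rounding means these deltas only approximate the "$40 million
# less or $3 billion more" quoted in the text.
point_estimate_m = 1_000    # ~$1 billion trafficked annually
low_m, high_m = 960, 4_700  # ~$960 million to ~$4.7 billion

below_m = point_estimate_m - low_m   # how much lower the true value could be
above_m = high_m - point_estimate_m  # how much higher the true value could be
print(below_m, above_m)  # 40 3700
```

Presenting the estimate together with this range, as OMB’s standards suggest, conveys how wide the plausible interval around the headline number actually is.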
FNS officials stated the agency has not considered and does not intend to consider changes to how it reports its trafficking estimates in the next report. According to an FNS official, FNS would like the reports to continue to provide non-technical information that is comparable to prior years. However, as shown in the figure above, it is possible to compare estimates over time when estimates are presented with ranges. Further, reporting the level of uncertainty with each estimate increases transparency. According to a recent Congressional Research Service report, these estimates are the most-often cited measure of fraud in SNAP. The estimates have been cited in the news media and congressional testimony, and FNS officials stated the estimates can help quantify the outcomes of FNS’s efforts to prevent, detect, and respond to retailer trafficking. By not clearly reporting the level of uncertainty around these commonly cited estimates of SNAP retailer trafficking, FNS’s reports are potentially providing misleading information to Congress and the public regarding the extent of fraud in SNAP.

FNS Evaluated Ways to Address Some Limitations in the Trafficking Estimates, but Does Not Plan to Take Further Steps

FNS has acknowledged limitations with its current approach to estimating retailer trafficking and evaluated ways to address some of those limitations. As previously noted, FNS selects a non-random sample of stores known to have trafficked or suspected of trafficking when calculating its estimates, which may introduce bias into those trafficking estimates (see sidebar). For example, the sample could overestimate the extent of retailer trafficking if the stores in the sample that are targeted for investigation are more likely to traffic. Conversely, if FNS’s detection methods do not capture all instances of trafficking—such as retailers who only traffic with people they know—the sample could lead to an underestimate of trafficking among all SNAP-authorized stores.
Recognizing that the trafficking estimates provide important information on program fraud, FNS evaluated ways to address limitations in the estimates. In 2013, FNS convened a technical working group of experts to discuss alternate ways to estimate retailer trafficking. That group made various recommendations to improve the estimates, some of which FNS pursued through additional analyses. For example, to address limitations introduced by the sample FNS uses to estimate trafficking, the agency conducted a study to assess the feasibility of calculating its estimates using a national random sample of stores. However, FNS determined it would be infeasible to use a random sample because of the costs and resources that would be involved. According to FNS officials, it cost the agency approximately $67,000 to produce the September 2017 trafficking estimates report. By comparison, FNS estimated that using a national random sample could cost between $11.5 million and $38 million, depending on the specific sample selection method. This is because, among other factors, taking this approach would require investigative staff to visit stores suspected of trafficking as well as those that are not suspected of trafficking. Doing so would require a significant number of additional investigators, according to the feasibility study. Also in response to a recommendation by the technical working group, FNS contracted for a study in November 2017 that reviewed the five factors the agency uses to make adjustments to reduce the bias in its sample of stores (see sidebar). FNS began using these five factors—such as the type of store—more than 20 years ago when it initially developed its approach to estimating trafficking. The study evaluated the continued relevance of the five factors, as well as the relevance of additional factors related to store characteristics and neighborhood demographics. 
The study did not make recommendations, and the expert who conducted the study told us that based on the analysis, the original five factors remain relevant. As a result, FNS officials stated the agency would continue to use these factors to reduce bias in the sample. However, FNS has not evaluated whether factors the agency currently uses to identify stores for possible investigation could help reduce bias in the sample and improve the trafficking estimates. Specifically, FNS analyzes data on SNAP transactions and looks for suspicious patterns or other indications of potential trafficking. Based on the results of these analyses, FNS assigns a numeric score to each store, and stores with scores above a certain threshold are added to FNS’s Watch List for further review. Several experts have suggested to FNS that including this score or other related factors when adjusting the investigative sample could help reduce the bias in the sample and improve the trafficking estimates, yet FNS has not evaluated the use of these factors for this purpose. FNS officials said that stores’ numeric scores and the factors related to the Watch List are not public information, and the agency’s preference is to be transparent about the methodology used to produce the trafficking estimates. However, FNS already describes its Watch List and the use of a numeric score threshold in an appendix to its trafficking reports. According to OMB’s statistical standards, federal agencies should take steps to maximize the objectivity of the statistical information they produce. Objectivity refers to whether the information is accurate, reliable, and unbiased. Without evaluating the usefulness of the Watch List factors for adjusting the sample, FNS may miss an opportunity to reduce the bias in the sample and improve the accuracy of its trafficking estimates. 
In addition, FNS has not evaluated the accuracy of its assumption of the percentage of SNAP benefits trafficked by different types of stores, which FNS developed over 20 years ago from anecdotal information. Among stores that engaged in trafficking, FNS assumes that 90 percent of benefits redeemed in small stores and 40 percent of benefits redeemed in large stores were trafficked (see sidebar). A former FNS official who helped develop the agency’s approach for estimating trafficking stated that the assumption was based on conversations with investigators in the 1990s—deemed to be the best source of information at the time. He noted that the investigators who were consulted generally disagreed on the percentage of benefits that were trafficked, as the actual percentage could vary widely based on whether, for example, one employee had trafficked or the entire store was a front for trafficking. However, the investigators generally agreed that 90 percent and 40 percent would overestimate trafficking by retailers in small and large stores, respectively. According to FNS officials, in the absence of other data, they preferred to use an overestimate, rather than an underestimate, of the percentage of benefits trafficked in stores found to have trafficked. Despite an increase in the availability of data on retailer trafficking over the last 20 years, FNS officials have not evaluated the accuracy of this key assumption and said that they have no plans to do so. FNS officials noted that they do not believe there are available data that indicate whether the assumption is accurate and, as such, any evaluation would require new data collection. However, according to contractors and a former official we spoke with who had studied the methodology as well as USDA OIG officials, data are currently available that may help FNS evaluate the accuracy of this assumption. 
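Before turning to how such an evaluation might proceed, a minimal sketch of how the 90/40 assumption enters the arithmetic may help: for each store found to have trafficked, an assumed share of its redeemed benefits is counted as trafficked. The two stores below and their redemption amounts are invented for illustration; only the 90 percent and 40 percent shares come from the report.

```python
# FNS's assumed share of redeemed benefits that were trafficked, by store
# size, for stores found to have trafficked (per the report). Percentages are
# kept as integers to avoid floating-point rounding in this toy example.
ASSUMED_TRAFFICKED_PCT = {"small": 90, "large": 40}

# Hypothetical annual SNAP redemptions (in dollars) for two stores found to
# have trafficked.
stores = [
    {"size": "small", "redeemed": 120_000},
    {"size": "large", "redeemed": 2_000_000},
]

estimated_trafficked = sum(
    s["redeemed"] * ASSUMED_TRAFFICKED_PCT[s["size"]] // 100 for s in stores
)
print(estimated_trafficked)  # 908000
```

Because these shares scale every trafficking store’s redemptions directly, even a modest error in the assumed percentages shifts the headline dollar estimate substantially.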
For example, they suggested FNS could analyze the transaction data of stores that trafficked to identify the percentage of all redeemed SNAP benefits that were consistent with known indicators of trafficking. Currently, OIG officials told us that they use a similar approach to calculate the amount of benefits trafficked for a store whose owner is being prosecuted. According to OMB’s statistical standards, assumptions should be reviewed for accuracy and validated using available, independent information sources. Without FNS evaluating its key assumption of the percentage of SNAP benefits trafficked, the estimates it reports on the extent of program fraud are potentially inaccurate.

FNS Has Taken Some Steps That Generally Align with Leading Practices to Prevent, Detect, and Respond to Retailer Trafficking, but Has Not Pursued Others

FNS Has Taken Steps to Address Retailer Trafficking, but Has Not Pursued Certain Prevention and Response Activities

Preventing Trafficking in the Retailer Authorization Process

FNS has taken some steps to prevent retailer trafficking that align with leading fraud risk management practices and our prior recommendations, but has not pursued some opportunities for early oversight. As we note in our Fraud Risk Framework, while fraud control activities can be interdependent and mutually reinforcing, preventative activities generally offer the most cost-effective investment of resources. FNS officials told us that the agency tries to prevent trafficking through its policies and procedures for authorizing stores to participate in SNAP. Since our 2006 report, FNS has taken some steps to amend retailer authorization policies to address vulnerabilities that we identified. For example:

Increasing requirements for food that retailers must stock to participate in SNAP: In 2006, we found that FNS had minimal requirements for the amount of food that retailers must stock, which could allow retailers more likely to traffic into the program.
At that time, FNS officials said that they authorized stores with limited food stock to provide access to food in low-income areas where large grocery stores were scarce. In 2006, retailers were generally required to stock a minimum of 12 food items (at least 3 varieties in each of 4 staple food categories), but FNS rules did not specify how many items of each variety would constitute sufficient stock. We recommended that FNS develop criteria to help identify the stores most likely to traffic, using information such as the presence of low food stock. In 2016, FNS promulgated a final rule increasing food stock requirements and, in January 2018, issued a policy memorandum to clarify these requirements. FNS officials told us that the new requirements are designed to encourage stores to provide more healthy food options for recipients and discourage trafficking. According to the memorandum, retailers are now generally required to stock at least 36 food items (including stocking at least 3 varieties in each of 4 staple food categories, and 3 items of each variety). See figure 6 for a comparison of the previous (as of 2006) and current (reflecting the January 2018 memorandum) requirements.

Assessing retailer risk levels: Also in 2006, we found that FNS had not conducted analyses to identify characteristics of stores at high risk of trafficking and to target its resources accordingly. For example, we reported that some stores may be at risk of trafficking because one or more previous owners had been found to be trafficking at the same location. At that time, FNS did not have a system in place to ensure that these stores were quickly targeted for heightened attention. We recommended that FNS identify the stores most likely to traffic and provide earlier, more targeted oversight to those stores. In 2009, FNS established risk levels for stores: high, medium, and low. For example, high-risk stores are those with a prior permanent disqualification at that location or nearby.
In January 2012, FNS revised its policy for authorizing high-risk stores. The policy requires high-risk retailers to provide specific documentation to ensure that the owners listed on the application have not been previously disqualified or do not have ties to a previously disqualified owner, such as a letter from the bank listing the authorized signers on the store’s accounts. Although FNS took these steps to identify risk levels for stores and target its initial authorization activities accordingly, the agency is not currently using this information to target its reauthorization activities to stores of greatest risk. During reauthorization, FNS reviews previously approved stores for continued compliance with program requirements. Although the agency’s policy and website both state that certain high-risk stores will be reauthorized annually, FNS is currently reauthorizing all stores on the same 5-year cycle, regardless of risk. FNS reauthorized most high-risk stores under this policy one time in fiscal year 2013, but officials told us that they then discontinued annual reauthorizations after an in-depth assessment of the benefits and costs of this practice. For example, FNS staff reported collecting more than 150,000 documents as part of the fiscal year 2013 reauthorization cycle and found that collecting these documents annually is ineffective and burdensome to FNS and the retailer. FNS instead decided to annually reauthorize a sample of high-risk retailers as a result of its assessment of the fiscal year 2013 cycle, but did not follow through with those plans. Specifically, the agency decided to pursue annual reauthorizations of a sample of stores at the greatest risk of program violations—those at the same address as a store that had been previously permanently disqualified. However, FNS officials did not have documentation that the approach was ever implemented or that they assessed the benefits and costs of reauthorizing this sample of high-risk retailers. 
More frequent reauthorization of certain high-risk stores is consistent with federal internal control standards, which suggest that agencies should consider the potential for fraud when determining how to respond to fraud risks. Considering the benefits and costs of control activities to address identified risks is a leading practice in GAO’s Fraud Risk Framework. By not assessing the benefits and costs of reauthorizing certain high-risk stores more frequently than other stores, FNS may be missing an opportunity to provide early oversight of risky stores and prevent trafficking.

Detecting Retailer Trafficking

The steps FNS has taken to improve how it detects retailer trafficking generally align with fraud risk management leading practices for designing and implementing control activities to detect fraud. For example, FNS’s website shows how to report SNAP fraud, including retailer trafficking, through the USDA OIG’s fraud hotline. According to our Fraud Risk Framework, reporting mechanisms help managers detect instances of potential fraud and can also deter individuals from engaging in fraudulent behavior if they believe the fraud will be discovered and reported. Increasing managers’ and employees’ awareness of potential fraud schemes can also help managers and employees better detect potential fraud. To that end, FNS has developed fraud awareness training for staff in each of the branches in its Retailer Operations Division—the office primarily responsible for oversight of SNAP-authorized retailers. This includes training related to retailer trafficking for new staff and refresher training for experienced staff. Some of the training materials employ identified instances of trafficking to improve future detection and response activities. See figure 7 for photographs from a store investigation that were featured in an April 2017 training session.
FNS also uses data analytics, another leading practice in our Fraud Risk Framework, to identify potential trafficking and prioritizes its investigative resources to the stores most likely to be trafficking. Specifically, FNS scans about 250 million SNAP transactions per month through its Anti-Fraud Locator using EBT Retailer Transactions (ALERT) system to identify certain patterns indicative of trafficking. ALERT assigns a numeric score to each store based on the likelihood of trafficking. Stores with scores above a certain threshold are added to FNS’s Watch List, and FNS analysts and investigators prioritize the stores on this list for review based on factors such as average transaction amounts that are excessive for that type of store. In addition, FNS’s analysts conduct their own data mining and review complaints and fraud tips from the OIG’s hotline to add stores to the Watch List. FNS also has explored ways and taken steps to improve its data analytics through internal workgroups and external studies. Using the results of monitoring and evaluations to improve fraud risk management activities is a leading fraud risk management practice. For example, staff in the Retailer Operations Division participate in a workgroup that uses findings from FNS’s trafficking investigations to improve the Division’s detection efforts. This collaborative effort has led to improvements such as using store ZIP codes to compare transactions at stores suspected of trafficking with similar stores nearby. According to FNS, its staff can use this information to substantiate charges against retailers by establishing what typical transaction patterns look like, compared to trafficking patterns, for similar stores. In addition, FNS commissioned studies in fiscal years 2014 and 2015 to evaluate the effectiveness of its data analytics to monitor stores and identify areas for improvement.
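The ALERT-to-Watch-List step described above is, at its core, threshold-based flagging. The sketch below illustrates that pattern only; the store names, scores, and cutoff are invented, since FNS’s actual scoring model and threshold are not public.

```python
# Minimal sketch of threshold-based flagging like the ALERT/Watch List process
# described in the report. All values here are hypothetical: FNS's actual
# scoring model and cutoff are not public.
WATCH_LIST_THRESHOLD = 70  # invented cutoff

# Hypothetical ALERT-style scores (higher = more likely to be trafficking).
store_scores = {"Store A": 82, "Store B": 45, "Store C": 71, "Store D": 30}

watch_list = sorted(
    name for name, score in store_scores.items()
    if score >= WATCH_LIST_THRESHOLD
)
print(watch_list)  # ['Store A', 'Store C']
```

Analysts would then prioritize the flagged stores using additional factors, such as transaction amounts that are excessive for that store type.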
For example, one of the studies identified and recommended new ways that FNS could analyze SNAP transaction data to detect emerging trafficking schemes—such as indirect trafficking at super stores and supermarkets, where more than 80 percent of SNAP benefits are redeemed. FNS officials reported in August 2018 that they examined the recommendations and implemented those they determined were feasible with current resources and would add value to their efforts. For example, they decided to analyze data over shorter periods of time (monthly instead of a 6-month period) to more quickly identify stores that may be trafficking. Officials also reported that they are continuing to assess the effectiveness of their data analytics.

Responding to Retailer Trafficking

FNS’s efforts to respond to retailer trafficking generally align with leading practices for fraud risk management. Consistent with our Fraud Risk Framework, FNS has established collaborative relationships with external stakeholders to respond to identified instances of potential fraud. For example, to amplify its own efforts, FNS has agreements (known as state law enforcement bureau, or SLEB, agreements) with 28 states. Through these agreements, FNS allows state and local law enforcement agencies to use SNAP EBT cards in their own undercover investigations of retailers. According to the most recent available FNS data, participating states opened 1,955 cases from fiscal year 2012 to fiscal year 2018 under SLEB agreements. These cases resulted in a total of 139 retailers being permanently disqualified from the program. Within USDA, FNS and the OIG also said they recently updated a memorandum of understanding (MOU) that outlines, among other things, how the two entities will coordinate on retailer trafficking investigations. Under the MOU, FNS investigates retailers with average monthly SNAP redemptions below a certain dollar threshold without first obtaining clearance from the OIG to pursue the case.
FNS and OIG officials said that this provision of the MOU allows FNS to more quickly investigate suspicious behavior and pursue administrative action, such as permanent disqualification, against retailers found to be trafficking. Previously, according to OIG officials, FNS needed to clear most cases against retailers suspected of trafficking through the OIG. As we noted in our 2006 report, due to the time it takes to develop an investigation for prosecution and the costs associated with doing so, a natural tension exists between the goal of disqualifying a retailer as quickly as possible to prevent further trafficking and seeking prosecution of the retailer to recover losses and deter other traffickers. The MOU is also designed to strengthen collaboration between FNS and the OIG in identifying the situations that warrant criminal investigations. Since our 2006 report, OIG and FNS both generally increased the number of actions taken against SNAP retailers found to be trafficking. Specifically, the OIG reported an increase in the number of trafficking cases that it successfully referred for federal, state, or local prosecution (see fig. 8). The OIG also reported increases in the number of convictions resulting from its investigations, from 79 in fiscal year 2007 to 311 in fiscal year 2017. FNS also generally increased the number of retailers sanctioned for trafficking, though few received a monetary penalty. From fiscal year 2007 to fiscal year 2017, the number of permanent disqualifications resulting from FNS’s trafficking investigations nearly doubled (see fig. 9). In lieu of a permanent disqualification, FNS sometimes imposes a monetary penalty on a retailer found to be trafficking. However, FNS imposed few monetary penalties for trafficking in lieu of permanent disqualification during this period. From fiscal year 2007 to fiscal year 2017, FNS assessed a total of 40 such penalties, totaling $1.5 million (for an average of about $38,000 each). 
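The average penalty cited above follows directly from the two reported totals. A quick check, noting that both published figures are rounded, so the exact average of the underlying data may differ slightly:

```python
# Quick check of the reported average civil monetary penalty for trafficking:
# 40 penalties totaling $1.5 million over fiscal years 2007-2017. Both figures
# are rounded in the report, so the true average may differ slightly from the
# "about $38,000" the report cites.
total_penalties_usd = 1_500_000
penalty_count = 40

average_penalty = total_penalties_usd / penalty_count
print(average_penalty)  # 37500.0
```

The quotient, $37,500, rounds to the "about $38,000 each" figure in the report.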
In our 2006 report, we found that FNS’s penalties for retailer trafficking may be insufficient to deter traffickers. We noted that trafficking will continue to be lucrative for retailers as long as the potential rewards outweigh the penalties and recipients are willing to exchange their benefits for cash. We recommended that FNS develop a strategy to increase penalties for trafficking. Using the results of evaluations, such as audits, to improve fraud risk management activities is a leading practice in GAO’s Fraud Risk Framework. The Food, Conservation, and Energy Act of 2008 (known as the 2008 Farm Bill) gave USDA authority to impose higher monetary penalties, as well as authority to impose both a monetary penalty and program disqualification on retailers found to have violated relevant law or regulations (such as those found to be trafficking). Although USDA was granted this authority a decade ago, the department has not finalized regulatory changes to strengthen penalties against retailers found to be trafficking. In August 2012, FNS proposed regulatory changes to implement this authority from the 2008 Farm Bill, including assessing a new trafficking penalty in addition to permanent disqualification. The penalty would have been based on the store’s average monthly SNAP redemptions and was intended to recoup government funds diverted from their intended use. In proposing these changes, FNS stated that they were necessary to improve program integrity and deter retailers from committing program violations. FNS also estimated that it would assess an additional $174 million per year in these new trafficking penalties—a significant increase from the amounts FNS currently assesses in penalties for trafficking (less than $100,000 in fiscal year 2017). 
However, FNS did not finalize this rule, and, as of spring 2018, the rule was considered “inactive.” At that time, FNS officials told us that they had not finalized the rule because other rulemaking had taken priority in the intervening 6 years. More recently, in August 2018, FNS officials told us that they plan to revise the previously proposed rule to increase penalties and submit it for the spring 2019 regulatory agenda. In November 2018, FNS officials indicated that they were beginning to draft the proposed rule but could not provide us with documentation of this effort because the regulatory action was still pending. Increasing penalties for retailer trafficking would serve as an important tool to deter trafficking and safeguard federal funds.

FNS Has Not Established Performance Measures to Assess its Retailer Trafficking Prevention Activities

FNS measures the effectiveness of many of its trafficking detection and response activities, but lacks measures to evaluate its prevention activities. Measuring outputs, outcomes, and progress toward the achievement of fraud risk objectives is a leading practice in our Fraud Risk Framework. At the agency level, FNS has a priority plan for fiscal year 2018 that includes a goal of reducing the SNAP trafficking rate through retailer- and client-focused activities. At the program level, FNS’s Retailer Operations Division has an internal scorecard that tracks performance measures related to retailer oversight activities, but none of these focuses on prevention of trafficking. For example, the scorecard measures the outputs and outcomes of activities designed to detect and respond to trafficking, such as the total number of sanctions implemented against retailers and the percentage of undercover investigations that result in a permanent disqualification. However, the scorecard does not have any measures related to preventing trafficking through the retailer authorization process—a key area for prevention activities.
The scorecard includes one output measure related to this process, but the measure (the percentage of retailer authorization requests processed within 45 days) focuses on how quickly retailers gain access to the program, rather than preventing trafficking. Although FNS officials have acknowledged that their program compliance efforts begin with the retailer authorization process, they said that they had not considered establishing measures related to preventing trafficking. They added that their supervisory review process may help ensure that staff who process retailer applications in the Retailer Operations Branch do not overlook evidence of potential fraud, but this review includes a small sample of approved store applications (typically 5 cases per staff member monthly). Although FNS has not established measures to assess its trafficking prevention activities, the agency has data that it could leverage for this purpose. For example, FNS collects data on the number of applications that were denied because FNS found that the retailer lacked business integrity, such as applicants previously found to be trafficking or with ties to a prior owner who had trafficked. Such data could be used to develop measures related to the number and percentage of retailer applications denied for business integrity. FNS officials acknowledged that these data could be used to develop performance measures for its trafficking prevention activities. Establishing such measures would enable FNS to more fully assess the effectiveness of its retailer oversight activities and better balance retailer access to the program with preventing retailer fraud.

Conclusions

FNS must continue to balance its goal of program integrity with its mission to provide nutrition assistance to millions of low-income households.
During a period in which SNAP retailer participation has markedly increased, FNS has made progress in addressing SNAP retailer trafficking by identifying high-risk stores and increasing the number of stores disqualified for trafficking. It is critical that FNS maintain progress and momentum in these areas, particularly since FNS’s own data suggest that trafficking is on the rise. To its credit, FNS has already evaluated some ways to improve how the agency measures and addresses retailer trafficking, yet, at the same time, the agency has missed opportunities to strengthen these areas. For example, since FNS has not taken steps to clarify and improve its retailer trafficking estimates—one of the only available SNAP fraud measures— questions remain regarding the accuracy of the estimates and the extent of fraud in SNAP. In addition, prevention and early detection of retailer trafficking are particularly important and deserve continued attention, especially since retailers can quickly ramp up the amount they redeem in federal SNAP benefits, potentially by trafficking. However, because FNS is reauthorizing all stores once every 5 years, the agency may be missing an opportunity to prevent trafficking through more frequent oversight of risky stores. Further, until FNS strengthens its response to trafficking by increasing penalties, the agency will continue to miss an opportunity to improve program integrity and deter retailers from committing program violations. Finally, FNS directs a significant amount of staff resources to authorizing and monitoring retailers who participate in SNAP. Ensuring that those staff understand the importance of addressing fraud is key for program integrity. FNS has taken steps to make that clear through the inclusion of relevant performance measures for the branches responsible for fraud detection and response, yet the agency has not developed such measures for its trafficking prevention activities. 
Until FNS establishes performance measures for these activities, it will be unable to fully assess the effectiveness of its overall efforts to address retailer trafficking. In addition, such measures would assist FNS in balancing its efforts to ensure retailer access with those to prevent retailer fraud. Recommendations for Executive Action We are making the following five recommendations to FNS: The Administrator of FNS should present the uncertainty around its retailer trafficking estimates in future reports by, for example, including the full range of the estimates in the report body and executive summary. (Recommendation 1) The Administrator of FNS should continue efforts to improve the agency’s retailer trafficking estimates by evaluating (1) whether the factors used to identify stores for possible investigation could help address the bias in its sample, and (2) the accuracy of its assumption of the percentage of SNAP benefits that are trafficked by different types of stores. (Recommendation 2) The Administrator of FNS should assess the benefits and costs of reauthorizing a sample of high-risk stores more frequently than other stores, use the assessment to determine the appropriate scope and time frames for reauthorizing high-risk stores moving forward, and document this decision in policy and on its website. (Recommendation 3) The Administrator of FNS should move forward with plans to increase penalties for retailer trafficking. (Recommendation 4) The Administrator of FNS should establish performance measures for its trafficking prevention activities. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to USDA for review and comment. On December 3, 2018, the Directors of the Retailer Policy & Management Division and the Retailer Operations Division of FNS provided us with the agency’s oral comments. FNS officials told us that they generally agreed with the recommendations in the report. 
Officials also provided technical comments, which we incorporated as appropriate. Regarding the recommendation to present the uncertainty around the retailer trafficking estimates, FNS officials told us that they plan to include the estimate intervals and results of sensitivity analyses in the body of their next report, rather than in appendices. This is the information we used to determine the range around the trafficking estimates. Making this change would address our recommendation, as we continue to believe that reporting the level of uncertainty around each estimate would increase transparency and provide Congress and the public with better information on the extent of fraud in SNAP. In addition, regarding the recommendation to assess the benefits and costs of reauthorizing a sample of high-risk retailers more frequently, FNS officials noted that while reauthorizations currently occur at least once every 5 years, monitoring for potential violations occurs on an ongoing basis regardless of risk level. Low-, medium-, and high-risk stores are continually scanned by FNS's ALERT system. FNS officials added that, in fiscal year 2017, FNS imposed sanctions (e.g., fines or temporary disqualifications) on 862 stores found to be violating program rules, and permanently disqualified 1,661 stores for trafficking SNAP benefits or falsifying an application. FNS officials noted that this is a 26 percent increase in the number of stores sanctioned, compared to fiscal year 2013. We agree that ongoing monitoring is important, and we discussed these and other FNS efforts to detect and respond to retailer trafficking in our report. We nevertheless believe, and FNS officials agreed, that assessing the value of earlier oversight of risky stores through the reauthorization process is warranted, and could enhance efforts to prevent trafficking.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Agriculture, congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments GAO Contact Kathy Larin, (202) 512-7215 or [email protected]. Staff Acknowledgments In addition to the contact named above, Rachel Frisk (Assistant Director), Rachael Chamberlin (Analyst-in-Charge), and Swati Deo made significant contributions to this report. Also contributing to this report were James Bennett, Thomas Cook, Alex Galuten, Lara Laufer, Olivia Lopez, Jean McSween, Jessica Orr, Philip Reiff, Almeta Spencer, Jeff Tessin, Matthew Valenta, and Erin Villas.
Why GAO Did This Study SNAP is the largest federally funded nutrition assistance program, providing about $64 billion in benefits to over 20 million households in fiscal year 2017. FNS oversees SNAP at the federal level and is responsible for authorizing and overseeing retailers. While most benefits are used as intended, some retailers have engaged in trafficking, which represents fraud and diverts federal funds from their intended use. GAO was asked to review FNS's efforts to address SNAP retailer trafficking since GAO's last report in 2006. This report examines (1) what is known about the extent of SNAP retailer trafficking, and (2) the extent to which FNS has taken steps intended to improve how it prevents, detects, and responds to retailer trafficking. GAO reviewed relevant federal laws and regulations, FNS policies, and studies related to retailer trafficking; assessed FNS's use of statistical standards for federal agencies and selected leading practices in GAO's Fraud Risk Framework; and interviewed FNS and USDA Office of Inspector General officials and key stakeholders. What GAO Found The U.S. Department of Agriculture (USDA) Food and Nutrition Service's (FNS) estimates of retailer trafficking—when a retailer exchanges Supplemental Nutrition Assistance Program (SNAP) benefits for cash instead of food—have limitations, though they suggest trafficking has increased in recent years, to $1 billion each year from 2012 to 2014. One key limitation of the estimates is that FNS has not evaluated the accuracy of its assumption about the percentage of SNAP benefits trafficked. FNS assumes that, among stores that trafficked, 90 percent of the benefits redeemed in small stores, and 40 percent in large stores, were trafficked. A former FNS official stated that this assumption is based on discussions with investigators in the 1990s when FNS first developed its approach to estimate trafficking, and that they have not since evaluated it for accuracy.
However, there are options available for evaluating this assumption, such as reviewing SNAP transaction data from stores that are known to have trafficked. Statistical standards for federal agencies indicate that assumptions should be reviewed for accuracy and validated using available, independent information sources. By not evaluating this key assumption, FNS's commonly cited estimates of SNAP fraud are potentially inaccurate. FNS has generally taken steps to address retailer trafficking that align with leading fraud risk management practices, but the agency has not pursued additional actions to prevent and respond to trafficking. For example: Although FNS assigns a risk level to each store when it applies to participate in SNAP, it is not currently using this information to target its reauthorization activities to stores of greatest risk. During reauthorization, FNS reviews previously approved stores for continued compliance with program requirements. FNS currently reauthorizes all stores on the same 5-year cycle, regardless of risk, although its policy states that it will reauthorize certain high-risk stores annually. FNS officials planned to reauthorize a sample of high-risk stores each year, but said they did not follow through with those plans. Officials also stated that they did not document an analysis of the benefits and costs of this practice, which would be consistent with leading fraud risk management practices. As a result, FNS may be missing an opportunity to provide early oversight of risky stores and prevent trafficking. The Food, Conservation, and Energy Act of 2008 gave USDA the authority to strengthen penalties for retailers found to have trafficked, but as of November 2018, FNS had not implemented this authority. FNS proposed a related rule change in 2012 and indicated the change was necessary to deter retailers from committing program violations, but the rule was not finalized. 
By failing to take timely action to strengthen penalties, FNS has not taken full advantage of an important tool for deterring trafficking. What GAO Recommends GAO is making five recommendations, including that FNS improve its trafficking estimates by, for example, evaluating the accuracy of its assumption of the percentage of benefits that are trafficked; assess the benefits and costs of reauthorizing a sample of high-risk stores more frequently than others; and move forward with plans to increase penalties for trafficking. FNS generally agreed with GAO's recommendations.
Background Financial Regulators Regulators for the Banking Industry All depository institutions that have federal deposit insurance have a federal prudential regulator, which generally may issue regulations and take enforcement actions against institutions within its jurisdiction (see table 1). The securities and futures markets are regulated under a combination of self-regulation (subject to oversight by the appropriate federal regulator) and direct oversight by the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), respectively. SEC regulates the securities markets, including participants such as corporate issuers, securities exchanges, broker-dealers, investment companies, and certain investment advisers and municipal advisors. SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. SEC also oversees self-regulatory organizations—including securities exchanges, clearing agencies, and the Financial Industry Regulatory Authority—that have responsibility for overseeing securities markets and their members; establishing standards under which their members conduct business; monitoring business conduct; and bringing disciplinary actions against members for violating applicable federal statutes, SEC’s rules, and their own rules. CFTC is the primary regulator for futures markets, including futures exchanges and intermediaries, such as futures commission merchants. CFTC’s mission is to protect market users and the public from fraud, manipulation, abusive practices, and systemic risk related to derivatives subject to the Commodity Exchange Act, and to foster open, transparent, competitive, and financially sound futures markets. CFTC oversees the registration of intermediaries and relies on self-regulatory organizations, including the futures exchanges and the National Futures Association, to establish and enforce rules governing member behavior. 
CFTC and SEC jointly regulate security futures (generally, futures on single securities and narrow-based security indexes). CFTC and SEC serve as primary regulators for certain designated financial market utilities. In addition, Title VII of the Dodd-Frank Act expands regulatory responsibilities for CFTC and SEC by establishing a new regulatory framework for swaps. The act authorizes CFTC to regulate swaps and SEC to regulate security-based swaps with the goals of reducing risk, increasing transparency, and promoting market integrity in the financial system. CFTC and SEC share authority over mixed swaps—that is, security-based swaps that have a commodity component. Consumer Financial Protection Bureau The Dodd-Frank Act transferred consumer financial protection oversight and other authorities over certain consumer financial protection laws from multiple federal regulators to the Consumer Financial Protection Bureau (CFPB). The Dodd-Frank Act charged CFPB with responsibilities that include the following: ensuring that consumers are provided with timely and understandable information to make responsible decisions about financial transactions; ensuring that consumers are protected from unfair, deceptive, or abusive acts and practices and from discrimination; monitoring compliance with federal consumer financial law and taking appropriate enforcement action to address violations; identifying and addressing outdated, unnecessary, or unduly burdensome regulations; ensuring that federal consumer financial law is enforced consistently, in order to promote fair competition; ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation; and conducting financial education programs. Furthermore, the Dodd-Frank Act gave CFPB supervisory authority over certain nondepository institutions, including certain kinds of mortgage market participants, private student loan lenders, and payday lenders. 
Regulatory Flexibility Act The uniform application of new or revised regulations can have a comparatively greater impact on smaller entities than on larger entities because the smaller entities have small staffs with which to face expanded demands and a smaller asset and income base with which to absorb increases in compliance costs. RFA was enacted in 1980 in part to address this disparity. The act requires that federal agencies, including the financial regulators, engaged in substantive rulemaking analyze the impact of proposed and final regulations on small entities and, when there may be a significant economic impact on a substantial number of small entities, to consider any significant regulatory alternatives that will achieve statutory objectives while minimizing any significant economic impact on small entities. RFA defines “small entity” to include small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. RFA does not seek preferential treatment for small entities, require agencies to adopt regulations that impose the least burden on small entities, or mandate exemptions for small entities. Rather, it requires agencies to examine public policy issues using an analytical process that identifies, among other things, barriers to small business competitiveness and seeks a level playing field for small entities, not an unfair advantage. Unless the head of the agency certifies that the proposed regulation would not have a significant economic impact upon a substantial number of small entities, RFA requires regulators to prepare an initial regulatory flexibility analysis for each draft rule that requires a notice of proposed rulemaking. These analyses must contain an assessment of the rule’s potential impact on small entities and describe any significant alternatives to the rule that would reduce its burden on small entities while achieving statutory objectives (see table 2 for more information). 
RFA requires that regulators publish in the Federal Register their initial regulatory flexibility analysis, or a summary, with the proposed rule. Following a public comment period, RFA requires regulators to conduct a similar analysis when they promulgate the final rule—the final regulatory flexibility analysis. This analysis must address any comments received on the initial regulatory flexibility analysis and include a description of the steps the agency took to minimize the rule’s significant economic impact on small entities, consistent with statutory objectives. Agencies then must publish the final analysis, or a summary, with the final rule. If the head of the agency certifies in the Federal Register that the rule would not have a significant economic impact on a substantial number of small entities, agencies do not have to conduct the initial or final analysis. Certifications must include a statement providing a factual basis for the certification. Agencies may make a certification in lieu of the initial or final analysis, and can choose to certify at both points. Figure 1 illustrates the decision process that agencies must follow to comply with RFA. Section 610 of RFA requires agencies to review, within 10 years of a final rule’s publication, those rules assessed as having a significant economic impact on a substantial number of small entities to determine if they should be continued without change, amended, or rescinded (consistent with statutory objectives) to minimize any significant economic impact on small entities. Section 610 requires that agencies publish in the Federal Register a list of the rules that have a significant economic impact on a substantial number of small entities and are to be reviewed pursuant to section 610 during the upcoming year. These notices alert the public to the upcoming review and permit interested parties to submit their comments on the rule’s impact on small entities. 
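The decision process described above (and depicted in figure 1) can be sketched as a simple branch. This is an illustrative simplification, not a restatement of the statute: the function name and the compression into two decision points are hypothetical, and the sketch omits, for example, that an agency may certify at the final-rule stage after performing an initial regulatory flexibility analysis.

```python
def rfa_path(has_proposed_rule: bool, significant_impact: bool) -> list:
    """Simplified sketch of the RFA decision process described above.

    `significant_impact` stands in for the agency head's determination of
    whether the rule would have a significant economic impact on a
    substantial number of small entities.
    """
    if not has_proposed_rule:
        # No notice of proposed rulemaking means the RFA analysis
        # requirements do not apply.
        return ["RFA requirements not applicable"]
    if not significant_impact:
        # The agency head certifies, with a factual basis, in the
        # Federal Register (in lieu of the initial or final analysis).
        return ["certification (factual basis published)"]
    return [
        "initial regulatory flexibility analysis (with proposed rule)",
        "public comment period",
        "final regulatory flexibility analysis (with final rule)",
    ]
```

In this sketch, most of the certifications discussed in this report correspond to the second branch, taken at both the proposed and final rule stages.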
The Dodd-Frank Act, which established CFPB, amended RFA to impose additional rulemaking requirements for CFPB for certain proposed rules. Specifically, when CFPB conducts rulemakings it expects will have a significant economic impact on a substantial number of small entities, it must convene Small Business Review Panels, comprising employees from CFPB, the Small Business Administration's Chief Counsel for Advocacy, and Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs. The panels must seek direct input from a representative group of small entities that would be affected by CFPB's rulemakings. The panels must be conducted before publication of an initial regulatory flexibility analysis (in effect, before the proposed rule is issued for public comment). RFA designates certain responsibilities to the Small Business Administration's Chief Counsel for Advocacy, including monitoring agency compliance with RFA and reviewing federal rules for their impact on small businesses. Executive Order 13272 requires the Office of Advocacy to provide notifications about RFA requirements and training to all agencies on complying with RFA. The Office of Advocacy published guidance on complying with RFA in 2003 (updated in 2012 and August 2017), which was designed to be a step-by-step guide for agency officials. The Small Business Administration publishes size standards to determine eligibility for classification as a small entity. Generally, a bank qualifies as a small entity if it has $550 million or less in assets; for financial investment and related activities, the threshold is $38.5 million or less in annual revenues. Most agencies rely on these size standards; however, RFA also sets forth a procedure that permits agencies to formulate their own definitions of small entities.
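The size standards cited above translate directly into threshold checks. A minimal sketch, using the thresholds as stated in this report (the function names are illustrative, and SBA revises its size standards periodically, so current values may differ):

```python
# Thresholds as cited in this report; SBA updates its size standards
# over time, so treat these values as illustrative rather than current.
SMALL_BANK_MAX_ASSETS = 550_000_000        # $550 million in total assets
SMALL_FIN_FIRM_MAX_REVENUES = 38_500_000   # $38.5 million in annual revenues

def is_small_bank(total_assets):
    """Bank qualifies as a small entity at or below the asset threshold."""
    return total_assets <= SMALL_BANK_MAX_ASSETS

def is_small_financial_investment_firm(annual_revenues):
    """Financial investment firm qualifies at or below the revenue threshold."""
    return annual_revenues <= SMALL_FIN_FIRM_MAX_REVENUES

print(is_small_bank(400_000_000))                      # True
print(is_small_financial_investment_firm(50_000_000))  # False
```

Note that the two thresholds are measured on different bases (total assets for banks, annual revenues for financial investment firms), which is why CFTC's and SEC's alternative definitions, discussed later, cannot be compared directly against the bank standard.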
Many Rules Were Not Subject to RFA Requirements and Regulators Concluded Many Would Not Significantly Affect Small Entities Regulators Determined That Almost 40 Percent of Recent Rules Were Not Subject to RFA Requirements Rules that do not have a proposed rule are not subject to RFA requirements, such as analyzing the rule's effects on small entities and considering alternatives. Financial regulators promulgated 520 rules (483 final and 37 interim final) during calendar years 2010–2016. Of those, RFA requirements were not applicable in 39 percent (204 rules) because the regulators did not publish a proposed rule. The regulators published a proposed rule for the other 316 final rules. This result is consistent with our prior analysis of rulemaking government-wide. In December 2012, we found that about 35 percent of major rules and about 44 percent of nonmajor rules published during calendar years 2003–2010 did not have a proposed rule. The percentage of rules finalized without a proposed rule and therefore not subject to RFA requirements varied by regulator. As shown in figure 2, CFPB had the largest percentage (53 percent) of rules not subject to RFA requirements and CFTC the smallest percentage (16 percent). In their rulemakings, the regulators gave several reasons for not publishing a proposed rule. The Administrative Procedure Act (APA), which outlines the process for informal rulemaking, includes six broad categorical exceptions to publishing a proposed rule (for example, rules dealing with agency organization and procedure). Additionally, APA provides that an agency may forgo a notice of proposed rulemaking when it finds for "good cause" that such notice is "impracticable, unnecessary, or contrary to the public interest." We found that the regulators used such exceptions for a number of the rules we reviewed.
For example, in December 2015, the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Federal Deposit Insurance Corporation (FDIC) used the good-cause exception to publish a joint rule to adjust the asset-size thresholds for small and intermediate banks and savings associations related to performance standards under the Community Reinvestment Act without a proposed rule. According to the Federal Register notice, the agencies had no discretion on the computation or timing of the changes, which were based on a regulation that previously had been published for public comment before being finalized. In another rule published in October 2013, SEC made changes to the filer manual for its Electronic Data Gathering, Analysis, and Retrieval System based on updates to the system and did not publish a proposed rule because the rule changes related solely to agency procedures or practice. According to CFPB officials, the majority of final CFPB rules issued during this time period without a proposed rule involved technical—and in many cases non-discretionary—adjustments of statutory or regulatory thresholds to account for inflation. While RFA requirements do not apply when an agency does not publish a proposed rule, all the financial regulators (except OCC) occasionally performed some RFA evaluation in rules without a proposed rule. For example, each agency, except for OCC, certified that at least one of the final rules they promulgated without publishing a proposed rule (within our time frame) would not have a significant economic impact on a substantial number of small entities. The Federal Reserve most frequently performed some RFA analyses in these rules, although such analyses were not required. Of 51 rules without a proposed rule, the Federal Reserve certified in 7 rules and performed an initial or final regulatory flexibility analysis in 10 rules.
Most Recent Rules Subject to RFA Requirements Were Certified as Not Producing Significant Impacts on Small Entities, but There Were Differences among Agencies For the 316 rules subject to RFA requirements from 2010 through 2016, regulators certified that most would not have a significant economic impact on a substantial number of small entities, although the frequency with which individual regulators certified varied. Such certifications may be made at either the proposed rule or final rule stage, and a certification in a final rule may be preceded by an initial regulatory flexibility analysis in the proposed rule. When certifying, the regulators most often made such certifications in both the proposed and final rules (63 percent of analyses in rules subject to RFA requirements) and did not perform regulatory flexibility analyses. Certifications of final rules made after performing an initial regulatory flexibility analysis accounted for another 4 percent. As shown in figure 3, CFPB, CFTC, FDIC, and OCC certified most to nearly all of their final rules that were subject to RFA requirements, while the Federal Reserve rarely certified final rules, and SEC certified almost half. According to Federal Reserve officials, the agency generally performed a full regulatory flexibility analysis for almost all rulemakings regardless of the rule's impact on small entities. This pattern was generally consistent across our time period (see fig. 4). The Federal Reserve usually performed an initial and final regulatory flexibility analysis, while the other agencies, except SEC, rarely did. SEC's RFA analyses were the most variable over our time period. The spikes in analyses were generally due to the small number of rules promulgated each year. For example, in 2013, OCC promulgated three rules subject to RFA requirements, performing an initial and final regulatory flexibility analysis in one (33 percent) and certifying in two (67 percent).
SEC published seven rules in 2013, completing an initial and final regulatory flexibility analysis in all of them. While the Federal Reserve usually performed initial and final regulatory flexibility analyses, it concluded that almost all of its rules would not significantly affect small entities. In 86 percent of its analyses (54 of 63), the Federal Reserve concluded that the rule would not have a significant economic impact on a substantial number of small entities (see fig. 5). In addition, FDIC concluded that almost all of its rules (5 of 6) in which it performed a final regulatory flexibility analysis would not significantly affect small entities, although as previously mentioned, FDIC certified almost all its final rules subject to RFA requirements. (We discuss the Federal Reserve’s and FDIC’s RFA analyses in more detail later in this report.) SEC, CFPB, and CFTC also concluded that at least one of their rules would not significantly affect small entities after performing a final regulatory flexibility analysis. For the CFPB rule, the Federal Reserve first proposed the rule and performed the initial regulatory flexibility analysis before certain rulemaking authorities were transferred to CFPB for the final rule. Certifications We Reviewed Were Not Always Consistent with Office of Advocacy Guidance and Other Best Practices We reviewed Federal Register notices and the regulators’ internal workpapers for all certifications made in the final rule (66 certifications) in calendar year 2015 and 2016 to determine the basis for the certifications and the extent to which the analyses were consistent with RFA requirements and Office of Advocacy’s guidance and other best practices. As previously discussed, RFA requires that agencies provide the factual basis for their certifications in the Federal Register. In most certifications, the agencies provided a factual basis and concluded the rule would not apply to small entities or have any economic impact. 
In others, the agencies found the rule would have some economic impact on small entities, but concluded that the impact would not be significant for a substantial number of small entities. In those instances, we found that the factual basis provided for most certifications across all regulators lacked key components recommended by the Office of Advocacy for understanding the analyses regulators used to support their conclusion. We also found that while most agencies relied on the Small Business Administration’s definitions of small entities for use in their RFA analyses, two agencies relied on alternative definitions of small entities, some of which have not been updated in more than 35 years. Most Certifications in 2015 and 2016 Concluded the Rule Would Not Apply to Small Entities or Have Any Economic Impact In almost half of the certifications (31 of 66) we reviewed, regulators concluded the rule would apply to no or few small entities (see table 3). According to the regulators, these rules generally regulated activities in which small entities do not engage, pertained to the internal processes of the agency, or applied only to entities that were not small as defined by the Small Business Administration or the agency. For example, in a rule on recovery planning, OCC determined that the rule did not have an impact on small entities because it applied only to banks with $50 billion or more in assets, which are not small entities based on the Small Business Administration’s definition. In 12 certifications, the agencies concluded the rules would have no economic impact regardless of whether small entities were affected and therefore did not require regulatory flexibility analyses. According to the regulators, most of these certifications applied to rules that did not create new regulatory requirements, eliminated duplicative rules, or established optional specifications. 
For example, FDIC published a rule in October 2015 that consolidated into a single part Fair Credit Reporting regulations for all institutions FDIC regulates. According to the Federal Register notice, the rule eliminated redundant requirements and aligned FDIC’s definitions with CFPB rules that were substantively similar. Regulators generally used the current state of regulations as the baseline for these determinations. For example, when analyzing the economic effects of a new rule that consolidated duplicative regulations, the regulator compared the compliance costs of the new rule with the costs small entities already incurred to comply with the duplicative regulations. Additionally, regulators concluded in 5 of 66 certifications that the rule would have a beneficial impact on small entities. For these rules, agencies concluded they reduced regulatory burden, eliminated regulations, or exempted certain entities. In almost a third (18 of 66) of the certifications, the agencies found that the rule would have some economic impact on small entities, but determined that the impact would not be significant for a substantial number of small entities. For example, in a rule that required specified entities to become members of an association, CFTC identified as an economic impact the costs of membership dues and attorney fees related to completing registration filings and preparing for required audits. But it determined that the costs were not significant for a substantial number of the specified small entities. In the seven joint rules we reviewed, we determined regulators conducted their own certification analysis independent of other agencies, although they generally reached the same conclusion to certify (except for the Federal Reserve, which generally treated RFA analysis differently, as discussed later). 
Two Agencies Used Alternative Definitions of Small Entities That May Be Outdated

As previously noted, the Federal Reserve, FDIC, and OCC rely on the Small Business Administration's definition of small banks for RFA purposes. CFPB also relies on the Small Business Administration's definitions of small entities; for example, a business engaged in automobile financing is considered small if its revenues are $38.5 million or less. In contrast, CFTC and SEC previously established alternative definitions of small entities for the purposes of RFA that the agencies used to conclude that most of their rules (10 of 15 for CFTC and 9 of 12 for SEC) would not apply to small entities. But some of these small entity definitions have not been updated in more than 35 years.

In a 1982 policy statement, CFTC published its first set of RFA definitions, which covered designated contract markets, futures commission merchants, and commodity pool operators, among others. In subsequent years, CFTC modified its definitions of small entities to exclude several other groups of entities that it regulates, such as eligible contract participants and major swap participants. SEC originally established definitions for small entities through a rule published in the Federal Register in 1982 after consulting with the Office of Advocacy. The agency subsequently updated some of its definitions in 1986 and 1998, although others have not been updated at all.

In an October 2017 report to the President, the Department of the Treasury recommended CFTC and SEC review and update their small entity definitions for RFA purposes to ensure their RFA analyses appropriately consider small entities. According to CFTC officials, the agency has been reviewing its small entity definitions since April 2017 as part of its working group to update the agency's RFA practices. SEC staff told us they had no comment on Treasury's recommendation.
Analyses in Some Certifications Lacked Key Components Recommended by Office of Advocacy

For the 18 certifications in which regulators determined rules would have some economic impact on small entities, they conducted additional analyses to determine that the impact was not significant for a substantial number of small entities. We found that the factual basis provided for many of these certifications lacked key information (discussions of data sources or methodologies and of broader economic impacts, or definitions for key criteria) for understanding the analyses regulators used to support their conclusion. The Office of Advocacy interprets RFA's factual basis requirement to mean that a certification should include, at a minimum, why the number of entities or the size of the economic impact justifies the certification. In its RFA guide, the Office of Advocacy details the components regulators should include in their certification discussion to obtain meaningful public comment and information on the rule's impact on small entities. These components include a description and estimate of the economic impact, criteria for "significant economic impact" and "substantial number," and a description of any uncertainties in the analysis, including sensitivity analysis when appropriate. The Office of Advocacy guidance states that agencies' reasoning and assumptions underlying the analyses used to support their certifications, including data sources, should be explicit in the Federal Register notices. Additionally, when estimating significant economic impact, the guidance states agencies should not view impact in absolute terms, but relative to the size of the business, the size of the competitor's business, and the impact on larger competitors. According to the Office of Advocacy, broader economic impacts (such as a disparity in impact on small entities that affects their ability to compete) could be significant.

Data sources or methodologies.
In most of these certifications (15 of 18), regulators did not describe or did not fully describe their methodology or data sources for their conclusions. In addition to the Office of Advocacy's RFA guide, OMB guidance on regulatory analysis—regulatory agencies' evaluation of the likely consequences of rules—states that agencies should clearly set out the basic assumptions, methods, and data underlying the analysis and discuss the uncertainties associated with the estimates. While independent regulatory agencies, including those in our review, are not required to follow the OMB guidance, it provides a strong set of analytical practices relevant to agency rulemakings. For these certifications, regulators generally provided partial sources and methodology for their conclusions. Examples of incomplete discussions include the following:

- In its rule requiring specified entities to become members of an association, CFTC detailed its source and methodology for estimating the hourly labor costs of retaining a lawyer, as mentioned above, but did not provide the reasoning for its estimate of the number of hours that a lawyer would spend counseling entities with respect to the rule's requirements.

- In a joint rule related to homeowner flood insurance, OCC provided the source for the estimated number of affected small entities, but provided no source or methodology for its estimated economic impact of $6,000.

- In a rule amending reporting requirements for the dissemination of security-based swap information, SEC said that it partially relied on its "own information" without explanation for declaring that small entities do not participate in security-based swap markets.

- In a joint rule implementing the minimum requirements in the registration and supervision of appraisal management companies, the Federal Reserve estimated a range of small entities that might be affected but did not provide the source or methodology for how it approximated the number.
CFPB fully discussed sources and methodology in some of its certifications but not others. In three of five certifications that required additional analysis, CFPB provided thorough descriptions of its methodology and data sources for its conclusions. The agency detailed its assumptions and uncertainties in these rules and performed a sensitivity analysis to ensure the rules would not significantly affect small entities. However, in the other two certifications, CFPB did not discuss all of the data sources on which it relied.

Broader economic impacts. The regulators' certifications generally did not address broader economic impacts such as cumulative effects, competitive disadvantage, or disproportionality of effects and focused most of the analysis on specific compliance costs. In addition to the Office of Advocacy's guidance on analyzing broader economic impacts, Executive Order 13563 requires agencies to consider the cumulative economic impacts of regulations during the rulemaking process, which reinforces the agencies' obligations under RFA. While this executive order is not binding on independent regulatory agencies, such as those in our review, it represents a best practice for rulemaking. Of the 18 certifications that contain additional analysis, agencies discussed some aspect of broader economic impacts in 3. CFPB considered future changes in market share for small entities because of new requirements in one rule and whether the regulation placed small entities at a competitive disadvantage in another rule. OCC also examined a rule's impact on small entities' competitiveness and profitability in one certification. None of the regulators discussed cumulative effects in their certifications.

Defining key criteria. Regulators generally did not define the criteria they used for "substantial number" and "significant economic impact" in their certifications. RFA does not define these terms.
The Office of Advocacy has left it up to agencies to determine their own criteria, which it recommends that agencies discuss in their certifications. None of the regulators defined what would constitute a substantial number of small entities for the rule in the Federal Register notices. OCC was the only agency to define its criteria for a significant economic impact in its rulemaking, although it did not include this definition in all of its certifications. The other agencies did not define significant economic impact for the rule in the Federal Register notices. While CFPB did not disclose its criteria in the Federal Register notices, it defined these criteria in its internal workpapers for two certifications. Additionally, many of the analyses (13 of 18) did not discuss the significance of the rule's costs relative to the size of the business, such as profits, revenues, or labor costs.

Limited information. In addition, three of the certifications we reviewed included none of the Office of Advocacy's suggested components. The factual basis provided for these certifications did not include a description of the number of affected entities, the size of the economic impacts, or the justification for the certification. Two FDIC rules related to revisions of the treatment of financial assets transferred in connection with a securitization provided no additional information beyond the declarative statement that the agency certified that the rule would not have a significant economic impact on a substantial number of small entities. Additionally, an OCC certification in a joint rule that formalized the calculation method for mortgage loans exempted from certain requirements provided little information, although an internal agency workpaper detailed the number of small entities affected and the estimated economic impact that supported the certification.
OCC officials said that the agency will comply with instructions from its rulemaking procedure guide, which was updated in August 2016. According to the guide, certifications should include additional information beyond the certification statement, such as the number of affected small entities, the size of the economic impact, and the reason for the certification.

The regulators' guidance for complying with RFA generally does not include policies and procedures for helping to ensure consistent and complete RFA analyses. (We discuss the regulators' guidance later in this report.) Without policies and procedures that would help ensure that key components were incorporated in certification assessments—including disclosing the methodology and data sources of economic analyses and considering potential broad economic impacts—regulators may be limiting the effectiveness of their reviews. In turn, such limited reviews hinder the achievement of RFA's goal. For example, incomplete disclosure of methodology and data sources could limit the public and affected entities' ability to offer informed comments in response to regulators' certification assessments in proposed rules.

Many RFA-Required Analyses Had Weaknesses

In many recent regulatory flexibility analyses, the evaluation of key components—potential economic effects and alternative regulatory approaches—was limited. Many final rules described changes to limit burden, and few regulatory flexibility analyses concluded rules would have a significant impact on small entities. For most rules we reviewed, regulators were unable to provide documentation supporting their regulatory flexibility analyses.

Regulatory Flexibility Analyses Often Included Limited Evaluation of Costs and Alternatives

Our review of recent regulatory flexibility analyses found that in many cases, the evaluation of key components—potential economic effects and alternative regulatory approaches—was limited, although the extent varied by regulator.
RFA requires the initial and final analyses to include information to assist the agency, regulated entities, and the public in evaluating the potential impact of rules on small entities (see sidebars). The most important components include the assessment of a rule's potential economic effects on small entities—such as compliance costs—and the identification and evaluation of alternative regulatory approaches that may minimize significant economic effects while achieving statutory objectives. The Office of Advocacy's guide on RFA compliance explains that an agency principally should address these components in an initial regulatory flexibility analysis.

[Sidebar: Components of an initial regulatory flexibility analysis include a description and estimate—where feasible—of the number of small entities to which the rule will apply; a description of the projected reporting, recordkeeping, and other compliance requirements of the rule, including the type of necessary professional skills; and identification—to the extent practicable—of all relevant federal rules that may duplicate, overlap, or conflict with the proposed rule.]

See appendixes V–XII for a summary of findings for each of the six regulators. We reviewed regulatory flexibility analyses for recent rulemakings to assess the extent to which they included these and other elements and to examine the outcome of the analyses. For each regulator, we selected all final rules published in 2015 and 2016 for which the agency performed an initial and final regulatory flexibility analysis. For regulators with fewer than three such rules, we included rules published in prior years (on a full-year basis) until we reached three rules or 2013. See table 4 for the number of rules selected for each regulator. For each rule, we reviewed Federal Register notices for the proposed and final rules and supporting documentation on the initial and final regulatory flexibility analyses.

[Sidebar: Components of a final regulatory flexibility analysis include an estimate of the number of small entities to which the rule will apply, or an explanation of why no such estimate is available, and a description of steps taken to minimize the significant economic impact on small entities consistent with statutory objectives, including the reasons for selecting the alternative adopted in the final rule and why each of the other alternatives was rejected.]

In meeting the requirements, agencies may provide either a quantifiable or numerical description of the rule's effects or alternatives or more general descriptive statements if quantification is not practicable or reliable.

Many of the Federal Reserve's regulatory flexibility analyses lacked some required components and contained limited information and analysis. As previously discussed, the Federal Reserve generally performed regulatory flexibility analyses for its rulemakings regardless of the rule's potential impact on small entities. The majority (11 of 17) of the Federal Reserve's analyses stated that the rules either did not apply to small entities or lacked compliance requirements. Nevertheless, the Federal Reserve conducted regulatory flexibility analyses in which nearly all of the initial (14 of 17) and final analyses (15 of 17) concluded that the rule would not have a significant economic impact on small entities, which generally is a basis for certification. Examples included rules on capital and liquidity requirements applicable only to large banking organizations and rules that amended official regulatory interpretations or repealed regulations. None of the regulatory flexibility analyses performed by other regulators indicated that a rule did not apply to small entities or lacked compliance requirements. For additional information, see appendix V.

More specifically, the regulatory flexibility analyses for the 11 rules that did not apply to small entities or impose compliance requirements were minimal. The analyses did not describe or estimate compliance costs, identify alternatives, or include other items.
In the case of alternatives, the analyses either stated that there were no alternatives that would further minimize economic impact on small entities or requested comments on any alternatives. The analyses did not include some other information that could be available and relevant such as the reasons or need for the rule. Because the purpose of a regulatory flexibility analysis is to evaluate a rule’s potential effects on small entities, key components of the analysis may not be relevant or meaningful in such cases. For example, there may be no compliance costs to estimate, alternatives to consider, necessary professional skills to describe, or actions that could minimize impact on small entities. With their focus largely on explaining why the rule would not affect small entities rather than examining effects of compliance requirements and potential alternatives to limit such effects, such cases resemble certifications more than regulatory flexibility analyses. See appendix V for further information on the Federal Reserve’s regulatory flexibility analyses. The Federal Reserve’s regulatory flexibility analyses for six rules that might impose compliance requirements on small entities also had limitations. Specifically, most of the analyses (both initial and final) contained limited evaluation of the potential economic impact on small entities and lacked other components. RFA directs agencies to provide a quantifiable or numerical description of the effects of a proposed rule and allows a qualitative description in lieu of a numerical evaluation in instances when quantification is not practicable or reliable. Most of the analyses for rules that might impose compliance requirements on small entities did not include a description of potential compliance costs. Nearly all (five of six) did not quantify compliance costs in either the initial or final analyses or explain why such assessments were not possible. 
For two rules, the Federal Reserve's assessments of economic effects and compliance costs generally consisted of descriptive statements on the rule's provisions and coverage. For example, the final analysis for a rule on margin and capital requirements for participants in financial swap transactions stated that, among other things, all financial end users would be subject to the variation margin requirements and documentation requirements of the rule but that the Federal Reserve believes such treatment is consistent with current market practice and should not represent a significant burden on small financial end users. Although containing minimal information, analyses in three of the six rules indicated that the rules would have a largely beneficial impact for small entities by reducing burden or offering positive economic effects. These analyses generally lacked clear descriptions of any compliance requirements that would apply to small entities. For example, the initial and final analyses for a rule involving the Federal Reserve's emergency lending authority stated that participants at a minimum likely would be required to pay interest on loans extended to them and to keep records, but that the positive economic impact of receiving a loan likely would outweigh any economic burden. The initial analysis for another rule stated that the projected reporting, recordkeeping, and other compliance requirements were expected to be minimal but did not describe the requirements or any associated costs.

Alternatives. Few of the Federal Reserve's initial regulatory flexibility analyses identified alternatives to the proposed rule and some did not explain why there were no alternatives. Although most of the rules' analyses (10 of 17) described alternatives, all but 2 stated that there were no alternatives that would have less economic impact on small entities.
Of the 6 rules that might impose compliance requirements on small entities, 2 included such a statement, 1 had no mention of alternatives, and another solicited comments on any significant alternatives that would reduce burden associated with the proposed rule. Analyses for the other two rules described alternative approaches included in the proposed rule to limit economic impact on small entities. For example, one of the rules incorporated an applicability threshold for certain compliance requirements and the other exempted small entities from some of the rule's provisions and applied a longer transition period.

Other Components. Several of the final regulatory flexibility analyses also lacked other RFA-required components. In particular, only three of the six rules described steps taken to minimize economic impact on small entities and reasons for selecting the alternative adopted in the final rule. The other three rules did not include either component. The reasons cited for selecting the approach in the final rule generally reflected the actions taken by the agency to mitigate the rule's economic impact on small entities.

Other Regulators' Regulatory Flexibility Analyses Generally Included Most Required Components but Some Analyses Had Weaknesses

For the other financial regulators (FDIC, CFPB, CFTC, OCC, and SEC), most of the regulatory flexibility analyses we reviewed included the components required by RFA, but the extent of the analyses varied among regulators, with some lacking required information or having other limitations. For the majority (three of four) of FDIC's analyses, the agency indicated that the rules were not subject to RFA but that it voluntarily undertook the analyses to help solicit public comments on the rules' effects on small entities.
For these three rules, FDIC’s analyses described and quantified each of the rule’s compliance costs and concluded that each rule would not have a significant economic impact on small entities, but other components were missing. For example, these rules’ analyses focused on illustrating how the rule would not have an economic impact on small entities and did not include other required components including a description and assessment of regulatory alternatives. The initial and final analyses for each of the rules were nearly identical and did not include statements about alternatives, any issues raised in public comments, or steps to minimize impact on small entities, among other missing components. In that regard, FDIC’s analyses for these rules—similar to many of the Federal Reserve’s analyses—resembled a certification. The regulatory flexibility analyses for the fourth FDIC rule that we reviewed included all required components. CFPB’s regulatory flexibility analyses generally included all required components. However, for three of the seven rules neither the initial nor final analyses estimated compliance costs for small entities. In some cases, the analyses stated that costs likely would be minimal or described difficulties in estimating costs such as a lack of information about the current practices of subject entities. Of the analyses that included cost estimates, several did not quantify all identified costs or explain why such estimates were not available. Unlike other regulators we reviewed, CFPB is required to seek input from small entities during the rulemaking process (through Small Business Review Panels) when proposed rules are expected to have a significant economic impact on a substantial number of small entities. CFPB’s regulatory flexibility analyses often incorporated information received from these panels in its assessment of potential economic effects and regulatory alternatives. 
For example, several analyses that estimated compliance costs relied on information from small entities that participated in the panel process as well as data from other sources. The description of regulatory alternatives often reflected comments received from small-entity representatives. Although each of CFPB's initial analyses described alternatives, in some cases, it was not clear whether CFPB had identified alternatives of its own.

CFTC performed initial and final analyses for one rule during the period we reviewed, and the analyses had limited evaluation of potential effects on small entities. The analyses did not estimate the number of affected entities or compliance costs, but indicated that the rule's compliance requirements would be minimal while concluding the rule likely would have a beneficial impact on small entities. The discussion of compliance requirements in the final analysis stated only that the rule would relieve affected entities from certain compliance requirements, although the initial analysis stated that the proposed rule would impose a new requirement on certain entities—which could include small entities—to annually provide CFTC with a notice about certain trading activity. In other sections of the final rule, CFTC discussed its decision to address concerns raised in public comments by not adopting the notice requirement.

OCC also had one rule with initial and final regulatory flexibility analyses, and its analyses included nearly all required components. The rule revised capital requirements for banking organizations and was issued jointly with the Federal Reserve. The initial analysis described multiple alternative approaches that it stated were included in the proposed rule to incorporate flexibility and reduce burden for small entities. However, other than listing the alternatives and requesting comment, the analysis did not discuss or evaluate how the options would minimize economic impact on small entities.
The regulatory flexibility analysis in the final rule noted that the Small Business Administration's Chief Counsel for Advocacy submitted a comment letter in which it encouraged the agencies to provide more detailed discussion of the alternatives and the potential burden reductions associated with them.

SEC's regulatory flexibility analyses also included most components, but some rules' assessment of compliance costs and alternatives had limitations. Specifically, although all of the rules described compliance requirements, some did not describe (four of nine) or estimate (five of nine) the costs they might impose on subject entities. For example, in December 2015, SEC published a proposed rule requiring resource extraction issuers to disclose certain payments. The proposed rule's initial regulatory flexibility analysis described requirements for the disclosures. However, the regulatory flexibility analysis did not discuss or evaluate potential compliance costs and concluded with statements on alternatives and a request for comments.

Many of the SEC rules we reviewed focused on reasons why alternatives were not appropriate and did not discuss specific options for minimizing economic impact on small entities. As part of describing any significant alternatives to the proposed rule which accomplish statutory objectives while minimizing any significant economic impact on small entities, RFA requires that initial regulatory flexibility analyses discuss alternatives such as the establishment of differing compliance or reporting requirements or timetables that take into account the resources available to small entities; the clarification, consolidation, or simplification of compliance and reporting requirements under the rule for such small entities; the use of performance rather than design standards; and an exemption from coverage of the rule, or any part thereof, for such small entities.
For five of the nine rules, the initial analysis discussed the general types of alternatives listed in RFA but did not describe specific options for implementing them in the proposed rule. For example, the initial regulatory flexibility analyses did not identify how compliance or reporting requirements might be altered for small entities or in what ways requirements could be simplified. One of the rules involved changes to SEC’s requirements for the reporting and disclosure of information by registered investment companies. The initial analysis stated that the agency had considered (1) establishing different reporting requirements or frequency to account for resources available to small entities, (2) using performance rather than design standards, and (3) exempting small entities from all or part of the proposal. However, the analysis lacked details about what different reporting requirements, frequencies, performance standards, or partial exemptions it considered for small entities. In addition, for seven of the rules—including the five rules considering only the general alternative types—the discussion was limited to describing the reasons why regulatory alternatives were not appropriate. The reasons cited typically included that the different regulatory approaches would not be consistent with the agency’s goals or statutory objectives. For example, the analysis for SEC’s rule on reporting and disclosure by registered investment companies concluded that the agency believed that establishing different reporting requirements or frequency for small entities would not be consistent with SEC’s goal of industry oversight and investor protection. However, for this and the other proposed rules, the analyses generally did not examine the extent to which the considered alternatives could limit the rule’s economic impact on small entities. 
In another case, a rule's final analysis stated that one public commenter raised concerns that the initial analysis did not identify significant alternatives, including that it only considered alternatives related to exempting small business from the proposed rules. Several of the commenters suggested additional alternatives for reducing burden. The lack of specific details about potential alternatives may limit the usefulness of public comments on SEC's regulatory flexibility analyses and its ability to identify alternatives that could reduce economic impacts on small entities while achieving a rule's objectives.

Many Analyses Did Not Disclose Information Sources or Methodology

Most regulators (five of six) did not disclose the data sources or methodologies used for estimating the number of subject small entities or compliance costs for the regulatory flexibility analyses we reviewed. OMB guidance on regulatory analysis—regulatory agencies' anticipation and evaluation of the likely consequences of rules—states that agencies should clearly set out the basic assumptions, methods, and data underlying the analysis and discuss the uncertainties associated with the estimates. While independent regulatory agencies, such as those in our review, are not required to follow the OMB guidance, it provides a strong set of analytical practices relevant to agency rulemakings that serves as best practices for all agencies. Many initial analyses (11 of 23) and final analyses (11 of 24) that estimated the number of subject small entities did not describe the data source used for the estimate. Each of the regulators except for CFPB (which included data sources) and CFTC (whose only rule did not include an estimate) had at least one rule that did not disclose the data source for the estimate of subject small entities.
Furthermore, many analyses that estimated a rule's compliance costs (5 of 12 initial and 5 of 14 final) did not describe the information sources used to calculate the projections. The analyses for several additional rules included data sources for some but not all cost estimates. Except for CFPB, each of the regulators that estimated compliance costs had at least one rule that lacked information on data sources for some estimates. For example, the regulatory flexibility analyses for a joint OCC and Federal Reserve rule discussed how the agencies estimated costs of implementing new capital requirements but did not disclose the data sources or methodology used to calculate the costs of creditworthiness measurement activities. A lack of information necessary to understand how an agency evaluated a rule's economic impact on small entities may limit the extent to which the public and other interested parties can meaningfully comment on the analyses.

Few Rules Found to Have a Significant Impact and Many Described Changes to Reduce Burden

Although a regulatory flexibility analysis is required only for rules that may have a significant economic impact on a substantial number of small entities, few final analyses concluded that the rules would have such an impact. Specifically, the final analysis for only 4 of 39 rules that we reviewed stated that the rule likely would have a significant economic impact. Final analyses for the majority of rules (20 of 39) concluded there would be no significant impact and the remainder did not have a clear conclusion. The Federal Reserve accounted for 15 of those 20 analyses. As discussed previously, nearly all of the Federal Reserve's regulatory flexibility analyses concluded a rule would not have a significant impact on small entities.
About half of the regulatory flexibility analyses we reviewed (18 of 39) described changes to the proposed rule to limit economic impact on small entities and most were by regulators other than the Federal Reserve. Several rules (12 of 39) described changes attributable to comments on the regulatory flexibility analyses. Specifically, for regulators other than the Federal Reserve, the final analyses for about half of the rules (11 of 22) noted receiving public comments on the initial analysis and nearly all of those described changes resulting from the comments. A smaller number of rules described changes related to comments on the initial analysis received from the Office of Advocacy. Some rules also described other changes to the proposed rule, including changes in response to general public comments and the adoption of alternatives. For rules that identified alternatives to a proposed rule in the initial analysis, about half of the final analyses (10 of 21) described reasons for rejecting all the alternatives. An additional 2 rules noted reasons for rejecting some of the alternatives. For further information on the results of regulators’ regulatory flexibility analyses, see appendix XII. Regulators described taking various steps to minimize impact on small entities, although they did not all result from changes to the proposed rule and were not all clearly attributable to the agency’s consideration of alternatives. For example, some analyses described provisions that had been included as part of the proposed rule. 
For rules that disclosed actions to minimize effects on small entities, most regulators noted multiple actions that included
- reducing compliance requirements such as for reporting and disclosure,
- exempting small entities from certain requirements,
- increasing applicability or exemption thresholds,
- providing for flexibility in meeting compliance requirements,
- clarifying and simplifying compliance requirements,
- not adopting certain provisions of the proposed rule, and
- providing for delayed or gradual implementation of compliance requirements.
Although some actions were specific to small entities, many applied more broadly, such as to all subject firms.

Most Regulators Lacked Documentation of Regulatory Flexibility Analysis and Certifications for Most Rules

For most rules we reviewed, regulators (five of six) were unable to provide documentation supporting their regulatory flexibility analyses or certification decisions, although the extent of documentation varied by regulator (see table 5). We requested supporting documentation for the 39 rules we reviewed for which the agency performed initial and final regulatory flexibility analyses and the 66 rules for which the agency made a certification determination. Staff from two regulators—CFPB and OCC—provided documentation for all or nearly all of the rules we reviewed. Many of these documents were formal analysis or decision memorandums on assessing a rule’s potential economic impact on small entities. For CFPB rules that had regulatory flexibility analyses, documentation included RFA-required reports summarizing the results of Small Business Review Panels. Staff from the other regulators produced documentation for fewer or no rules and the documents they provided were largely limited and informal. For example, other than for CFPB and OCC, RFA-related documentation generally consisted of emails between agency staff and data queries and output files on the number of affected entities and potential economic effects.
OMB guidance on regulatory analysis states that agencies should prepare documentation of their economic analysis so that a qualified third party reading the analysis can understand the basic elements and the way in which the agency developed its estimates. The guidance also states that agencies are expected to document all the alternatives considered as part of their regulatory analysis and which alternatives were selected for emphasis in the main analysis. As previously discussed, independent regulatory agencies are not required to follow the OMB guidance, but it provides a strong set of analytical practices relevant to agency rulemakings. A lack of documentation of the analysis supporting regulators’ RFA implementation limits transparency and accountability.

Regulatory Guidance Generally Does Not Include Policies or Procedures for Ensuring Consistent and Complete RFA Analyses

Most of the Regulators Have Established General Guidance for Complying with RFA Requirements

Most regulators (five of six) have established written guidelines that restate the statutory requirements for certification and for preparing the regulatory flexibility analyses and provide some additional guidance for staff conducting the analyses, as shown in table 6. However, they generally have not developed comprehensive policies and procedures to assist staff in complying with RFA, which may contribute to the weaknesses we identified in some certifications and regulatory flexibility analyses. The guidelines for FDIC, OCC, CFPB, and SEC discuss regulatory flexibility analyses as part of their general rulemaking guidance for staff. At a minimum, each of these regulators’ guidance describes the statutory requirements under RFA for certifications and for preparing the initial and final analyses, and, for CFPB, agency-specific RFA requirements.
These four agencies also provide some additional information intended to be useful in complying with RFA requirements, such as excerpts from the Office of Advocacy’s RFA compliance guide. For example, some of the incorporated Office of Advocacy guidance covers considerations for determining whether a rule would have a significant economic impact on a substantial number of small entities. In addition, some regulators’ RFA guidelines include organizational information for coordinating with certain agency departments (such as offices responsible for economic analysis or legal review) and identifying staff responsible for completing RFA analyses. Until recently, CFTC and the Federal Reserve had not established any policies, procedures, or guidance for conducting regulatory flexibility analyses, except for a policy statement CFTC issued in 1982 that defines small entities and an informal Federal Reserve document listing RFA requirements. Since we started our review, CFTC announced a working group intended to enhance compliance with RFA. According to CFTC staff, the group began its work in April 2017 with a focus on updating CFTC’s small-entity definitions. Staff said that the group’s next task would be to formulate RFA policies and procedures with a goal of adopting them in spring 2018. Also during the course of our review, the Federal Reserve finalized a handbook covering guidelines and policies for RFA and small-entity compliance guides that it provided to us in November 2017. Previously, the Federal Reserve’s RFA guidance consisted of an informal resource document identifying RFA requirements that it made available to rulemaking staff.
Regulators’ RFA Guidance Does Not Include Policies or Procedures for Helping Ensure Consistent and Complete RFA Analyses

While the financial regulators’ guidance discusses RFA requirements for regulatory flexibility analyses and includes some information on how to approach these analyses, it generally does not address how each agency helps ensure that its rulemakings consistently and completely comply with RFA requirements. Federal internal control standards state the importance for agency management to establish through policies and procedures the actions needed to achieve objectives. In addition, Executive Order 13272 required agencies to establish policies and procedures to promote compliance with RFA. While this executive order is not binding on independent regulatory agencies, it represents a best practice for rulemaking. We found that the regulators’ guidance lacks specific details on the procedures by which the agency expects rulemaking staff to implement RFA requirements. Other than restating RFA requirements and identifying organizational responsibilities, regulators’ guidance documents largely are limited to offering suggestions for rulemaking staff to consider while preparing RFA sections of the rule. For example, in many cases, the guidance documents include recommendations and excerpts from the Office of Advocacy’s RFA compliance manual such as factors to consider about what constitutes a significant economic impact and a substantial number of small entities. In another case, guidance suggests staff refer to RFA statements included in previously issued rules to use as examples. In addition, some guidance documents described agency policies on certain RFA elements. For example, one regulator’s guidance states a preference for completing an initial regulatory flexibility analysis, rather than making a certification determination.
Yet, while these types of guidance may be instructive and allow for necessary flexibility, they do not represent specific and comprehensive procedures for implementing RFA requirements. As illustrated in table 7, the extent to which regulators’ guidance includes policies and procedures varies but generally does not include policies or procedures for
- identifying definitions or criteria for assessing whether a rule will have a significant economic impact on a substantial number of small entities;
- evaluating a rule’s potential economic impact on small entities, including compliance costs and broad effects such as cumulative effects, competitive advantage, and disproportionality;
- identifying and assessing regulatory alternatives that could minimize impact on small entities while accomplishing statutory objectives;
- disclosing analytical methodology and data sources; and
- creating and maintaining documentation that supports analytical findings.
Some regulators’ guidance, including CFPB’s and OCC’s, includes policies and procedures for certain elements—such as disclosing methodology and sources—but not for others, such as defining what constitutes significant economic impact or a substantial number of small entities. FDIC’s rule development guide includes guidance for certification determinations (largely from Office of Advocacy’s compliance guide) but not for initial and final regulatory flexibility analyses, for which the guide restates RFA requirements. SEC’s handbook describes some policies and procedures on alternatives, but it focuses on having RFA statements acknowledge consideration of each RFA alternative type even if unsuitable. It also includes some policies and procedures for assessing economic impact. However, the handbook was last revised in 1999, so it does not incorporate recommendations from the Office of Advocacy’s compliance guide, and two SEC divisions have developed their own manuals, which generally restate RFA requirements.
As previously described, we found inconsistencies and weaknesses in financial regulators’ certifications and regulatory flexibility analyses that we reviewed, including for the key elements discussed in this section. The shortcomings are attributable in part to the regulators’ lack of comprehensive policies and procedures for RFA requirements. Our prior work on RFA implementation by federal agencies found that uncertainties about RFA’s requirements and varying interpretations of those requirements by federal agencies limited the act’s application and effectiveness. However, the Office of Advocacy subsequently published guidance on complying with RFA requirements that includes information to help agencies interpret and implement RFA requirements. Such guidance could help regulators develop comprehensive and specific policies and procedures. Without such policies and procedures, regulators’ ability to consistently and effectively meet RFA objectives may be limited.

Financial Regulators Varied in Their Approach to RFA-Required Retrospective Reviews

Federal Banking Regulators Relied on Other Retrospective Reviews to Meet RFA Section 610 Requirements

As previously discussed, section 610 of RFA requires agencies to review, within 10 years of adoption, those rules assessed as having a significant economic impact on a substantial number of small entities to determine if they should be continued without change, amended, or rescinded to minimize any significant economic impact on small entities. During the last 10 years, the three federal banking regulators (Federal Reserve, FDIC, and OCC) used other retrospective reviews that they said fulfilled RFA requirements. Specifically, the banking regulators said that the retrospective reviews required under the Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA) also satisfied RFA section 610 requirements.
EGRPRA requires the federal banking regulators to identify outdated or otherwise unnecessary regulatory requirements imposed on insured depository institutions every 10 years. We compared EGRPRA requirements for retrospective reviews to those of section 610 and found they do not fully align (see table 8). For example, the EGRPRA review process relies on public comments to identify rules that may be outdated, unnecessary, or unduly burdensome. The comments are solicited through public notices in the Federal Register and through public outreach meetings held across the country. In contrast, public comments are only one component of section 610 reviews. Following a public notice and comment period, section 610 requires agencies to evaluate rules found to have a significant economic impact on a substantial number of small entities to identify opportunities to reduce unnecessary burden. The section 610 reviews are to consider five specific factors, such as the degree to which technology and economic conditions have changed in the area affected by the rule. Section 610 reviews focus specifically on reducing unnecessary regulatory burden on small entities; EGRPRA reviews focus more broadly on reducing regulatory burden on all insured depository institutions. We reviewed the 2007 and 2017 EGRPRA reports, along with their preceding Federal Register notices, and found that the regulators solicited comment from the public on the burden of regulations on community banks and other smaller, insured depository institutions. However, we found that the final reports primarily focus on the issues identified through public comments and generally did not include independent agency consideration of the impact of regulations on small entities, as required by section 610. The public notice requirements for RFA section 610 and EGRPRA also differed. 
RFA requires agencies to publish in the Federal Register a list of the rules that have a significant economic impact on a substantial number of small entities and that are to be reviewed pursuant to section 610 during the upcoming year. This list must include a brief description of each rule and the need for and legal basis of each rule. The notices alert the public to specific rules that may affect small entities and request public comment on these rules. EGRPRA public notice requirements do not require agencies to specifically identify rules that have an impact on small entities. Rather, agencies must at regular intervals provide notice and solicit public comment on a particular category or categories of rules (such as consumer protection, safety and soundness) governing all insured depository institutions. The notices request commenters to identify areas of the regulations that are outdated, unnecessary, or unduly burdensome. Our searches of the Federal Register turned up no notices of section 610 reviews posted by the regulators in the last 10 years. In its RFA guide, the Small Business Administration’s Office of Advocacy stated that agencies may satisfy section 610 requirements through other retrospective reviews if these other reviews meet the criteria of section 610. To obtain credit for a section 610 review through another review process, the Office of Advocacy recommends that agencies adequately communicate with stakeholders and the Office of Advocacy. According to an official from the Office of Advocacy, the office has not yet made a determination on whether the EGRPRA review process satisfies the requirements of section 610. Although the agencies stated that they fulfill RFA requirements through EGRPRA, without confirming this with the Office of Advocacy, it is possible that they are not meeting the RFA section 610 requirements and therefore may not be achieving the small-entity burden reduction that the statute seeks to ensure.
We found that the regulators lack policies and procedures for how to conduct section 610 reviews or for providing a rationale for meeting the section 610 review requirements through other retrospective review processes.

SEC Conducted RFA Section 610 Reviews, but the Reviews Were Late and Not Fully Consistent with RFA Requirements or Office of Advocacy Guidance

Our review of SEC’s section 610 reviews found that they were conducted late and were not fully consistent with RFA requirements or the Office of Advocacy’s guidance for such reviews. Although SEC staff have a process for tracking which rules are due for section 610 reviews, SEC conducted all but one of its reviews 12 years after the rules were published. According to RFA requirements, rules must be reviewed within 10 years of their publication as final rules. SEC staff told us that SEC conducted a broader review than required by RFA and recommended by the Office of Advocacy. Moreover, staff said that SEC conducted section 610 reviews for all rules previously published for notice and comment to assess the continued utility of the rules. Agency officials stated that when they prepare the agency’s annual Federal Register notice of rules to be reviewed during the succeeding 12 months, they consult a chronological list of final rules adopted by the agency to determine which rules are due for a section 610 review. However, when we reviewed documentation of 46 section 610 reviews SEC staff conducted in 2015 and 2016, we found that each of the reviews was conducted for a rule adopted in 2003 or 2004, with 45 rule reviews being conducted 12 years after their publication as final rules. By not conducting section 610 reviews within the time frame established by RFA, SEC may delay taking timely action to minimize significant economic impact of rules on small entities. In general, SEC did not follow Office of Advocacy’s guidance for conducting section 610 reviews.
The Office of Advocacy recommends that to evaluate and minimize any significant economic impact of a rule on a substantial number of small entities, agencies may want to use an economic analysis similar to the initial regulatory flexibility analysis. Additionally, OMB guidance on regulatory analysis states that agencies should provide documentation that analysis is based on the best reasonably obtainable scientific, technical, and economic information available. As previously discussed, independent regulatory agencies are not required to follow the OMB guidance, but it provides a strong set of analytical practices relevant to agency rulemakings. To facilitate its section 610 reviews, SEC staff used a template that prompts staff to consider each of the five RFA-required section 610 considerations and to document the conclusion of the review (if the rule should be continued without change, amended, or rescinded). We reviewed the templates for all 46 reviews conducted between 2015 and 2016 and found that SEC staff consistently followed this template to document their conclusions. However, the reviews generally lacked substantive analysis and no rules were amended as a direct result of their section 610 review. Overall, of the 46 reviews, 7 identified comments or complaints from the public, 4 identified changes in technology, economic conditions, or other factors in the area affected by the rule, and 4 identified instances of overlap, conflict or duplication. The reviews generally provided no evidence of empirical analysis and no data to support the conclusions of the reviews, as recommended by the Office of Advocacy and OMB. Furthermore, in most cases, the reviews lacked a description of whether, or to what extent, the rule was affecting small entities. 
For example, when addressing the first RFA-required consideration, describing and evaluating the continuing need for a rule, most SEC section 610 reviews included language from the final rule as a description and included SEC’s conclusion that the rule continues to be necessary. The Office of Advocacy also suggests that useful section 610 reviews should evaluate potential improvements to the rule by going beyond obvious measures and evaluating factors such as unintended market effects and distortions and widespread noncompliance with reporting and other paperwork requirements. We found no evidence that these factors were considered. The Office of Advocacy further recommends that agencies pay particular attention to changes in the cumulative burden faced by regulated entities. We did not find evidence that SEC considered the cumulative burden faced by regulated entities in the reviews we examined. By not including these best practice elements as part of its section 610 reviews, SEC may not fully achieve RFA’s purpose of minimizing significant economic impact of rules on small entities. SEC does not have written policies or procedures for completing rule reviews pursuant to RFA section 610, potentially contributing to the weaknesses we identified on the timing of the reviews and the lack of data and analysis to support the review findings. As previously mentioned, federal internal control standards state the importance for agency management to establish policies and procedures needed to achieve objectives. In addition, Executive Order 13272 requires agencies to establish policies and procedures to promote compliance with RFA. While this executive order is not binding on independent regulatory agencies, including SEC, it represents a best practice for rulemaking. SEC also does not publicly disclose the findings or conclusions of its section 610 reviews.
Although RFA does not require that agencies publish the results of their section 610 reviews, the Office of Advocacy recommends that to enhance transparency, agencies should communicate with interested entities about the status of ongoing as well as completed section 610 reviews. Several executive orders also highlight the importance of public disclosure of retrospective reviews. For example, Executive Order 13563 recommends that retrospective analyses, including supporting data, should be released online whenever possible. Executive Order 13610 reiterated this recommendation, stating that public disclosure promotes an open exchange of information. While these executive orders are not binding on independent regulatory agencies, we consider them a best practice for rulemaking. OMB guidance on regulatory analysis states that to provide greater access to regulatory analysis, agencies should post their analysis, along with supporting documents, on the Internet so the public can review the findings. Staff from SEC confirmed that they do not publish the results or summaries of their section 610 reviews, stating that they are not required to do so by law. Lack of public disclosure limits the transparency of section 610 reviews, hindering the public’s ability to hold agencies accountable for the quality and conclusions of their reviews.

CFTC and CFPB Plan to Develop Policies and Procedures for Future Retrospective Reviews

The other two regulators we reviewed, CFTC and CFPB, plan to put procedures in place for section 610 reviews. According to CFTC officials, the agency has not conducted any section 610 reviews in at least the last 10 years. CFTC officials confirmed that the agency currently has no policies or procedures in place to track which rules require reviews or to conduct the reviews.
Furthermore, agency officials were unable to identify any final rules published by the agency from 1997 through 2007 that were found to have a significant economic impact on a substantial number of small entities and therefore would have required a section 610 review. According to CFTC officials, an agency working group has a goal to develop a process and criteria for conducting section 610 reviews. Additionally, agency officials stated an interest in establishing an automated system to develop a schedule for tracking which rules require section 610 reviews. CFPB has not yet been required to conduct any section 610 reviews. Section 610 reviews are required within 10 years of a rule’s adoption as a final rule; to date, none of the rules issued by CFPB, which was created in 2010, have met this deadline. CFPB officials confirmed that CFPB has conducted no section 610 reviews and stated that the agency currently has no formal plan or procedure in place to begin conducting these reviews. However, officials further stated that CFPB has had initial planning discussions about the section 610 review requirements and their role in a comprehensive regulatory review program.

Conclusions

RFA aims to have agencies tailor regulatory requirements to the scale of regulated entities in a manner consistent with the objectives of the rule and applicable statutes. To achieve this goal, RFA requires agencies to assess the impact of proposed rules on small entities, solicit and consider flexible regulatory proposals, and explain the rationale for their actions. While many of the regulators’ certification determinations and regulatory flexibility analyses incorporated RFA-required components, the weaknesses and inconsistencies we found—in the analyses and in documentation—could undermine the act’s goal.
Some certification determinations lacked important information recommended by the Office of Advocacy and OMB, including data sources and methodologies, definitions, and consideration of broad economic impacts. Many evaluations of key components—potential economic effects and alternative regulatory approaches—in the regulatory flexibility analyses were limited. For most rules we reviewed, regulators were unable to provide documentation supporting the economic analysis underlying their regulatory flexibility analyses—including their certification decision. Moreover, regulators generally lacked comprehensive policies and procedures for RFA implementation, a potential contributing factor for many of the weaknesses we identified. By developing policies and procedures that provide specific direction to rulemaking staff, the regulators could better ensure consistent and complete implementation of RFA requirements and more fully realize the RFA goal of appropriately considering and minimizing impacts on small entities during and after agency rulemakings. The issues we identified with section 610 reviews included the use of a substitute review process as well as gaps or weaknesses in analysis and documentation. To fulfill section 610 requirements, the Federal Reserve, FDIC, and OCC used other retrospective reviews required under EGRPRA that do not fully align with requirements under section 610. SEC’s section 610 reviews are not fully consistent with RFA requirements and Office of Advocacy and OMB guidance (for example, not within the 10-year time frame, no evidence of empirical analysis, and no data to support the conclusions of the reviews). CFTC has not recently completed section 610 reviews and CFPB has not yet been required to do so. These regulators have begun or will soon begin developing policies and procedures for conducting the reviews. 
By meeting section 610 review requirements and using best practices, regulators will be in a better position to minimize any significant economic impact of a rule on small entities that the statute seeks to ensure. Additionally, for regulators that have not publicly issued their findings or for those that have yet to undertake the reviews, it will be important to adopt best practices for transparency and accountability.

Recommendations for Executive Action

We are making a total of 10 recommendations among the six financial regulators we reviewed:

FDIC should develop and implement specific policies and procedures for how it will consistently comply with RFA requirements and key aspects of Office of Advocacy and OMB guidance that include the following three elements:
- processes for creating and maintaining documentation sufficient to support analysis of economic impact and alternatives;
- processes for disclosing the methodology—including criteria for assessing significant economic impact and a substantial number of small entities—and data sources of economic analysis supporting certification determinations and regulatory flexibility analyses; and
- processes for considering to the extent practicable a rule’s potential economic impacts on small entities, including for evaluating broad economic impacts of regulations in certification determinations and assessing alternatives that could minimize impact on small entities. (Recommendation 1)

FDIC should coordinate with the Office of Advocacy to determine whether the EGRPRA review process satisfies the requirements of section 610 and, if not, what steps should be taken to align the process with section 610 requirements.
If additional actions are needed, FDIC should
- develop and implement specific policies and procedures for performing section 610 reviews, including processes for determining which rules require review, posting notices of upcoming reviews in the Federal Register, and maintaining documentation supporting the analysis and conclusions of RFA-required considerations; and
- publicly disclose the reviews, or summaries of the reviews, with the basis for any conclusions. Such disclosure could include publishing results as part of the EGRPRA report, in the Federal Register, or on the agency’s website. (Recommendation 2)

OCC should develop and implement specific policies and procedures for how it will consistently comply with RFA requirements and key aspects of Office of Advocacy and OMB guidance that include the following three elements:
- processes for creating and maintaining documentation sufficient to support analysis of alternatives that could minimize impact on small entities;
- processes for disclosing the methodology—including criteria for assessing significant economic impact and a substantial number of small entities—and data sources of economic analysis supporting certification determinations and regulatory flexibility analyses; and
- processes for considering to the extent practicable a rule’s potential economic impacts on small entities, including for evaluating broad economic impacts of regulations in certification determinations and assessing alternatives that could minimize impact on small entities. (Recommendation 3)

OCC should coordinate with the Office of Advocacy to determine whether the EGRPRA review process satisfies the requirements of section 610 and, if not, what steps should be taken to align the process with section 610 requirements.
If additional actions are needed, OCC should
- develop and implement specific policies and procedures for performing section 610 reviews, including processes for determining which rules require review, posting notices of upcoming reviews in the Federal Register, and maintaining documentation supporting the analysis and conclusions of RFA-required considerations; and
- publicly disclose the reviews, or summaries of the reviews, with the basis for any conclusions. Such disclosure could include publishing results as part of the EGRPRA report, in the Federal Register, or on the agency’s website. (Recommendation 4)

The Federal Reserve should develop and implement specific policies and procedures for how it will consistently comply with RFA requirements and key aspects of Office of Advocacy and OMB guidance that include the following three elements:
- processes for creating and maintaining documentation sufficient to support analysis of economic impact and alternatives;
- processes for disclosing the methodology—including criteria for assessing significant economic impact and a substantial number of small entities—and data sources of economic analysis supporting certification determinations and regulatory flexibility analyses; and
- processes for considering to the extent practicable a rule’s potential economic impacts on small entities, including for evaluating broad economic impacts of regulations in certification determinations and assessing alternatives that could minimize impact on small entities. (Recommendation 5)

The Federal Reserve should coordinate with the Office of Advocacy to determine whether the EGRPRA review process satisfies the requirements of section 610 and, if not, what steps should be taken to align the process with section 610 requirements.
If additional actions are needed, the Federal Reserve should develop and implement specific policies and procedures for performing section 610 reviews, including processes for determining which rules require review, posting notices of upcoming reviews in the Federal Register, and maintaining documentation supporting the analysis and conclusions of RFA-required considerations; and publicly disclose the reviews, or summaries of the reviews, with the basis for any conclusions. Such disclosure could include publishing results as part of the EGRPRA report, in the Federal Register, or on the agency’s website. (Recommendation 6)

CFPB should develop and implement specific policies and procedures for how it will consistently comply with RFA requirements and key aspects of Office of Advocacy and OMB guidance that include the following three elements:

- processes for creating and maintaining documentation sufficient to support analysis of alternatives that could minimize the impact on small entities;
- processes for considering to the extent practicable a rule’s potential economic impacts on small entities, including for evaluating broad economic impacts of regulations in certification determinations and assessing alternatives that could minimize impact on small entities; and
- in developing policies and procedures for section 610 reviews, include processes for determining which rules require review, posting notices of upcoming reviews in the Federal Register, maintaining documentation supporting the analysis and conclusions of RFA-required considerations, and establishing procedures for publicly disclosing the review or summaries (such as in the Federal Register or on the agency’s website).
(Recommendation 7)

CFTC should develop and implement specific policies and procedures for how it will consistently comply with RFA requirements and key aspects of Office of Advocacy and OMB guidance that include the following four elements:

- processes for creating and maintaining documentation sufficient to support analysis of economic impact and alternatives;
- processes for disclosing the methodology—including criteria for assessing significant economic impact and a substantial number of small entities—and data sources of economic analysis supporting certification determinations and regulatory flexibility analyses;
- processes for considering to the extent practicable a rule’s potential economic impacts on small entities, including for evaluating broad economic impacts of regulations in certification determinations and assessing alternatives that could minimize impact on small entities; and
- in developing policies and procedures for section 610 reviews, include processes for determining which rules require review, posting notices of upcoming reviews in the Federal Register, maintaining documentation supporting the analysis and conclusions of RFA-required considerations, and establishing procedures for publicly disclosing the review or summaries (such as in the Federal Register or on the agency’s website).
(Recommendation 8)

SEC should develop and implement specific policies and procedures for how it will consistently comply with RFA requirements and key aspects of Office of Advocacy and OMB guidance that include the following four elements:

- processes for creating and maintaining documentation sufficient to support analysis of economic impact and alternatives;
- processes for disclosing the methodology—including criteria for assessing significant economic impact and a substantial number of small entities—and data sources of economic analysis supporting certification determinations and regulatory flexibility analyses;
- processes for considering to the extent practicable a rule’s potential economic impacts on small entities, including for evaluating broad economic impacts of regulations in certification determinations and assessing alternatives that could minimize the impact on small entities; and
- processes for performing section 610 reviews, including determining which rules require review, posting notices of upcoming reviews in the Federal Register, and maintaining documentation supporting the analysis and conclusions of RFA-required considerations. (Recommendation 9)

SEC should publicly disclose its section 610 reviews, or summaries of the reviews, with the basis for any conclusions. Such disclosure could include publishing results in the Federal Register or on the agency’s website. (Recommendation 10)

Agency Comments and Our Evaluation

We provided a draft of this report to CFPB, CFTC, the Federal Reserve, FDIC, OCC, Office of Advocacy, and SEC for review and comment. CFPB, CFTC, the Federal Reserve, FDIC, and SEC provided written comments that we have reprinted in appendixes XIII–XVII, respectively. CFTC, the Federal Reserve, and FDIC also provided technical comments, which we have incorporated, as appropriate. We received technical comments from OCC too late to be incorporated in the final product.
Although the comments were not incorporated, they do not significantly affect the facts or conclusions we presented.

In their written comments, CFPB, CFTC, the Federal Reserve, FDIC, and SEC generally agreed with the report’s recommendations. CFPB recognized the importance of having specific policies and procedures to consistently comply with RFA requirements. CFPB noted the existence of formal guidance instructing staff on conducting and documenting analyses for substantive rulemakings, including following RFA, and stated its commitment to updating its policies and procedures—and developing them for section 610 reviews—to ensure it will consistently comply with RFA requirements.

In written comments provided by CFTC, the agency stated its commitment to fully complying with RFA and described the formation and progress of its interdivisional working group for enhancing RFA implementation. CFTC noted that our recommendations are largely consistent with the planned efforts of the working group and that the group will use the recommendations as a guide in completing its work. CFTC also explained that while not a clear requirement of RFA, it will carefully consider making the public aware of the results of section 610 reviews in cases in which the review does not lead to proposed changes to a rule.

In its written comments, the Federal Reserve noted that it strives for consistent and complete compliance with RFA requirements. Regarding our recommendation to develop and implement specific policies and procedures for complying with RFA requirements and key aspects of Office of Advocacy and OMB guidance, the Federal Reserve stated it plans to review existing policies and procedures to develop and implement, as appropriate, additional processes with respect to documentation, disclosing methodology and data sources, and considering a rule’s potential economic impact on small entities.
Regarding our recommendation to coordinate with the Office of Advocacy and take steps to align the EGRPRA review process with section 610 requirements, the Federal Reserve stated that it will coordinate with the Office of Advocacy and noted that it also plans to conduct a broader review of processes for section 610 reviews to ensure they are comprehensive and transparent.

In its written comments, FDIC stated it will consider our recommendations as it continues to enhance its policies and procedures for performing regulatory analyses, in particular compliance with RFA. Regarding our recommendation to develop and implement specific policies and procedures for complying with RFA requirements and key aspects of Office of Advocacy and OMB guidance, FDIC noted that although independent agencies are not required to follow certain guidance used as criteria in the report, it will continue to incorporate provisions from Office of Advocacy and OMB guidance where feasible. FDIC noted that GAO limited its review to analysis specifically included in the RFA sections of a rule and did not consider analysis published elsewhere in the preamble, as permitted by RFA. FDIC stated that it continues to look for ways to make its regulatory analysis more transparent.

However, while RFA allows agencies to perform regulatory flexibility analyses as part of other required analysis if such other analysis satisfies RFA requirements, RFA also calls for initial and final regulatory flexibility analyses to contain or describe the required components. Including these components elsewhere in a rule’s preamble without referencing or describing them in the RFA section does not help promote transparency for the public or small entities the rule might affect. As the Office of Advocacy’s guidance notes, agencies can coordinate preparation of regulatory flexibility analyses with any other analyses accompanying a rule.
But in doing so, agencies should ensure that such analyses describe explicitly how RFA requirements were satisfied. Otherwise, it may be unclear to small entities and others if relevant analysis appears elsewhere in a rule’s preamble, which could limit transparency and the ability of small entities to review and respond to relevant analyses. Regarding documentation supporting regulatory flexibility analyses and certification determinations, FDIC noted that it will ensure staff considers our recommendation.

Regarding our recommendation to coordinate with the Office of Advocacy and take steps to align the EGRPRA review process with section 610 requirements, FDIC stated that it will consider the recommendation. FDIC noted that before this year, the last section 610 review for FDIC was part of the 2007 EGRPRA review process, and notices of that review were provided at that time. Since then, FDIC said that it issued one rule in 2014 that requires a section 610 review, which must be completed by 2024.

In written comments, SEC’s chairman stated that he asked staff to identify additional ways to improve the quality of SEC’s rulemaking analysis and procedures. SEC noted that as an independent regulatory agency, it is not subject to the specific requirements for regulatory analysis in Executive Orders 12866 and 13563 and OMB Circular A-4, but that it will continue to strive to incorporate the principles and best practices in those documents into internal practices, where appropriate. SEC also noted that as part of its rulemaking, it engages in economic analyses of the likely costs and benefits of proposed and final rules along with other anticipated effects. SEC further explained that as permitted by RFA, relevant RFA analyses in SEC rulemaking releases often are found across several sections of the releases, and that it would therefore consider potential improvements to better communicate to the public about other analyses relevant to the RFA analyses.
As we previously stated, although RFA allows agencies to perform regulatory flexibility analyses as part of other required analysis, it also requires the initial and final analyses to include or describe the required components. Including these components in different parts of a rule release without explicitly referencing or describing them in the RFA section may limit transparency and the ability of small entities to review and respond to relevant analyses.

We are sending copies of this report to the appropriate congressional committees and members and financial regulators. This report will also be available at no charge on our website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XVIII.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to (1) analyze the trends in financial regulators’ application of Regulatory Flexibility Act (RFA) requirements in their recent rulemakings; (2) examine the extent to which financial regulators performed analyses for rules they certified would not have a significant economic impact on a substantial number of small entities; (3) examine the extent to which financial regulators performed regulatory flexibility analyses and the analyses’ effects on their rulemakings; (4) examine the extent to which financial regulators established policies, procedures, and criteria for complying with RFA requirements; and (5) examine the extent to which financial regulators performed retrospective reviews required by RFA.
For the purposes of this report, financial regulators are the Consumer Financial Protection Bureau (CFPB), the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), Office of the Comptroller of the Currency (OCC), Commodity Futures Trading Commission (CFTC), and the Securities and Exchange Commission (SEC).

To analyze the trends in financial regulators’ application of RFA requirements in their recent rulemakings, we reviewed all final rules published in the Federal Register from January 2010 through December 2016. Using the document search on the official Federal Register website, we downloaded all actions published in the Rules and Regulations section of the Federal Register for the financial regulators during our time period. The downloaded file had 744 actions and included a website link to each notice on the Government Printing Office’s website. We then reviewed each notice to remove actions that were not final rules, such as corrections, orders, and statements of policies. We also removed obvious duplicate rules, using the rule’s Regulation Identifier Number that we recorded from the notice or the title for rules without such an identification number. We considered rules to be duplicates if they were (1) a final rule confirming an interim rule or (2) an extension of the compliance date that did not make changes to the Code of Federal Regulations. We removed 181 actions that were not final rules and 43 duplicates, leaving 520 final rules promulgated by the financial regulators from 2010 through 2016.
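The screening steps above (keep only final rules, then drop duplicates keyed on the Regulation Identifier Number or, when no such number exists, the title) can be expressed as a simple filter. The sketch below is only an illustration of that logic, not GAO's actual tooling; the field names ("type", "rin", "title") and the sample records are hypothetical.

```python
# Illustrative sketch of the rule-screening logic described in the
# methodology; field names and sample data are hypothetical.

def filter_final_rules(actions):
    """Keep only final rules, then drop duplicates keyed on the
    Regulation Identifier Number (RIN), falling back to the title
    for rules without a RIN."""
    finals = [a for a in actions if a.get("type") == "final rule"]
    seen, unique = set(), []
    for a in finals:
        key = a.get("rin") or a["title"]
        if key in seen:
            continue  # e.g., a final rule confirming an interim rule
        seen.add(key)
        unique.append(a)
    return unique

actions = [
    {"type": "final rule", "rin": "3064-AB12", "title": "Rule A"},
    {"type": "correction", "rin": None, "title": "Correction to Rule A"},
    {"type": "final rule", "rin": "3064-AB12", "title": "Rule A (confirming interim)"},
    {"type": "final rule", "rin": None, "title": "Rule B"},
]
print(len(filter_final_rules(actions)))  # prints 2
```

In this toy example, one correction and one duplicate final rule are removed, mirroring (on a much smaller scale) the reduction from 744 actions to 520 final rules described above.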
We then analyzed the Federal Register notices for these final rules, using a spreadsheet-based data collection instrument, to quantify how many rules (1) did not include a proposed rule, (2) included an initial regulatory flexibility analysis, (3) included a final regulatory flexibility analysis, (4) certified that RFA analyses were not required, and (5) had other characteristics, such as those rules that performed a final regulatory flexibility analysis but also certified that it was not required. In cases in which the RFA analysis performed in the proposed rule was not clear or present in the final rule, we used the Regulation Identifier Number or citations listed in the final rule to locate the proposed rule to make the determination.

To examine the extent to which financial regulators performed analyses for rules they certified would not have a significant economic impact on a substantial number of small entities, we used the results from the trend review to select all final rules published in the Federal Register from January 2015 through December 2016 for which an agency published a notice of proposed rulemaking and certified in the final rule that the rule would not have such an economic impact. We identified a total of 66 final rules that included certifications. More specifically, CFPB had 11 rules that included certifications, CFTC had 15, FDIC had 18, the Federal Reserve had 1, OCC had 9, and SEC had 12. For these rules, we collected and reviewed internal workpapers from the financial regulators on their decisions to certify that regulatory flexibility analyses were not required because the rule would not have a significant economic impact on a substantial number of small entities (certifications).
We then assessed the regulators’ certifications in Federal Register publications to determine the extent to which they reflected RFA requirements, guidance from the Small Business Administration’s Office of Advocacy on complying with RFA, and other best practices for rulemaking, specifically Office of Management and Budget (OMB) guidance on regulatory analysis and Executive Order 13563. Our analysis did not include an evaluation of other aspects of agency rulemaking, including regulatory analyses for purposes other than RFA, such as analyses for the Paperwork Reduction Act and other economic analyses in the preamble. We based our evaluation on the RFA sections of each Federal Register notice for proposed and final rules and did not review other rule sections unless the RFA section explicitly referenced them. We also reviewed the workpapers and notices of joint rules for coordination on the certification analysis or decisions between regulators.

To examine the financial regulators’ initial and final regulatory flexibility analyses and the analyses’ effects on their rulemakings, we used the results from the trend review to select all final rules published in the Federal Register from January 2015 through December 2016 for which the agency performed an initial regulatory flexibility analysis in the proposed rule and a final regulatory flexibility analysis in the final rule. For any regulator that had fewer than three rules meeting these criteria, we selected all rules published in the prior year for which the agency performed an initial and final regulatory flexibility analysis until we reached three rules or a publication date of January 2013. For rules issued jointly by multiple financial regulators in our scope, we included the rule for each regulator that prepared an initial and final regulatory flexibility analysis. We included such rules even if they would not otherwise have been selected using the outlined criteria.
This resulted in the inclusion of one additional rule for the Federal Reserve (a 2013 rule issued jointly with OCC). We selected a total of 39 final rules for which the agency performed an initial and final regulatory flexibility analysis. More specifically, we selected 7 CFPB rules, 1 CFTC rule, 4 FDIC rules, 17 Federal Reserve rules, 1 OCC rule, and 9 SEC rules. For these rules, we obtained and reviewed internal workpapers from the financial regulators related to the initial and final regulatory analyses. We assessed the regulators’ regulatory flexibility analyses contained in the RFA summary in the notices of proposed and final rules published in the Federal Register to determine the extent to which they reflected RFA requirements, the Office of Advocacy’s guidance on complying with RFA, and OMB guidance on regulatory analysis. Our analysis did not include an evaluation of other aspects of agency rulemaking, including regulatory analyses for purposes other than RFA. We based our evaluation on the RFA sections of each rule and did not review other rule sections unless the RFA section explicitly referenced them. We also analyzed the workpapers, notices, and interviews to identify the extent to which regulators revised draft and proposed rules as a result of regulatory flexibility analyses, the source of the changes, and the types and characteristics of changes that regulators made to draft and proposed rules as a result of regulatory flexibility analyses. We also reviewed the workpapers and notices of joint rules for coordination on the analyses.

To examine financial regulators’ policies, procedures, and criteria for complying with RFA requirements, we obtained and reviewed internal agency policies, procedures, and guidance for conducting initial and final regulatory flexibility analyses or certifying that such analyses were not required.
We then assessed the documents received to determine the extent to which they reflected RFA requirements and Office of Advocacy’s guidance on complying with RFA. We also assessed the extent to which the documents included comprehensive policies and procedures to assist staff in complying with RFA in accordance with best practices outlined in Executive Order 13272 and federal internal control standards.

To examine the extent to which financial regulators performed retrospective reviews required by RFA, we searched the Federal Register for notices of upcoming section 610 reviews as well as results of section 610 reviews. We also obtained and reviewed documentation from the financial regulators of section 610 reviews performed from calendar year 2006 through 2016. We assessed the section 610 reviews we received against RFA requirements and other best practices for rulemaking, specifically OMB guidance on regulatory analysis and Executive Orders 13563 and 13610. For agencies that conducted other retrospective reviews in lieu of section 610 reviews, we compared the other retrospective review processes to RFA requirements for section 610 reviews to determine the extent to which they aligned. We also interviewed staff from each of the financial regulators to understand the process and analysis supporting their certification decisions, regulatory flexibility analyses, and retrospective reviews.

We conducted this performance audit from January 2017 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Extent of Coordination in Financial Regulators’ Certifications and Regulatory Flexibility Analyses

In the seven joint rules we reviewed with a certification, financial regulators conducted their own certification analyses independently of the other agencies responsible for the rule. The Regulatory Flexibility Act (RFA) allows agencies to coordinate on their RFA analyses but does not require it. The Small Business Administration’s Office of Advocacy does not make any recommendation on coordination in its RFA guide. Because agencies regulate different small entities that could be affected differently by a rule, coordination would not necessarily result in efficiencies or other benefits.

In joint rules, the regulators (except for the Board of Governors of the Federal Reserve System (Federal Reserve), which generally treated regulatory flexibility analyses differently) reached the same conclusion to certify, although their analyses sometimes differed. For example, in one joint rule, the Office of the Comptroller of the Currency (OCC) and the Federal Deposit Insurance Corporation (FDIC) concluded that the rule mainly imposes requirements on states and therefore affected no small entities. The Consumer Financial Protection Bureau agreed that the rule pertained mainly to states, but performed an analysis to assess the indirect impact on small entities, concluding that even indirectly the rule would not have a significant economic impact on a substantial number of small entities. The Federal Reserve found that some entities would be federally regulated but that their number, while uncertain, would not be substantial (fewer than five). In another joint rule, FDIC concluded that the rule would not have a significant economic impact on a substantial number of small entities because banks with less than $1 billion in assets were exempted.
The Small Business Administration defines a small bank as one with assets of $550 million or less; therefore, no small entities would be affected. However, OCC assumed that every bank subject to the rule would be required to comply regardless of the exemption and performed its analysis with that assumption. Under this premise, OCC found that a substantial number of small entities would be affected by the rule but that the economic impact would not be significant.

Of the seven joint rules that we reviewed with initial and final regulatory flexibility analyses, the analyses for two rules indicated that regulators collaborated in preparing the analysis. For one rule, the Federal Reserve, FDIC, and OCC published a joint initial analysis but FDIC and OCC made a certification determination in the final rule. For the other rule, the Federal Reserve and OCC prepared separate initial analyses but published a joint final analysis that included separate sections evaluating the potential economic impact of the final rule. The remaining five joint rules included separate regulatory flexibility analyses for each regulator and all but the Federal Reserve reached a certification determination. None of the rules we reviewed with initial and final flexibility analyses that were issued by individual regulators indicated that the regulator had coordinated with other agencies.

Appendix III: Commodity Futures Trading Commission Entities That Are Not Small Entities for Regulatory Flexibility Act Purposes

The following table details the entities regulated by the Commodity Futures Trading Commission (CFTC) that the agency determined were not small entities for the purposes of the Regulatory Flexibility Act (RFA). RFA allows agencies to establish alternative definitions of small entities when appropriate by publishing the definition in the Federal Register and, in the case of small businesses, in consultation with the Small Business Administration’s Office of Advocacy.
We reviewed CFTC’s small-entity definitions to assess the extent to which they met these requirements. We reviewed the Federal Register notices for the definition of those entities included in final rules in calendar years 2015 and 2016 where the agency certified that the rule would not have a significant economic impact on a substantial number of small entities.

Appendix IV: Securities and Exchange Commission’s Small Entity Definitions for Regulatory Flexibility Act Purposes

The following table compares the Securities and Exchange Commission’s definitions of small entities for the purposes of the Regulatory Flexibility Act (RFA) with the Small Business Administration’s size standards that RFA uses to define small entities.

Appendix V: Assessment of Board of Governors of the Federal Reserve System’s Regulatory Flexibility Analyses, 2015–2016

The Board of Governors of the Federal Reserve System (Federal Reserve) generally performed regulatory flexibility analyses for its rulemakings regardless of the rule’s potential impact on small entities. As shown in table 11, nearly all of the Federal Reserve’s initial and final regulatory flexibility analyses concluded that the rule would not have a significant economic impact on a substantial number of small entities, which generally is a basis for certification. Furthermore, the majority of the Federal Reserve’s analyses stated that the rules either did not apply to small entities or lacked compliance requirements. Table 12 summarizes our findings on the Federal Reserve’s initial and final regulatory flexibility analyses for the 17 rules we reviewed. Table 13 summarizes our findings for the six rules we reviewed for which the Federal Reserve’s regulatory flexibility analysis indicated the rule might impose compliance requirements on small entities.
Appendix VI: Assessment of Other Financial Regulators’ Regulatory Flexibility Analyses, 2013–2016

Appendix VII: Assessment of Federal Deposit Insurance Corporation’s Regulatory Flexibility Analyses, 2014–2016

Appendix VIII: Assessment of Consumer Financial Protection Bureau’s Regulatory Flexibility Analyses, 2013–2016

Appendix IX: Assessment of Commodity Futures Trading Commission’s Regulatory Flexibility Analyses, 2013–2016

Appendix X: Assessment of Office of the Comptroller of the Currency’s Regulatory Flexibility Analyses, 2013–2016

Appendix XI: Assessment of Securities and Exchange Commission’s Regulatory Flexibility Analyses, 2015–2016

Appendix XII: Outcomes of Financial Regulators’ Regulatory Flexibility Analyses on Final Rules, 2013–2016

Appendix XIII: Comments from the Consumer Financial Protection Bureau

Appendix XIV: Comments from the Commodity Futures Trading Commission

Appendix XV: Comments from the Board of Governors of the Federal Reserve System

Appendix XVI: Comments from the Federal Deposit Insurance Corporation

Appendix XVII: Comments from the Securities and Exchange Commission

Appendix XVIII: GAO Contact and Staff Acknowledgments

GAO Contact

Lawrance L. Evans, Jr., (202) 512-8678, [email protected].

Staff Acknowledgments

In addition to the contact named above, Stefanie Jonkman (Assistant Director), Kevin Averyt (Analyst in Charge), Bethany Benitez, Katherine Carter, Andrew Emmons, Marc Molino, Lauren Mosteller, and Barbara Roesmann made key contributions to this report. Other assistance was provided by Farrah Graham, Courtney LaFountain, and Tim Bober.
Why GAO Did This Study

Since the 2007–2009 financial crisis, federal financial regulators have issued hundreds of rules to implement reforms intended to strengthen the financial services industry. Financial regulators must comply with rulemaking requirements such as RFA when drafting and implementing regulations. Congress included a provision in statute for GAO to study these financial services regulations annually. This annual report examines the extent to which and how financial regulators performed required RFA analyses and established policies and procedures for complying with RFA requirements, among other objectives.

GAO reviewed the RFA section of financial regulators' Federal Register notices of rulemaking, related internal workpapers, and policies and procedures for conducting RFA analyses. GAO also determined the extent to which regulators' analyses reflected RFA requirements, guidance issued by the Office of Advocacy, and OMB guidance on regulatory analysis. GAO's review covered certifications in 66 final rules and regulatory flexibility analyses in 39 proposed and final rules.

What GAO Found

To comply with the Regulatory Flexibility Act (RFA), agencies generally must assess the rule's potential impact on small entities and consider alternatives that may minimize any significant economic impact of the rule (regulatory flexibility analyses). Alternatively, agencies may certify that a rule would not have a significant economic impact on a substantial number of small entities. GAO found several weaknesses with the analyses of six financial regulators (Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, Securities and Exchange Commission, Commodity Futures Trading Commission, and Consumer Financial Protection Bureau) that could undermine the goal of RFA and limit transparency and public accountability, as shown in the following examples.

Certifications.
In certifications for rules that regulators determined may affect small entities, regulators conducted analyses to support their conclusions. GAO found many analyses across all regulators lacked key information the Small Business Administration's Office of Advocacy and the Office of Management and Budget (OMB) recommend. Missing information included discussions of data sources or methodologies, consideration of broader economic impacts of the rulemaking (such as cumulative economic impacts of regulations), and definitions of the criteria regulators used for “substantial number” and “significant economic impact.”

Regulatory flexibility analyses.

In many of the initial and final regulatory flexibility analyses that GAO reviewed, financial regulators' evaluation of key components required by RFA—potential economic effects and alternative regulatory approaches—was limited. Most regulators (five of six) also did not disclose data sources or methodologies used for their analyses, as OMB recommends. For most rules GAO reviewed, regulators (five of six) were unable to provide documentation supporting their regulatory flexibility analyses, as OMB recommends, including analyses supporting certification decisions. However, the extent of documentation varied by regulator.

Federal internal control standards state the importance for agency management to establish policies and procedures to achieve objectives. All but one of the financial regulators have guidelines that restate RFA requirements for certification and for preparing regulatory flexibility analyses and provide some information on how to approach these analyses. However, these regulators generally have not developed specific policies and procedures to assist staff in complying with RFA, which may contribute to the weaknesses GAO identified in the analyses.
For example, regulators' guidance generally did not include procedures for evaluating a rule's potential economic impact; identifying and assessing regulatory alternatives that could minimize impact on small entities; disclosing methodology and data sources; and creating and maintaining documentation that supports findings. By not developing and implementing comprehensive policies and procedures for RFA analyses, regulators' ability to consistently and effectively meet RFA objectives may be limited. What GAO Recommends GAO is making a total of 10 recommendations among the six financial regulators reviewed, including that regulators develop and implement specific policies and procedures for consistently complying with RFA requirements and related guidance for conducting RFA analyses. Five agencies generally agreed with the recommendations and one did not provide written comments.
Background HH-60G Pave Hawk Inventory According to Air Force officials, the Air Force has 82 HH-60G helicopters designated to meet its personnel recovery mission requirements. The remaining 14 HH-60Gs are designated for training and for development and testing. Figure 1 shows the Air Force’s inventory of HH-60G Pave Hawk helicopters as of May 2018. Command Structure and Locations The Air Combat Command is the lead command for personnel recovery helicopters and as such has responsibility for all requirements associated with the helicopters, and for program funding. Formal training of helicopter aircrews takes place at Kirtland and Nellis Air Force Bases. The formal training unit at Kirtland Air Force Base is the only integrated unit with both active and reserve component forces, but the unit’s helicopters are assigned to the active component. All other HH-60G Pave Hawk units consist solely of active or solely of reserve component forces. Figure 2 shows the locations and components of the HH-60G rescue squadrons. It also shows the numbers of helicopters at each location. HH-60G Helicopter Pilot Training It takes several years to fully train a helicopter pilot. Pilots spend about a year and a half in their general introductory and specialized helicopter training. For Air Force HH-60 pilots, this initial qualification training occurs at Kirtland Air Force Base. Following that, the pilots continue their training at their assigned operational squadrons. According to weapons school officials, a few experienced HH-60 pilots are selected to attend the HH-60 weapons school at Nellis Air Force Base, where the pilots assist in the development of tactics, techniques, and procedures for the HH-60 community. Figure 3 shows a typical training timeline for HH-60G pilots. 
Air Force’s HH-60G Helicopters Have Experienced Declines in Condition and Increases in Maintenance Challenges, Due in Part to Extensions beyond the HH-60G’s Designed Service Life The material condition of the Air Force’s HH-60G fleet has declined and maintenance challenges have increased, in part due to extensions beyond the initially designed service life of the helicopters. In November 2017, the Air Force’s HH-60Gs were about 5 percent below their desired “mission capable” rate of 75 percent, which refers to the material condition of a squadron’s possessed aircraft and their abilities to conduct their designed missions. Mission capable rates have shown some year-to-year fluctuations, without any clear trends. However, for each of the past 5 years, the helicopters’ mission capable rates have been below the Air Force’s goal, and for fiscal year 2017, 68 percent of the 96-helicopter fleet were mission capable. As the helicopters have aged, the amount of time spent conducting maintenance on them has increased. For example, according to Air Force officials, in fiscal year 2013 the fleet averaged about 21 maintenance manhours for every HH-60G flight hour. However, by fiscal year 2017, the maintenance time spent had increased to an average of more than 25 maintenance manhours for every flight hour. According to officials, the increased time conducting maintenance is a result of an aging helicopter that requires more intensive maintenance. Further, according to officials, in 2007 the average amount of time required to conduct more extensive depot-level maintenance was 233 days, but by fiscal year 2017 it was 332 days, more than a 40 percent increase. Air Force maintenance data for fiscal years 2013-2017 show that airframes, turboshaft engines, and flight controls (see fig. 4) were the HH-60G elements that failed most often. 
According to Air Force officials, these structural and major component failures can require time-consuming maintenance that negatively affects availability and mission capable rates. According to Air Force flight-hour data, the average flight hours across the HH-60G fleet have increased by nearly 20 percent from fiscal year 2013 through May 2018. Air Force officials stated that the HH-60G was initially designed to have a service life of approximately 6,000 flight hours. However, in May 2018, the fleet-wide average was approximately 7,100 flight hours, or about 18 percent more than their initial expected service life. Table 1 shows that, as of May 2018, HH-60G training aircraft averaged about 10,500 flight hours, while the primary mission and backup aircraft averaged about 6,600 flight hours. The Air Force’s two developmental and testing aircraft had an average of 5,500 flight hours. According to Air Force officials, this is because developmental and testing aircraft are flown to test specific aircraft elements and not on regular missions. As flight hours increase, more maintenance is required and maintenance challenges increase, according to Air Force officials. Air Force Fielding Schedule Delivers Combat Rescue Helicopters First to High Flight-Hour Squadrons According to Air Force officials, the Combat Rescue Helicopter fielding schedule, which was included in the contract for the new helicopters, was designed to ensure that helicopters with the highest flying hours are generally replaced first. The officials told us that this is why the active component units, which have higher flying-hour averages, would begin receiving their new Combat Rescue Helicopters in fiscal year 2020. Based on the current Combat Rescue Helicopter fielding schedule, the Air Force Reserve is scheduled to receive its new helicopters beginning in fiscal year 2026. 
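The percentage figures above can be verified with a short calculation (a sketch only, not part of the original report; the day and flight-hour counts are the ones cited by Air Force officials):

```python
# Depot-level maintenance duration: 233 days in 2007 vs. 332 days in fiscal year 2017
days_2007, days_2017 = 233, 332
depot_increase = (days_2017 - days_2007) / days_2007 * 100
print(f"Depot maintenance increase: {depot_increase:.1f} percent")  # ~42.5, i.e., "more than a 40 percent increase"

# Fleet-wide average flight hours (May 2018) vs. initial designed service life
design_life_hours, average_hours = 6_000, 7_100
overage = (average_hours - design_life_hours) / design_life_hours * 100
print(f"Hours beyond designed service life: {overage:.1f} percent")  # ~18.3, i.e., "about 18 percent"
```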
The Air National Guard is scheduled to receive refurbished Operational Loss Replacement helicopters in fiscal year 2019 and the new Combat Rescue Helicopters beginning in fiscal year 2027. The last Combat Rescue Helicopters are scheduled to be fielded to all three components in fiscal year 2029. Figure 5 shows the timeline for the transition to the new Combat Rescue Helicopters. On average, the active component helicopters had about 2,000 more flight hours per helicopter than the reserve component helicopters as of May 2018, as shown in figure 6. Specifically, the active component helicopters had on average 7,700 flight hours, while the reserve component helicopters averaged 5,800 flight hours. The active component helicopters in figure 6 include the Kirtland training helicopters, which averaged about 10,600 flight hours per helicopter. According to Air Force officials, due in part to the high number of flight hours per aircraft, Kirtland is one of the first squadrons scheduled to receive the new Combat Rescue Helicopters. Specifically, Kirtland is scheduled to begin receiving its new helicopters in fiscal year 2020. Among the reserve component, the Air National Guard helicopters have an average of about 6,200 flight hours, while the Air Force Reserve helicopters have an average of about 5,500 flight hours per aircraft. However, the Combat Rescue Helicopter fielding schedule shows that the Air National Guard squadrons are last to receive the new Combat Rescue Helicopters. According to Air Force officials, to address the later fielding of the new Combat Rescue Helicopters to the Air National Guard, beginning in fiscal year 2019 the Air Force is replacing all of the Air National Guard’s helicopters with refurbished Army helicopters. These helicopters will be upgraded to the Air Force’s HH-60G configuration and will each have 3,000 or fewer flight hours. These refurbished helicopters are commonly referred to as the Operational Loss Replacement helicopters. 
According to Air Force officials, the Operational Loss Replacement helicopters are expected to increase squadron helicopter reliability and reduce unscheduled maintenance until the Air National Guard squadrons receive their new Combat Rescue Helicopters. The Air Force Has Identified Potential Training Challenges, but Would Likely Incur Costs If It Adjusted the Fielding Schedule for Its Combat Rescue Helicopters Due to the Air Force fielding schedule for the Combat Rescue Helicopters, the Air Force may face a challenge in supporting formal training for its reserve component squadrons during fiscal years 2025 through 2028. The rescue squadrons at Kirtland and Nellis Air Force Bases conduct all formal HH-60G training and, by fiscal year 2025, are scheduled to transition to providing formal training for the new Combat Rescue Helicopters. Specifically, these formal training units are scheduled to completely transition to the Combat Rescue Helicopter and will have divested all of their legacy HH-60G aircraft, as shown in figure 7. However, other squadrons will continue to fly the HH-60G aircraft after fiscal year 2025. Specifically, seven rescue squadrons will fly the legacy HH-60Gs in fiscal year 2025, and some will continue flying the HH-60Gs until fiscal year 2028 and so will continue to need formal training to fly that helicopter throughout that period. According to the Combat Rescue Helicopter fielding schedule shown in figure 8, the reserve component squadrons will receive most of their Combat Rescue Helicopters from fiscal year 2026 through fiscal year 2028. The Air National Guard squadrons will not receive their primary mission Combat Rescue Helicopters until fiscal year 2028. This is 3 years after the formal training units at Kirtland and Nellis will have stopped training students on the legacy HH-60Gs. The Air Force Reserve and Air National Guard did not concur with the Combat Rescue Helicopter fielding schedule. 
Reserve Component officials said they did not concur, in part, because the Air Force did not coordinate the fielding schedule prior to the contract’s approval in 2014. However, according to Headquarters Air Force officials, the Combat Rescue Helicopter fielding schedule was coordinated with and approved by all components prior to the 2014 contract being approved. Further, Air Force officials stated they plan to maintain the fielding schedule because changing it would require the renegotiation of the contract and would likely result in increased costs and possibly a delay in delivery of the new helicopters. The Combat Rescue Helicopter contract was developed as a fixed-price contract. According to Air Force officials, as part of this fixed-price contract, specific terms such as base locations and order of delivery were predetermined. According to Air Force officials, while the Combat Rescue Helicopter contract does allow for some variation in the quantity of helicopters procured each year, there is no location and order variation permitted without the renegotiation of price. According to the Air Force, any changes outside the included variation of the number of aircraft to be purchased in a given year (i.e., changes in the order or location of the bases) would negate the firm-fixed prices in the year where the change occurred, and in all the remaining years of the contract. Specifically, if changes are made to the order or location of the bases, potential contract line items that could increase include base level spares, readiness spares packages, support equipment, interim supply support, and field support representatives for both aircraft and training systems. According to Air Force officials, fielding schedule changes could also put at risk the ability to provide timely funding for the military construction projects necessary to house new simulators at the rescue squadrons’ bases. 
These officials stated that the current Combat Rescue Helicopter fixed-price contract is ahead of schedule and within budget, as of June 2018. Air Force officials said they expect to have new helicopters by March 2020, 3 months ahead of schedule. They also said that if changes are made to the order of deliveries under the contract, the contract would have to be renegotiated, which would, in turn, likely slow the delivery of the new helicopters and increase contract costs. Air Force officials acknowledged that, based on the current fielding schedule, there is a potential training gap in fiscal years 2025 through 2028, when the formal training units will no longer have any HH-60Gs available to train the reserve component. As of June 2018, Air Force officials told us that the Air Force was considering a number of options to address future training issues, including the following: The Air Force would provide legacy HH-60G helicopters, for a limited time, to the Air National Guard squadron at Kirtland Air Force Base. This would allow the Air National Guard to continue providing initial and requalification training on the legacy HH-60G helicopters for several years after the active component portion of the formal training unit at Kirtland Air Force Base has divested its legacy HH-60G helicopters. The Air Force would require personnel who have completed training on the Combat Rescue Helicopter at Kirtland Air Force Base to then receive additional training for the legacy platform at their home stations if their squadrons are still flying the HH-60Gs. Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. DOD told us that they had no comments on the draft report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-3489 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: The National Commission on the Structure of the Air Force Report and Recommendation Implementation Following disagreements over the Air Force’s proposals to reduce aircraft and Air National Guard end strength, the National Defense Authorization Act for Fiscal Year 2013 established the National Commission on the Structure of the Air Force. The act required the commission to conduct a study to determine whether, and how, the Air Force structure should be modified to best fulfill mission requirements in a manner consistent with available resources. In January 2014, the commission issued its final report, which included 42 recommendations. The Air Force agreed with 41 of the commission’s 42 recommendations. The recommendations varied in size, scope, and duration, and they focused on a range of topics from personnel policies and systems to determining the appropriate balance between the active and reserve component. However, as we reported in 2016, many of the recommendations were interrelated, and the Air Force grouped the recommendations into various lines of effort and assigned senior officials responsibility for tracking the implementation of each line of effort. The “Total Force Continuum” line of effort contains half (21) of the commission’s 42 recommendations. Recommendation 11 is part of this line of effort, and it states: As the Air Force acquires new equipment, force integration plans should adhere to the principle of proportional and concurrent fielding across the components. 
This means that, in advance of full integration, new equipment will arrive at Air Reserve Component units simultaneously with its arrival at Active Component units in the proportional share of each component. As the Air Force Reserve and Active Component become fully integrated, the Air Force should ensure that the Air National Guard receives new technology concurrent with the integrated units. The Air Force should no longer recapitalize by cascading equipment from the Active Component to the Reserve Components. In accordance with Section 1055 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015, the Air Force provided the congressional defense committees with annual responses to the commission’s recommendations. In its initial response, the Air Force stated that it was embracing the commission’s intent and viewed the recommendations as a holistic approach to improving the service. With regard to recommendation 11, the Air Force stated that it agreed in principle with the recommendation and would make every attempt to concurrently and proportionally equip all components to be the most capable force within today’s constrained resources. In its 2017 response, the Air Force cited the Combat Rescue Helicopters as one of the examples of how it is implementing recommendation 11. Specifically, the Air Force reported that its future fielding of the Combat Rescue Helicopter shows the Air Force’s commitment to concurrent and proportional fielding of equipment among its components. Headquarters Air Force officials elaborated on this response when we requested clarification, stating that the Air Force was replacing all its personnel recovery helicopters—for both its active and reserve component units—under a single contract and that it would not cascade any of its active component helicopters to its reserve component units. 
As of August 2017, the Air Force stated it had completed its review of recommendation 11 and it updated its Air Force Policy Directive 10-3, Operational Utilization of the Air Reserve Component Forces in November 2017, to better reflect the intent of the recommendation. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Michael Ferren, Assistant Director; Vincent Buquicchio; Mae Jones; Leigh Ann Sheffield; Mike Silver; and Nicole Volchko made key contributions to this report.
Why GAO Did This Study Since the 1980s, the Air Force has used its HH-60G Pave Hawk helicopters to conduct life-saving missions, including for personnel recovery and medical evacuations. The aging HH-60G inventory has shrunk over the years as a result of mishaps. As the inventory was declining, the Air Force began efforts to replace its fleet with the new Combat Rescue Helicopter. The National Defense Authorization Act for fiscal year 2018 includes a provision for GAO to review HH-60G replacement programs. This report examines: (1) the maintenance condition and service life of the Air Force's HH-60G Pave Hawk helicopters; (2) the Air Force's schedule for fielding the new Combat Rescue Helicopter in the active and reserve components; and (3) any training challenges the Air Force has identified related to this schedule. GAO analyzed flight-hour and availability data, as well as the contracts and fielding schedules for the Air Force's new and refurbished personnel recovery helicopters. GAO also analyzed documentation and interviewed officials from the Air Force Headquarters, the Air Force major commands, including the Air National Guard and Air Force Reserve, and training and test and evaluation units to discuss challenges the Air Force expects to face as it fields its new helicopters. What GAO Found The material condition of the Air Force's aging HH-60G fleet has declined and maintenance challenges have increased, in part due to extensions beyond the designed service life of the helicopters. About 68 percent of the 96-helicopter fleet were mission-capable as of fiscal year 2017, below the Air Force's desired mission-capable rate of 75 percent. The fleet is experiencing maintenance challenges. For example, the helicopters undergoing depot-level maintenance spent an average of 332 days undergoing such maintenance in fiscal year 2017 compared with 233 days in fiscal year 2007, more than a 40-percent increase. 
Air Force officials attribute these challenges to the helicopters exceeding their initially planned service life. Currently, available helicopters across the fleet average about 7,100 flight hours, about 18 percent more than their initial expected service life of 6,000 hours. According to Air Force officials, the schedule for fielding the new Combat Rescue Helicopters generally prioritizes the replacement of helicopters with the highest number of flight hours; as a result, the active component is scheduled to begin receiving its new helicopters in fiscal year 2020, 6 years before the reserve component. In May 2018, the Air Force's active component HH-60Gs averaged about 2,000 more flight hours per helicopter than the reserve component. Under the fielding schedule, the Air National Guard squadrons are to receive new Combat Rescue Helicopters beginning in 2027, at the end of the fielding period. According to officials, in the meantime, to address aging helicopters in the Air National Guard, the Guard is scheduled to receive refurbished Army helicopters beginning in 2019. According to Air Force officials, these helicopters will have 3,000 or fewer flight hours and will be upgraded to the Air Force's HH-60G configuration. The Air Force officials explained that these helicopters are expected to increase reliability rates, reduce the need for unscheduled maintenance, and bridge the gap until the Air National Guard receives the new Combat Rescue Helicopters. Due to the Air Force fielding schedule for the Combat Rescue Helicopters, the Air Force may face a challenge in supporting formal training for reserve component squadrons in fiscal years 2025 through 2028. The training squadrons at Kirtland and Nellis Air Force Bases conduct all formal HH-60G training for both the active and reserve components. By 2025, these training squadrons are scheduled to be completely transitioned to the new Combat Rescue Helicopters. 
Given the fielding schedule, the training squadrons will not have any legacy HH-60Gs for formal training for the reserve component. However, some squadrons in the reserve component are scheduled to continue flying HH-60Gs until 2028 and will still need formal training. Air Force reserve component officials did not concur with the new Combat Rescue Helicopter fielding schedule. However, Air Force officials said that they plan to maintain their fielding schedule because changing it would require renegotiation of the contract, likely increase costs, and possibly delay delivery of the new helicopters. Air Force officials acknowledged this potential training issue and told GAO that the Air Force was considering options to address it, including retaining some legacy HH-60Gs at a training squadron to provide training during any gap period. What GAO Recommends GAO is not making any recommendations in this report. GAO requested comments from DOD, but none were provided.
Background In fiscal year 2016, Medicaid covered an estimated 72.2 million low-income and medically needy individuals in the United States, and estimated Medicaid expenditures totaled over $575.9 billion. The federal government matches most state expenditures for Medicaid services on the basis of a statutory formula. States receive higher federal matching rates for certain services or populations, including an enhanced matching rate for Medicaid expenditures for individuals who became eligible for Medicaid under the Patient Protection and Affordable Care Act (PPACA). Of the $575.9 billion in estimated expenditures for 2016, the federal share totaled over $363.4 billion and the states’ share totaled $212.5 billion. The Centers for Medicare & Medicaid Services (CMS)—a federal agency within the Department of Health and Human Services (HHS)—and states jointly administer and fund the Medicaid program. States have flexibility within broad federal requirements to design and implement their Medicaid programs. States must submit a state Medicaid plan to CMS for review and approval. A state’s approved Medicaid plan outlines the services provided and the groups of individuals covered. While states must cover certain mandatory populations and benefits, they have the option of covering other categories of individuals and benefits. PPACA permitted states to expand coverage to a new population—non-elderly, non-pregnant adults who are not eligible for Medicare and whose income does not exceed 138 percent of the federal poverty level (FPL). This expansion population comprised 20 percent of total Medicaid enrollment in 2017. (See fig. 1.) As of December 2017, 31 states and the District of Columbia had expanded Medicaid eligibility to the new coverage population allowed under PPACA, and 19 states had not. Figure 2, an interactive map, illustrates states’ Medicaid expansion status. See appendix II for additional information on figure 2. 
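As a quick check of the expenditure figures above (a sketch only, not part of the original report; the dollar amounts are the rounded shares cited in this report):

```python
# Fiscal year 2016 estimated Medicaid expenditures, in billions of dollars
federal_share = 363.4  # federal share, as cited (rounded)
state_share = 212.5    # states' share, as cited
total = federal_share + state_share
print(f"Total estimated expenditures: ${total:.1f} billion")  # matches the reported $575.9 billion
```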
Survey Estimates Showed 5.6 Million Uninsured, Low-Income Adults Had Qualifying Incomes for Expanded Medicaid Coverage According to the NHIS estimates, 5.6 million low-income adults were uninsured in 2016. Of these 5.6 million, an estimated 1.9 million uninsured, low-income adults resided in expansion states, compared with an estimated 3.7 million in non-expansion states. Estimates of uninsured, low-income adults comprised less than 1 percent of the total population for all expansion states and 3 percent of the total population for all non-expansion states. NHIS estimates also showed that over half of uninsured, low-income adults were male, over half were employed, and over half had incomes less than 100 percent of the FPL. For some demographic characteristics, there were some statistically significant differences between uninsured, low-income adults in expansion states compared with these adults in non-expansion states. For example, expansion states had significantly larger percentages of uninsured, low-income males than non-expansion states. (See table 1.) See table 6 in appendix III for additional demographic characteristics of uninsured, low-income adults. Estimates from the 2016 NHIS showed some statistically significant differences in the health status of uninsured, low-income adults in expansion and non-expansion states. In particular, expansion states had a larger percentage of these adults who reported that their health was “good” and a smaller percentage who reported their health as “fair or poor” than those in non-expansion states. However, the percentages of uninsured, low-income adults with responses of “excellent or very good” in both expansion and non-expansion states were large—47 percent or larger, and the differences between the two groups of states were not statistically significant. (See fig. 3.) See table 7 in appendix III for additional information about the health status of uninsured, low-income adults. 
Survey Estimates Showed Low-Income Adults in Expansion States and Those Who Were Insured Were Less Likely to Report Any Unmet Medical Needs The 2016 NHIS estimates showed that smaller percentages of low-income adults in expansion states reported having any unmet medical needs compared with those in non-expansion states; and smaller percentages of those who were insured reported having any unmet medical needs compared with those who were uninsured, regardless of where they lived, for example: Low-income adults in expansion and non-expansion states. Access to Health Care: Measuring Any Unmet Medical Needs The National Center for Health Statistics, the federal agency that conducts the National Health Interview Survey (NHIS), developed a composite measure on any unmet medical needs, which was based on six survey questions on respondents’ ability to afford different types of needed health care services. These questions asked whether in the past 12 months respondents could not afford medical care at any time; delayed seeking medical care due to worries about costs; or could not afford needed prescription drugs, mental health or counseling, dental care, or eyeglasses. Low-income adults who were insured and uninsured. percent or less of the low-income adults who had Medicaid or private health insurance in expansion or non-expansion states reported having any unmet medical needs, compared with 50 percent or more of those who were uninsured in expansion or non-expansion states. Further, among the uninsured, 50 percent of low-income adults living in expansion states reported any unmet medical needs, compared with 63 percent of those in non-expansion states. (See fig. 4.) See tables 8 and 9 in appendix IV for estimates of the composite measure we reviewed on any unmet medical needs. 
Survey Estimates Showed Low-Income Adults in Expansion States and Those Who Were Insured Were Less Likely to Report Financial Barriers to Health Care The 2016 NHIS estimates showed that smaller percentages of low-income adults in expansion states reported financial barriers to needed health care compared with those in non-expansion states; and smaller percentages of those who were insured reported financial barriers to needed health care compared with those who were uninsured, regardless of where they lived, for example: Low-income adults in expansion and non-expansion states. Nine percent of low-income adults in expansion states reported that they could not afford needed medical care, compared with 20 percent of low-income adults in non-expansion states. Low-income adults who were insured and uninsured. Twelve percent or less of low-income adults who had Medicaid or private health insurance in expansion or non-expansion states reported financial barriers to needed medical care, compared with 27 percent or more of those who were uninsured in expansion or non-expansion states. In addition, among low-income adults who were uninsured, a smaller percentage of those who lived in expansion states reported financial barriers to two of the six needed health care services compared with those who lived in non-expansion states. (See fig. 5.) See tables 10 through 13 in appendix V for estimates of all survey questions we reviewed on financial barriers to health care. The 2016 NHIS also collected information on non-financial barriers to health care. Specifically, the survey asked whether respondents had delayed health care due to non-financial reasons, such as lacking transportation, being unable to get through on the phone, being unable to get a timely appointment, experiencing long wait times at the doctor’s office, or not being able to get to a clinic or doctor’s office when it was open. 
The 2016 NHIS showed that the same or similar percentages of low-income adults in expansion and non-expansion states reported delaying care due to a lack of transportation or other non-financial reasons. Further, generally similar or larger percentages of low-income adults with insurance reported delaying care due to non-financial reasons, compared with those who were uninsured. See tables 14 and 15 in appendix V for estimates of low-income adults in expansion and non-expansion states and by insurance status on non-financial barriers to health care. Survey Estimates Showed Low-Income Adults in Expansion States and Those Who Were Insured Were Generally More Likely to Report Having a Usual Place of Care and Receiving Selected Health Care Services The 2016 NHIS estimates showed that a larger percentage of low-income adults in expansion states reported having a usual place of care compared with those in non-expansion states; and larger percentages of those who were insured reported having a usual place of care compared with those who were uninsured, regardless of where they lived, for example: Low-income adults in expansion and non-expansion states. Eighty-two percent of the low-income adults in expansion states reported having a usual place of care when they were sick or needed advice about their health, compared with 68 percent of those in non-expansion states. Access to Health Care: Having a Usual Place of Care The 2016 National Health Interview Survey (NHIS) asked respondents about whether they had a place they usually go when sick or need advice about their health. Low-income adults who were insured and uninsured. Seventy-eight percent or more of those who had Medicaid or private health insurance in expansion or non-expansion states reported having a usual place of care, compared with 46 percent or less of those who were uninsured in expansion or non-expansion states. 
Among the uninsured, similar percentages of low-income adults in expansion and non-expansion states reported having a usual place of care. (See fig. 6.) See tables 16 through 19 in appendix VI for estimates of all survey questions we reviewed on having a usual place of care. Survey Estimates Showed Low-Income Adults in Expansion States and Those Who Were Insured Were Generally More Likely to Report Receiving Selected Services The 2016 estimates showed that larger percentages of low-income adults in expansion states reported receiving selected health care services, such as a flu vaccine, compared with those in non-expansion states; and larger percentages of those with insurance reported receiving selected health care services compared with those who were uninsured, regardless of where they lived, for example: Low-income adults in expansion and non-expansion states. Thirty-one percent of low-income adults in expansion states reported receiving flu vaccinations, compared with 24 percent of those in non-expansion states. (Text box: Receiving Selected Health Care Services. The 2016 NHIS asked respondents about selected services, including having their blood cholesterol checked, having their blood pressure checked by a doctor, nurse, or other health professional, and visiting a hospital emergency department.) percent or more of low-income adults who had Medicaid or private health insurance in expansion or non-expansion states reported receiving blood cholesterol checks, compared with 28 percent or less of low-income adults who were uninsured in expansion or non-expansion states. Among the uninsured, generally similar percentages of low-income adults in expansion and non-expansion states reported blood cholesterol checks, flu vaccines, and other selected services. (See fig. 7.) See tables 20 and 21 in appendix VI for estimates of all survey questions we reviewed on selected health care services.
The 2016 NHIS also asked respondents whether they visited or had spoken to a health care professional about their health, including: a general doctor, such as a general practitioner or family doctor; a nurse practitioner, physician's assistant, or midwife; and a doctor who specializes in a particular disease, with the exception of obstetricians, gynecologists, psychiatrists, and ophthalmologists. See tables 22 and 23 in appendix VI for estimates of low-income adults in expansion and non-expansion states and by insurance status on contacting health care professionals. Agency Comments and Our Evaluation We provided a draft of this report to HHS for comment. HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the appropriate congressional committee, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology To describe national survey estimates of (1) the number and demographic characteristics of uninsured, low-income adults in expansion and non-expansion states; (2) unmet medical needs for low-income adults in expansion and non-expansion states and by insurance status; (3) barriers to health care for low-income adults in expansion and non-expansion states and by insurance status; and (4) having a usual place of care and receiving selected health care services for low-income adults in expansion and non-expansion states and by insurance status, we used data from the 2016 National Health Interview Survey (NHIS). The 2016 NHIS data were the most recent available when we conducted our analyses. This appendix describes the data source, study population, analyses conducted, study limitations, and data reliability assessment. Data Source The NHIS collects demographic, health status, health insurance, health care access, and health care service use data for the civilian, noninstitutionalized U.S. population. It is an annual, nationally representative, cross-sectional household interview survey. NHIS interviews are conducted continuously throughout the year for the National Center for Health Statistics (NCHS), which is a federal agency within the Department of Health and Human Services that compiles statistical information to help guide health policy decisions. Interviews are conducted in respondents' homes, and interviewers may conduct follow-up interviews over the telephone to complete an interview. Information about some NHIS respondents, such as information about their health status, may be obtained through an interview with another family member on behalf of the respondent. NHIS data are organized into several data files. Estimates used for our study are based on data from the Family and Sample Adult Core components of the 2016 NHIS.
Sociodemographic, insurance, and select health care access and utilization variables were defined using data collected in the Family Core component of the survey, which includes data on every household member for the families participating in NHIS. Other measures of health care access and utilization examined in this study are based on data collected in the Sample Adult Core component. In this component, the respondent (i.e., the sample adult) is randomly selected from among all adults aged ≥18 years in the family. A proxy respondent might respond for the sample adult if, because of health reasons, the sample adult is physically or mentally unable to respond themselves. The 2016 imputed income files were used to define poverty thresholds, which are based on reported and imputed family income. The NHIS publicly released data files for 2016 include data for 40,220 households containing 97,169 persons, and the total household response rate was 67.9 percent. Study Population For this study we asked NCHS to provide estimates of low-income, non-elderly adults, which we defined as individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL). We also requested that estimates be provided separately for respondents based on whether they resided in an expansion or non-expansion state, and whether they were covered by private health insurance, Medicaid, or had no insurance. We gave NCHS specifications for the definition of low-income, non-elderly adults; the states that should be classified as expansion or non-expansion states in calendar year 2016; and the respondents who should be classified as having private health insurance, Medicaid, or no insurance. We asked NCHS to exclude respondents who were noncitizens, were covered by Medicare, only received health care services through military health care or through the Indian Health Service, or had Supplemental Social Security Income.
We also excluded adult females from the Sample Adult file who responded they were pregnant at the time of the interview. In addition, we asked NCHS to exclude individuals for whom information was missing—not recorded or not provided during the interview—on health insurance coverage (Medicaid, private health insurance, Indian Health Service, military health care, or no health insurance), receipt of Supplemental Social Security Income, and U.S. citizenship. We classified individuals in our study population as residing in an expansion or non-expansion state based on their state of residence when they were interviewed for the 2016 NHIS. We classified the 30 states and the District of Columbia that expanded their Medicaid eligibility before July 1, 2016, as expansion states. The remaining 20 states were classified as non-expansion states. Louisiana expanded Medicaid coverage on July 1, 2016; therefore, we classified it as a non-expansion state. We decided not to classify Louisiana as an expansion state because we allowed a 6-month period for the effects of expansion to appear. Therefore, for Louisiana we only included NHIS respondents interviewed from January through June 2016 when Louisiana was a non-expansion state. Similarly, for two expansion states—Alaska and Montana—we only included individuals who were interviewed March through December 2016 and July through December 2016, respectively, after the state expanded Medicaid to allow for a 6-month time period for the effect of expansion to take place. (See table 2.) Table 3 below illustrates the sample size and population estimates of low-income sample adults by expansion state, non-expansion state, and national total. We classified NHIS respondents as having private health insurance, Medicaid, or no insurance based on the health insurance classification approach used by NCHS for NHIS.
NCHS assigned NHIS respondents’ health insurance classification based on a hierarchy of mutually exclusive categories in the following order: private health insurance, Medicaid, other coverage, and uninsured. Low-income adults with more than one coverage type were assigned to the first appropriate category in the hierarchy. Respondents were classified as having private health insurance if they reported that they were covered by any comprehensive private health insurance plan (including health maintenance and preferred provider organizations). Private coverage excluded plans that pay for one type of service, such as accidents or dental care. Respondents were classified as having Medicaid if they reported they were covered by Medicaid or by a state-sponsored health plan with no premiums or it was not known whether a premium was charged. Respondents were classified as being uninsured if they did not report having any private health insurance, Medicare, Medicaid, Children’s Health Insurance Program, state-sponsored or other government-sponsored health plan, or military health plan. Respondents were also classified as being uninsured if they only had insurance coverage with a private plan that paid for one type of service, such as accidents or dental care. Analyses Conducted We gave NCHS officials specifications to calculate estimates from the 2016 NHIS for demographic characteristics, access to care, as well as composite measures of access to health care based on selected survey questions. Composite measures are NCHS-developed measures based on responses to NHIS questions covering related topics. The analysis included two composite measures: 1. 
any unmet medical needs, which is based on responses to six underlying survey questions that asked respondents about whether during the past 12 months they needed medical care but did not get it because they could not afford it; delayed seeking medical care because of worry about the cost; or did not get prescription medicines, mental health care or counseling, eyeglasses, or dental care due to cost; and 2. any non-financial barriers to health care, which is based on five underlying questions that asked respondents whether they delayed care in the past 12 months for any of the following reasons: could not get through on the telephone; could not get an appointment soon enough; waited too long to see the doctor after arriving at the doctor’s office; the clinic/doctor’s office was not open when respondent could get there; and did not have transportation. NCHS officials calculated our requested estimates of groups within our study population based on whether respondents resided in an expansion or non-expansion state and whether they had private health insurance, Medicaid, or were uninsured at the time of the interview. For each comparison—such as comparisons of access to health care for respondents in expansion versus non-expansion states—we asked NCHS to test for statistically significant differences. We identified a statistically significant difference when the p-value from a t-test of the difference in the estimated proportions between two study subgroups had a value of less than 0.05. To describe the number and demographic characteristics of uninsured, low-income adults, we compared estimates of selected demographic characteristics (race and ethnicity, gender, poverty status, and employment status) and reported health status for this group in expansion and non-expansion states. 
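The two methodological steps described above (building a yes/no composite measure from underlying survey questions, and testing whether two estimated proportions differ at the 0.05 level) can be sketched as follows. This is an illustration only: the function and variable names are assumptions, and NCHS's actual tests account for NHIS's complex survey design (weights and strata) rather than using the simple unweighted normal approximation shown here.

```python
# Illustrative sketch only; not the NCHS implementation.
from math import sqrt
from statistics import NormalDist

def any_unmet_need(responses):
    """Composite measure: True if any of the underlying yes/no questions
    (e.g., forgone care, delayed care over cost) was answered 'yes'."""
    return any(responses)

def significantly_different(p1, n1, p2, n2, alpha=0.05):
    """Two-sided test of whether two estimated proportions differ,
    using a pooled z statistic and the 0.05 threshold from the text."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z))) < alpha

print(any_unmet_need([False, False, True, False, False, False]))  # True
# e.g., 9 percent of 2,000 adults vs. 20 percent of 1,500 adults:
print(significantly_different(0.09, 2000, 0.20, 1500))  # True
```

With large samples, this pooled z statistic is the common large-sample form of the t-test of a difference in proportions that the text describes.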
These and other estimates of demographic characteristics and reported health status from the 2016 NHIS for uninsured, low-income adults by expansion states, non-expansion states, and all states are provided in tables 6 and 7 in appendix III. To describe unmet medical needs, barriers to health care, and having a usual place of care and receiving selected services for all low-income adults in expansion and non-expansion states and by insurance status, we asked NCHS to calculate estimates based on responses to selected NHIS questions and NCHS composite measures. We selected these survey questions and composite measures from the Family and Adult Access to Health Care and Utilization and Adult Health Behaviors sections of the 2016 NHIS. To summarize estimates of low-income adults in expansion and non-expansion states and by insurance status, responses to selected survey questions and composite measures were calculated as an estimated percentage of the relevant group’s total population for eight groups of low-income adults: (1) those in expansion states, (2) those in non-expansion states, (3) those who had Medicaid in expansion states, (4) those who had Medicaid in non-expansion states, (5) those who had private health insurance in expansion states, (6) those who had private health insurance in non-expansion states, (7) those who were uninsured in expansion states, and (8) those who were uninsured in non-expansion states. We asked NCHS to test for statistically significant differences for the estimates of access to care between selected groups of low-income adults. (See table 4.) The results of the tests for statistically significant differences for these comparison groups are in appendixes IV through VI. Study Limitations and Data Reliability Assessment Our study has some limitations. 
First, our study did not examine whether statistically significant differences in estimates of access to health care between respondents in expansion and non-expansion states were associated with the choice to expand Medicaid. Second, NHIS data are based on respondents' reports, which may be subject to potential biases, such as imperfect recall of participants' use of health services, and may be less accurate than administrative data or clinical data. Third, we could not report estimates of access to health care that did not meet NCHS's standards of reliability or precision. We assessed the reliability of NHIS data by reviewing NHIS data documentation; interviewing knowledgeable NCHS officials and academic researchers; and examining the data for logical errors, missing values, and values outside of expected ranges. We determined that the data were sufficiently reliable for the purposes of these analyses. Appendix II: Status of Medicaid Eligibility Expansion by States, as of 2017 Under the Patient Protection and Affordable Care Act (PPACA), states may opt to expand their Medicaid programs' eligibility to cover certain low-income adults beginning January 2014. As of December 2017, 31 states and the District of Columbia had expanded their Medicaid programs as permitted under PPACA and 19 states had not. Table 5 lists the states that expanded Medicaid eligibility and those that did not. It also includes state population and other Medicaid data, which are presented in the roll-over information in interactive figure 2. Appendix III: Estimates of Demographic Characteristics and Health Status in Expansion and Non-Expansion States This appendix provides additional 2016 National Health Interview Survey (NHIS) estimates we obtained from the National Center for Health Statistics (NCHS). Table 6 presents estimates of selected demographic characteristics for low-income adults who were uninsured at the time of the survey interview.
The table provides estimates for these adults based on whether they resided in states that expanded Medicaid eligibility as permitted under the Patient Protection and Affordable Care Act (PPACA) (referred to as expansion states) or states that did not (referred to as non-expansion states). We report statistically significant differences when comparing the responses of uninsured, low-income adults in expansion and non-expansion states. Table 7 shows estimates of the reported health status of uninsured, low-income adults based on whether they resided in an expansion or non-expansion state. The table provides the number and percent of these adults who reported that at the time of the interview their health status was excellent or very good; good; or fair or poor. The table also shows the extent to which these adults reported whether their health status was different at the time of the interview compared to the previous year. We report statistically significant differences when comparing the responses of uninsured, low-income adults in expansion and non-expansion states. Appendix IV: Estimates of Any Unmet Medical Needs in Expansion and Non-Expansion States and by Insurance Status This appendix provides estimates of any unmet medical needs for low-income adults—individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL)—from the 2016 National Health Interview Survey (NHIS), which were produced by the National Center for Health Statistics (NCHS). Estimates are based on a composite measure of any unmet medical needs. Table 8 shows estimates of all low-income adults in expansion and non-expansion states. We also report statistically significant differences between low-income adults in expansion and non-expansion states.
Table 9 shows estimates of six groups of low-income adults: (1) low-income adults who were uninsured in expansion states; (2) low-income adults who were uninsured in non-expansion states; (3) low-income adults who had Medicaid in expansion states; (4) low-income adults who had Medicaid in non-expansion states; (5) low-income adults who had private health insurance in expansion states; and (6) low-income adults who had private health insurance in non-expansion states. We also report any statistically significant differences when comparing the six groups of low-income adults, specifically: low-income adults who were uninsured in expansion states compared with each of the four groups of low-income adults who were insured—low-income adults who had Medicaid in expansion states, low-income adults who had Medicaid in non-expansion states, low-income adults who had private health insurance in expansion states, and low-income adults who had private insurance in non-expansion states; low-income adults who were uninsured in non-expansion states compared with each of the four groups of low-income adults who were insured; low-income adults who were uninsured in expansion states compared with low-income adults who were uninsured in non-expansion states; low-income adults who had Medicaid in expansion states compared with low-income adults who had Medicaid in non-expansion states; and low-income adults who had private health insurance in expansion states compared with low-income adults who had private health insurance in non-expansion states. Appendix V: Estimates of Barriers to Health Care in Expansion and Non-Expansion States and by Insurance Status This appendix provides estimates of barriers to health care for low-income adults—individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL)—from the 2016 National Health Interview Survey (NHIS), which we obtained from the National Center for Health Statistics (NCHS).
Estimates of financial barriers to needed medical, specialty, and other types of health care and prescription drugs are based on selected survey questions. Estimates of non-financial barriers to health care are based on responses to selected survey questions and a composite measure. Estimates are reported for: All low-income adults in expansion and non-expansion states. We also report statistically significant differences between low-income adults in expansion and non-expansion states. Six groups of low-income adults: (1) low-income adults who were uninsured in expansion states; (2) low-income adults who were uninsured in non-expansion states; (3) low-income adults who had Medicaid in expansion states; (4) low-income adults who had Medicaid in non-expansion states; (5) low-income adults who had private health insurance in expansion states; and (6) low-income adults who had private health insurance in non-expansion states. We also report any statistically significant differences when comparing the six groups of low-income adults, specifically: low-income adults who were uninsured in expansion states compared with each of the four groups of low-income adults who were insured—low-income adults who had Medicaid in expansion states, low-income adults who had Medicaid in non-expansion states, low-income adults who had private health insurance in expansion states, and low-income adults who had private insurance in non-expansion states; low-income adults who were uninsured in non-expansion states compared with each of the four groups of low-income adults who were insured; low-income adults who were uninsured in expansion states compared with low-income adults who were uninsured in non-expansion states; low-income adults who had Medicaid in expansion states compared with low-income adults who had Medicaid in non-expansion states; and low-income adults who had private health insurance in expansion states compared with low-income adults who had private health insurance in
non-expansion states. Financial barriers to medical, specialty, and other types of health care. Tables 10 and 11 present estimates and differences in estimates of responses to survey questions that asked whether respondents did not obtain different types of needed health care services in the past 12 months because they could not afford them. Financial barriers to prescription drugs. Tables 12 and 13 present estimates and differences in estimates of survey questions that asked respondents who had been prescribed medications whether they had taken actions during the past 12 months to save money on medications. Non-financial barriers to health care. Tables 14 and 15 present estimates and differences in estimates of the NCHS composite measure on any non-financial barriers to health care, which was based on responses to five survey questions on whether respondents delayed care in the past 12 months due to long wait times, a lack of transportation, and other non-financial reasons. Additionally, these tables present estimates and differences in estimates of responses to the composite measure's five underlying survey questions. Appendix VI: Estimates on Place of Care and Services in Expansion and Non-Expansion States and by Insurance Status This appendix provides estimates on having a usual place of care and receiving selected health care services for low-income adults—individuals ages 19 to 64, with family incomes that did not exceed 138 percent of the federal poverty level (FPL)—from the 2016 National Health Interview Survey (NHIS), which we obtained from the National Center for Health Statistics (NCHS). Estimates are based on responses to selected survey questions on having a usual place of care, receiving selected health care services, and contacting health care professionals. Estimates are reported for: All low-income adults in expansion and non-expansion states. We also report statistically significant differences between low-income adults in expansion and non-expansion states.
Six groups of low-income adults: (1) low-income adults who were uninsured in expansion states; (2) low-income adults who were uninsured in non-expansion states; (3) low-income adults who had Medicaid in expansion states; (4) low-income adults who had Medicaid in non-expansion states; (5) low-income adults who had private health insurance in expansion states; and (6) low-income adults who had private health insurance in non-expansion states. We also report any statistically significant differences when comparing the six groups of low-income adults, specifically: low-income adults who were uninsured in expansion states compared with each of the four groups of low-income adults who were insured—low-income adults who had Medicaid in expansion states, low-income adults who had Medicaid in non-expansion states, low-income adults who had private health insurance in expansion states, and low-income adults who had private insurance in non-expansion states; low-income adults who were uninsured in non-expansion states compared with each of the four groups of low-income adults who were insured; low-income adults who were uninsured in expansion states compared with low-income adults who were uninsured in non-expansion states; low-income adults who had Medicaid in expansion states compared with low-income adults who had Medicaid in non-expansion states; and low-income adults who had private health insurance in expansion states compared with low-income adults who had private health insurance in non-expansion states. Having a usual place of care. Tables 16 through 19 present estimates and differences in estimates of survey questions that asked respondents about the place of care they usually go to when sick or need advice about their health and the type of place that respondents most often went. Receiving selected health care services.
Tables 20 and 21 present estimates and differences in estimates of survey questions that asked respondents whether they had received a blood cholesterol check, flu vaccine, or other selected services. Contacting health care professionals. Tables 22 and 23 present estimates and differences in estimates of survey questions that asked respondents whether they had visited or spoken to a general doctor, specialist, or other health care professionals about their health in the past 12 months. Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Katherine M. Iritani (Director), Tim Bushfield (Assistant Director), Deitra H. Lee (Analyst-in-Charge), Kristin Ekelund, Laurie Pachter, Vikki Porter, Merrile Sing, and Emily Wilson made key contributions to this report.
Why GAO Did This Study Under PPACA, states could choose to expand Medicaid coverage to certain uninsured, low-income adults. As of December 2017, 31 states and the District of Columbia chose to expand Medicaid to cover these adults, and 19 states did not. GAO was asked to provide information about the demographic characteristics of and access to health care services for low-income adults—those with household incomes less than or equal to 138 percent of the federal poverty level—in expansion and non-expansion states. This report describes 2016 national survey estimates of (1) the number and demographic characteristics for low-income adults who were uninsured in expansion and non-expansion states, (2) unmet medical needs for low-income adults in expansion and non-expansion states and by insurance status, (3) barriers to health care for low-income adults in expansion and non-expansion states and by insurance status, and (4) having a usual place of care and receiving selected health care services for low-income adults in expansion and non-expansion states and by insurance status. GAO obtained 2016 NHIS estimates from the National Center for Health Statistics (NCHS), the federal agency within the Department of Health and Human Services that maintains these survey data. NHIS is a household interview survey designed to be a nationally representative sample of the civilian, non-institutionalized population residing in the United States. Estimates were calculated for demographic characteristics for uninsured, low-income adults. In addition, estimates were calculated for unmet medical needs, barriers to health care, and having a usual place of care and receiving selected health services for low-income adults in expansion and non-expansion states and by insurance status. The estimates were based on responses to selected survey questions. GAO selected these survey questions from the Family and Adult Access to Health Care and Utilization and Adult Health Behaviors sections of the 2016 NHIS.
GAO took steps to assess the reliability of the 2016 NHIS estimates, including interviewing NCHS officials and examining the data for logical errors. GAO determined that the data were sufficiently reliable for the purposes of its analyses. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate. What GAO Found According to the 2016 National Health Interview Survey (NHIS), an estimated 5.6 million uninsured, low-income adults—those ages 19 through 64—had incomes at or below the income threshold for expanded Medicaid eligibility as allowed under the Patient Protection and Affordable Care Act (PPACA). Estimates from this nationally representative survey showed that about 1.9 million of the 5.6 million uninsured, low-income adults lived in states that chose to expand Medicaid under PPACA, while the remaining 3.7 million lived in non-expansion states—those that did not choose to expand Medicaid. In 2016, over half of uninsured, low-income adults were male, over half were employed, and over half had incomes less than 100 percent of the federal poverty level in both expansion and non-expansion states. The 2016 NHIS estimates showed that low-income adults in expansion states were less likely to report having any unmet medical needs compared with those in non-expansion states, and low-income adults who were insured were less likely to report having unmet medical needs compared with those who were uninsured. Among the low-income adults who were uninsured, those in expansion states were less likely to report having any unmet medical needs compared with those in non-expansion states. 
The 2016 NHIS estimates also showed that low-income adults in expansion states were less likely to report financial barriers to needed medical care and other types of health care, such as specialty care, compared with those in non-expansion states, and low-income adults who were insured were less likely to report financial barriers to needed medical care compared with those who were uninsured. Among low-income adults who were uninsured, those in expansion states were less likely to report financial barriers to needed medical care compared with those in non-expansion states. Finally, the 2016 NHIS estimates showed that low-income adults in expansion states were more likely to report having a usual place of care to go when sick or needing advice about their health and receiving selected health care services compared with those in non-expansion states. The estimates also showed that low-income adults who were insured were generally more likely to report having a usual place of care and receiving selected health care services compared with those who were uninsured. Among the uninsured, relatively similar percentages of low-income adults in expansion and non-expansion states reported having a usual place of care. Similarly, estimates showed that relatively similar percentages of low-income adults who were uninsured in expansion and non-expansion states reported receiving selected health care services, such as receiving a flu vaccine or a blood pressure check.
Background This section includes information about seismic surveys, oil and gas activities in the four OCS regions, and the potential effects of seismic activities on the environment and marine mammals as well as related requirements. Seismic Surveys Seismic surveys use mechanically generated sound waves from an acoustic source such as an airgun to transmit energy into the subsurface. Some of this energy is reflected or refracted back to recording sensors, and data are transformed into representative images of the layers in the subsurface of the earth. Entities use seismic surveys for several purposes. For example, oil and gas companies use both onshore and offshore seismic surveys to collect data on geology that may indicate the presence of oil and gas. Other entities, such as research institutions, use seismic surveys for a variety of purposes, such as helping to detect groundwater, identifying archaeological resources and fault zones, and conducting other research. There are two main types of seismic surveys used on the OCS: (1) deep-penetration and (2) high-resolution seismic surveys. Deep-penetration seismic surveys are conducted by vessels towing an array of airguns that use a low frequency source and emit high-energy acoustic pulses into the seafloor over long durations. The acoustic pulses can penetrate several thousand meters into the subsurface and are then reflected and recorded by receivers to image deep geological features. Deep-penetration seismic surveys are often acquired prior to the drilling phase of oil and gas exploration. High-resolution seismic surveys typically use high-frequency acoustic signals to image the sea bottom and shallow parts right below the ocean bottom with a higher level of detail. Seismic surveys vary in technologies used, as well as in their size and scope, with towed gear in some cases spanning several miles (see fig. 1).
Activities in the Outer Continental Shelf (OCS) Regions

The OCS refers to the submerged lands outside the territorial jurisdiction of all 50 states that appertain to the United States and are under its jurisdiction and control. State submerged lands generally extend from the shore to 3 geographical miles offshore. Federal submerged lands, which are under the jurisdiction of the federal government, generally extend from 3 geographical miles to 200 nautical miles offshore. With certain exceptions, waters and submerged lands beyond 200 nautical miles offshore are considered international. The OCS is divided into four regions managed by BOEM—Alaska, Atlantic, Gulf of Mexico, and Pacific—each with its own history, concerns, and level of commercial activity, including oil and gas development and use of seismic surveys. The Gulf of Mexico OCS region has had the most oil and gas activity. The Alaska OCS encompasses the Arctic submerged lands, the Cook Inlet planning area, and the Gulf of Alaska. The Arctic waters of the Alaska OCS include the Beaufort and Chukchi planning areas and the Bering Sea. In the last 25 years, seismic activities in the Alaska OCS have generally taken place in the Cook Inlet and the Chukchi and Beaufort Seas. The Atlantic OCS region is divided into four areas for administrative purposes under BOEM's oil and gas leasing program: the North Atlantic, Mid-Atlantic, South Atlantic, and the Straits of Florida. At present, no active OCS oil and gas leases exist in any of these four planning areas. The most recent geological and geophysical seismic data for the Mid- and South Atlantic OCS were gathered more than 30 years ago. The Gulf of Mexico's central and western planning areas—offshore Texas, Louisiana, Mississippi, and Alabama—remain the United States' primary offshore source of oil and gas, generating about 97 percent of all OCS oil and gas production.
BOEM oversees offshore oil and gas resource-management activities, including preparing the 5-year OCS oil and gas leasing program, conducting lease sales and issuing leases, and receiving, reviewing, and approving oil and gas exploration and development and production plans. As part of its role, BOEM also issues permits for geological and geophysical data acquisition on the OCS, including seismic surveys, under the Outer Continental Shelf Lands Act and regulations under the act. BOEM does not have statutory review time frame requirements for issuing geological and geophysical seismic survey permits. Entities seeking to conduct geological and geophysical scientific research related to oil and gas but not associated with oil and gas exploration and development, including seismic surveys, generally do not need to obtain a permit from BOEM, but they are generally required to file a Notice of Scientific Research with the Regional Director of BOEM at least 30 days before beginning such research.

Environmental Impacts of Seismic Surveys

Man-made sources of ocean noise—such as commercial shipping, marine pile driving, sonar, and seismic activities—may have a variety of impacts on marine mammals, ranging from minor disturbance to injury or death. The effects of noise on marine mammals depend on a variety of factors, including the species and its behavior, as well as the frequency, intensity, and duration of the noise. NMFS and FWS evaluate the potential effects of activities, such as seismic surveys, on marine mammals in determining whether to authorize incidental take under the MMPA when such authorization is requested by entities engaging in those activities. Agencies are required to evaluate the potential environmental effects of their actions, such as approval of seismic survey permits, under the National Environmental Policy Act (NEPA) and, in cases where Endangered Species Act-listed species may be affected, to conduct Endangered Species Act section 7 consultations.
Marine Mammal Protection Act

The MMPA was enacted in 1972 to ensure that marine mammals are maintained at or restored to their optimum sustainable population. NMFS and FWS implement the MMPA, which generally prohibits the "taking" of marine mammals. However, the MMPA provides a mechanism for NMFS and FWS, upon request, to authorize the incidental take of small numbers of marine mammals by U.S. citizens engaging in a specified activity, other than commercial fishing, within a specified geographic region. Specifically, NMFS and FWS issue incidental take authorizations after finding that the activities will cause the taking of only small numbers of marine mammals of a species or stock, the taking will have a negligible impact on such marine mammal species or stocks, and the taking will not have an unmitigable adverse impact on the availability of the species or stock for taking for subsistence uses. Entities whose seismic survey activities may result in incidental take of marine mammals obtain an incidental take authorization from NMFS or FWS, or both, depending on the affected species. If operators incidentally take a marine mammal and do not have authorization to cover the incidental take, they would be in violation of the MMPA. By statute, incidental take authorizations must also include permissible methods of taking and means of affecting the least practicable adverse impact on affected species and stocks and their habitat, monitoring requirements, and reporting requirements.

National Environmental Policy Act

Under NEPA, federal agencies are required to evaluate the potential environmental effects of actions they propose to carry out, fund, or approve (e.g., by permit).
NEPA and implementing regulations set out an environmental review process that has two principal purposes: (1) to ensure that an agency carefully considers information concerning the potential environmental effects of proposed actions and alternatives to proposed actions and (2) to ensure that this information is made available to the public. Under NEPA, before approving any oil and gas leasing, exploration, geological and geophysical permits, or development activities, BOEM must evaluate the potential environmental effects of approving or permitting those activities. NMFS and FWS also must evaluate under NEPA the potential environmental effects of issuing an MMPA incidental take authorization as part of their review of the proposed authorizations. Generally, the scope of the proposed permit or authorization—that is, the federal action—determines whether the federal agency prepares an environmental assessment or a more detailed environmental impact statement. Agencies may prepare an environmental assessment to determine whether a proposed action is expected to have a potentially significant impact on the human environment. If, following the environmental assessment, the agency determines that the action will not have significant environmental impacts, the agency issues a Finding of No Significant Impact. If, prior to or during the development of an environmental assessment, the agency determines that the action may cause significant environmental impacts, an environmental impact statement should be prepared. In implementing NEPA, federal agencies may rely on "tiering," in which broader, earlier NEPA reviews are incorporated into subsequent site-specific analyses. Tiering is used to avoid duplication of analysis as a proposed activity moves through the NEPA process, from a broad assessment to a site-specific analysis.
If an agency would like to evaluate the potential significant environmental impacts of multiple similar or recurring activities, the agency can prepare a programmatic environmental assessment or environmental impact statement. To increase efficiency, BOEM uses this tiering process for the site-specific environmental analysis it prepares for each geological and geophysical permit application, tiering from either an existing environmental impact statement or an environmental assessment.

Endangered Species Act

The Endangered Species Act provides programs for conserving threatened and endangered species. Under section 7 of the act, federal agencies must ensure that any action they authorize, fund, or carry out is not likely to jeopardize the continued existence of any endangered or threatened species or result in the destruction or adverse modification of its critical habitat. To fulfill this responsibility, federal agencies must consult with NMFS or FWS, depending on the affected species, to assess the potential effects of proposed actions, including approval of seismic survey permits and authorization of incidental take under the MMPA, on threatened and endangered species. The Endangered Species Act allows NMFS and FWS to exempt incidental takings from the taking prohibition for endangered and threatened species through an incidental take statement. The statement is to include the amount or extent of anticipated take, reasonable and prudent measures to minimize the effects of incidental take, and the terms and conditions that must be observed. Formal consultations between federal agencies and NMFS or FWS are required where a proposed action could have an adverse effect on listed species or designated critical habitat and are concluded with the issuance by NMFS or FWS of biological opinions.
The biological opinion is to discuss in detail the effects of the proposed action on listed species and their critical habitat and contain NMFS's or FWS's opinion on whether the proposed action is likely to jeopardize the continued existence of the species or destroy or adversely modify any designated critical habitat. For consultations involving marine mammals, an Endangered Species Act section 7 incidental take statement cannot be issued until the incidental take has been authorized under the MMPA. Agencies may informally consult with NMFS or FWS; if the federal agency determines during such informal consultation that the proposed action is not likely to adversely affect endangered or threatened species or critical habitat, the informal consultation process is concluded upon written concurrence of NMFS or FWS, and no further action is necessary. If an action agency would like to evaluate the impacts of multiple similar or recurring activities on endangered and threatened species, NMFS or FWS can prepare a programmatic biological opinion for the OCS region.

BOEM's Process Differs by OCS Region, and BOEM Reviewed 297 Seismic Survey Permit Applications from 2011 through 2016

BOEM's Process for Reviewing Seismic Survey Permit Applications Differs by Selected OCS Region

BOEM has a documented process for reviewing seismic survey permit applications in each of the three selected OCS regions; the process differs at the final step, depending on the region (see fig. 2). For the Alaska and Atlantic regions, the applicant generally submits an application to BOEM for a seismic survey permit at the same time that the applicant submits an application to NMFS or FWS for an incidental take authorization. For the Gulf of Mexico region, the applicant has generally submitted only an application to BOEM for a seismic survey permit.
In all three regions, BOEM is required to conduct environmental reviews under NEPA and, as necessary, Endangered Species Act section 7 consultations to help ensure that agency actions, such as permit approvals, do not jeopardize the continued existence of a species or destroy or adversely modify critical habitat. In all three regions, when appropriate, BOEM is also to coordinate with relevant stakeholders, such as state officials, the Department of Defense, and the National Aeronautics and Space Administration, if proposed activities have the potential to interfere with defense or civil aerospace activities in the same area. The final step in BOEM's process for reviewing seismic survey permit applications differs among the three selected OCS regions. In the Atlantic region, BOEM intends to require that incidental take authorizations related to the seismic survey activities proposed in a permit application be in place before it issues a permit, whereas in the Alaska region, BOEM issues conditional permits while waiting for incidental take authorizations. In the Gulf of Mexico region, BOEM generally issues permits without requiring incidental take authorizations to be in place. Stakeholders from industry groups and BOEM officials we interviewed stated that differences in the review process were the natural result of the process adapting to the three different OCS regions and their histories of oil and gas exploration. For example, agency officials stated that, in terms of oil and gas activity, the Atlantic is a "frontier region" and, according to a stakeholder group, has vocal coastal communities that are uncomfortable with offshore energy development and, relatedly, the potential impacts of seismic surveys on marine mammals and commercial fishing.
If certain activities are considered controversial or have more vocal public opponents, they may result in an increased number of public comments the agency must review, which in turn may result in BOEM taking extra time to review applications for permits or NMFS requiring more time to review incidental take authorization applications, agency officials said. For example, in the Atlantic OCS, there was large, vocal public opposition to the proposed seismic surveys. Specifically, 126 municipalities, 1,200 officials, and over 40,000 businesses representing Republicans and Democrats opposed seismic surveying, according to testimony at a July 2017 hearing of the House Committee on Natural Resources. By contrast, according to BOEM officials and industry stakeholders we interviewed, the Gulf of Mexico region has a long history of offshore energy development and seismic survey activity. BOEM has issued permits in the Gulf of Mexico region without requiring an applicant to already have an incidental take authorization in place. According to two industry stakeholders we interviewed, obtaining permits in the Gulf of Mexico has been a fairly routine process. BOEM has made a policy decision to generally require an incidental take authorization in Alaska and the Atlantic but not in the Gulf of Mexico, agency officials said. While historically BOEM has not required incidental take authorizations in the Gulf of Mexico to be in place prior to issuing seismic survey permits, around 2002, ocean noise emerged as an environmental concern in the region, according to BOEM officials. At that time, BOEM requested incidental take regulations from NMFS for the Gulf of Mexico at the request of NMFS and on behalf of the industry and submitted revised requests in 2004, 2011, and 2016. According to BOEM officials we interviewed, the agency has been working with NMFS since 2002 to put incidental take regulations in place.
According to NMFS officials, BOEM’s 2002 request only addressed 1 of the 21 species present in the Gulf of Mexico, so NMFS requested that BOEM revise its request. The 2004 request included all marine mammals present in the area, according to NMFS officials. BOEM and NMFS agreed to require mitigation measures on all deep penetration seismic surveys in lieu of the formal authorization until completion of the pending rulemaking, according to BOEM officials. Meanwhile, in 2010, a consortium of environmental organizations sued Interior, alleging that BOEM permitted seismic activities in the Gulf of Mexico in violation of NEPA. In correspondence with BOEM, plaintiffs also alleged that seismic activities permitted by BOEM in the Gulf of Mexico resulted in the unauthorized take of marine mammals in violation of the MMPA. In June 2013, the parties reached an agreement providing for a temporary stay of all proceedings in the lawsuit until Final Action, as defined in the settlement agreement, with respect to BOEM’s application for incidental take regulations or until the expiration of 30 months, whichever occurs first. In addition, BOEM agreed to consider the appropriateness of prescribing additional mitigation measures for industry applicants related to seismic survey permits during the stay, including seasonal restrictions for coastal waters and certain monitoring and reporting requirements; the plaintiffs agreed not to challenge such permits for surveys implementing the mitigations during the stay. In February 2016, the parties agreed to extend the stay through September 25, 2017, subject to BOEM’s consideration of certain additional conditions on seismic surveys permitted in the Gulf of Mexico. In October 2016, BOEM submitted a revised request to NMFS for incidental take regulations governing geophysical surveys in the Gulf of Mexico. 
In December 2016, NMFS published in the Federal Register a notice of receipt and request for comments and information in response to BOEM's revised request for incidental take regulations. According to NMFS officials, the agency is currently working on developing incidental take regulations for the Gulf of Mexico region. In September 2017, the parties agreed to extend the stay through November 1, 2018.

From 2011 through 2016, BOEM Reviewed 297 Applications for Seismic Survey Permits and Issued 264 Permits

Based on our review of agency data, from 2011 through 2016, BOEM reviewed 297 applications for seismic survey permits. Of the 297 applications reviewed, BOEM issued 264 permits during this period, and the number of applications reviewed and permits issued varied by OCS region (see table 1). For the Gulf of Mexico region, which has had the most oil and gas activity, BOEM reviewed the most permit applications (268) and issued the most permits (250).

From 2011 through 2016, BOEM's Time Frames for Issuing Seismic Survey Permits Varied by OCS Region

BOEM does not have statutory review time frame requirements for issuing geological and geophysical seismic survey permits. The range of BOEM's review time frames—from the date the agency determined that an application was complete to when BOEM issued a seismic survey permit—varied by OCS region (see table 2 and fig. 3). This table does not include pending, denied, or withdrawn applications or Notices of Scientific Research. This table also does not include the Pacific Outer Continental Shelf region because the Bureau of Ocean Energy Management did not issue any seismic survey permits there from 2011 through 2016. The six permits issued in the Atlantic region were for high-resolution seismic surveys for non-oil and gas mineral resources.
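BOEM's review time frame, measured from the date an application is deemed complete to the date a permit is issued, could be computed from permit records along the following lines. This is a minimal sketch: the field names and sample records are hypothetical, not BOEM's actual data schema, and the 40- and 70-day goals are BOEM's internal Gulf of Mexico goals as described by agency officials.

```python
from datetime import date

# Hypothetical permit records (field names are assumptions for illustration).
permits = [
    {"type": "high-resolution", "complete": date(2015, 4, 1), "issued": date(2015, 5, 6)},
    {"type": "deep-penetration", "complete": date(2015, 6, 15), "issued": date(2015, 9, 30)},
]

# BOEM's internal Gulf of Mexico goals, per agency officials:
# 40 days for high-resolution permits, 70 days for deep-penetration permits.
GOALS = {"high-resolution": 40, "deep-penetration": 70}

def review_days(p):
    """Days from the application-complete date to permit issuance."""
    return (p["issued"] - p["complete"]).days

def within_goal_rate(records, survey_type):
    """Share of permits of a given type issued within the internal goal."""
    subset = [p for p in records if p["type"] == survey_type]
    met = sum(1 for p in subset if review_days(p) <= GOALS[survey_type])
    return met / len(subset)

print(review_days(permits[0]))                       # 35
print(within_goal_rate(permits, "high-resolution"))  # 1.0
```

Applied to the full set of Gulf of Mexico permit records, this kind of calculation yields the within-goal percentages reported for that region.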
Internally, according to BOEM officials, BOEM's goal in the Gulf of Mexico OCS region is to issue high-resolution seismic survey permits within 40 days and deep-penetration (airgun) permits within 70 days. Our analysis of BOEM data on seismic survey permits found that, in the Gulf of Mexico OCS region, the agency issued 103 of 108 high-resolution seismic survey permits (95 percent) within 40 days and 90 of 142 deep-penetration permits (63 percent) within 70 days.

NMFS and FWS Follow a Similar Process for Incidental Take Authorization Reviews, but Guidance Does Not Sufficiently Describe How to Record Certain Review Dates

NMFS and FWS follow a similar application review process for reviewing incidental take authorization applications, and from 2011 through 2016, the agencies reviewed a total of 35 applications. However, neither agency was able to provide accurate data for the dates on which it began its formal processing of these applications because neither agency's guidance sufficiently describes how to record certain review dates. As a result, it is not possible to determine whether the agencies were meeting their statutory time frames for the type of incidental take authorization application that has such time frames—the incidental harassment authorizations.

NMFS and FWS Follow a Similar General Process to Review Incidental Take Authorization Applications

Based on our review of agency guidance, NMFS and FWS follow a similar general process in reviewing applications for incidental take authorizations—both incidental harassment authorizations and letters of authorization with associated incidental take regulations—related to seismic survey activities (see fig. 4).
According to NMFS and FWS officials we interviewed, the incidental take authorization process is concurrent with, but separate from, BOEM’s process for issuing seismic survey permits, and entities seeking to conduct seismic surveys apply separately with each agency, as appropriate. When applicants apply for an incidental take authorization, they are first to decide which type of authorization they need—an incidental harassment authorization or a letter of authorization associated with incidental take regulations, depending on the expected effect on marine mammals. Specifically, if the proposed activity has the potential to result in the taking of marine mammals by harassment only, applicants can request an incidental harassment authorization. Incidental harassment authorizations can be issued for up to 1 year. The MMPA provides that NMFS or FWS shall issue incidental harassment authorizations within 120 days of receiving an application. If an activity has the potential to result in serious injury to marine mammals, the applicant would request incidental take regulations, which can be issued for up to 5 years. Letters of authorization are required to conduct activities pursuant to incidental take regulations. Once incidental take regulations are finalized, the applicant can submit a request for a letter of authorization, which is issued under the incidental take regulations. Once NMFS or FWS initially receives an application for an incidental harassment authorization or incidental take regulation, agency officials said that they begin their review and determine whether the application is adequate and complete. They also work with the applicant to obtain any additional required or clarifying information, according to agency officials we interviewed. 
According to agency regulations and guidance, once the agency deems an application to be adequate and complete, it begins to formally process the application and may initiate several review actions, including a NEPA environmental review and, if appropriate, an Endangered Species Act Section 7 consultation. In the case of NMFS, the agency publishes a notice of receipt of a request for incidental take regulations in the Federal Register. The agencies then publish in the Federal Register a proposed incidental harassment authorization or proposed incidental take regulations. For incidental harassment authorizations, the MMPA provides that NMFS or FWS, or both, are to publish a proposed incidental harassment authorization and request public comment in the Federal Register no later than 45 days after receiving an application. Following a 30-day public comment period for proposed incidental harassment authorizations, the agencies would make their final determination on the authorization, based on: the findings of their NEPA review, the Endangered Species Act consultation, an assessment of whether the proposed activity is consistent with the requirements of other statutes, as necessary, an analysis of the applicant’s ability to implement any necessary mitigation measures to reduce potential effects on marine mammals, and a review of the formal public comments submitted regarding the proposed application. Not later than 45 days after the close of the public comment period, NMFS and/or FWS is to, under the MMPA, issue an incidental harassment authorization, including any appropriate conditions. 
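The statutory milestones described above compose the MMPA's 120-day requirement: a proposed incidental harassment authorization no later than 45 days after receipt, a 30-day public comment period, and a final decision no later than 45 days after the comment period closes (45 + 30 + 45 = 120). A minimal sketch of how those deadlines fall out from an application date (the sample date is arbitrary):

```python
from datetime import date, timedelta

def iha_statutory_milestones(application_received: date) -> dict:
    """Compute the MMPA statutory deadlines for an incidental harassment
    authorization (IHA) review: proposed IHA published no later than
    45 days after receipt, a 30-day public comment period, and a final
    decision no later than 45 days after the comment period closes."""
    proposed_due = application_received + timedelta(days=45)
    comment_close = proposed_due + timedelta(days=30)
    final_due = comment_close + timedelta(days=45)
    return {
        "proposed_iha_due": proposed_due,
        "comment_period_closes": comment_close,
        "final_decision_due": final_due,
        "total_days": (final_due - application_received).days,  # 120
    }

milestones = iha_statutory_milestones(date(2016, 1, 4))
print(milestones["total_days"])  # 120
```

Note that these are outer bounds; an agency publishing the proposed authorization earlier than day 45 compresses the overall timeline accordingly.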
To issue an incidental harassment authorization, the relevant agency must make the required findings that the activity will result in a taking by harassment only of small numbers of marine mammals, that the anticipated take will have a negligible impact on the species or stock, and that the anticipated take will not have an unmitigable adverse impact on the availability of the species or stock for subsistence uses. For incidental take regulations, the agencies are to publish proposed regulations in the Federal Register and generally provide a public comment period of 30 to 60 days, depending on the type of authorization requested and circumstances that may warrant a shorter or longer period. The agencies then publish a final rule in the Federal Register, which includes the agencies' response to public comments received. Generally, an approved incidental take regulation becomes effective 30 days after the final rule is published. Once the regulation becomes effective, the agencies may issue letters of authorization (applications for which may have been submitted at the same time as the incidental take regulation request or following the implementation of the regulations), after determining whether the activities in the letter of authorization application are within the scope of the activities analyzed in the regulations. The relevant agency can issue a letter of authorization based on a determination under the agency's regulations that the level of any incidental takings will be consistent with the findings used to determine the total taking allowable under the specific regulations.
NMFS Reviewed and Approved Incidental Take Authorizations in Three OCS Regions, and FWS Reviewed and Approved Authorizations in the Alaska OCS From 2011 through 2016, based on our analysis of agency data, NMFS reviewed 28 applications for incidental take authorizations and issued 21 incidental take authorizations across the Alaska, Atlantic, and Gulf of Mexico OCS regions, and FWS reviewed and issued 7 authorizations only in the Alaska OCS, in part because the marine species under FWS’ jurisdiction do not tend to occur in waters of the OCS in the other regions. Of the 28 applications NMFS reviewed, it reviewed the most applications (18) and issued the most authorizations (16) related to seismic surveys in the Alaska region (see table 3). With regard to incidental take regulations, NMFS reviewed and issued one set of incidental take regulations related to seismic surveys in Alaska but did not receive applications for—and as a result has not issued—any letters of authorization associated with the incidental take regulations, agency officials said. There were no requests for incidental take regulations related to seismic surveys in the Atlantic region, and NMFS is currently developing incidental take regulations for the Gulf of Mexico, in response to BOEM’s request, as noted previously. From 2011 through 2016, FWS reviewed applications for and issued incidental take authorizations related to seismic surveys only in the Alaska region, in part because the species under FWS’ jurisdiction do not tend to occur in waters of the OCS in the other regions or there has not been industry interest in applying for incidental take authorizations in those regions, according to agency officials. Specifically, FWS reviewed and issued two incidental harassment authorizations and two incidental take regulations, which had five associated letters of authorization, for seismic activities in the Alaska OCS. 
Neither NMFS nor FWS Accurately Recorded Certain Review Dates Because Neither Agency's Guidance Sufficiently Describes How to Record Such Dates

From 2011 through 2016, NMFS did not accurately record the dates on which it determined applications to be adequate and complete, and FWS did not record those dates at all; therefore, it is not possible to determine NMFS's and FWS's time frames for reviewing incidental take authorization applications. As noted previously, both agencies, per their guidance and regulations, are to begin their formal processing of a request for an incidental take authorization once an application is determined to be "adequate and complete." NMFS has general guidance on what constitutes an adequate and complete incidental take authorization application—for both incidental harassment authorization and incidental take regulation applications, as well as associated letter of authorization applications. Specifically, NMFS's regulations and website outline 14 sections of information required in an incidental take authorization application, such as the anticipated impact of the activity on the species or stock of marine mammal. The agency's website also notes that adequate and complete means "with enough information for the agency to analyze the potential impacts on marine mammals, their habitats, and on the availability of marine mammals for subsistence uses." FWS also has general guidance on what constitutes an adequate and complete incidental take authorization application, for both incidental harassment authorizations and incidental take regulations, as well as associated letters of authorization. Specifically, FWS regulations and guidance specify that all applications must include certain pieces of information and note that if an application is determined to be incomplete, FWS staff are to notify the applicant within 30 days of receiving the application that information is lacking.
However, neither NMFS nor FWS guidance sufficiently describes how agency staff should record the date on which an application is determined to be adequate and complete, which would start the time frame for reviewing incidental take authorization applications. Specifically, NMFS’ guidance provides information on what should be included in an adequate and complete application but does not include information on how or when staff should record the date an application is determined to be adequate and complete. NMFS officials we interviewed told us that while they generally record these dates, they are not sufficiently accurate to be used for an analysis of review time frames. These officials said that determinations of whether an application is adequate and complete have historically varied by staff member, with some staff waiting until all outstanding questions are resolved with an applicant before deeming the application adequate and complete, and others considering an application to be adequate and complete if more substantive questions are answered (e.g., the dates, duration, specified geographic region of, and estimated take for the proposed activity), even if some less substantive questions are still outstanding (e.g., contact information). In addition, NMFS officials told us that, in some cases, staff might not enter into their system the date they determine an application to be adequate and complete and might instead enter the information in batches once they have a few applications that are ready for data entry. This might mean that, in cases where a staff member waits until an application is done being processed and reviewed, the date recorded for the determination of adequate and complete, and the date the incidental take authorization is published, may be zero to a few days apart. 
Based on our review of NMFS data, in at least two cases, the date NMFS recorded for the determination of adequacy and completeness of an application was after the date when the proposed incidental take authorization was published in the Federal Register. While FWS has guidance on what applicants should include in an incidental take authorization application, the guidance does not specify how or when staff should record the date on which they determine an application is adequate and complete. One FWS official we interviewed told us that the agency does not record this date in the spreadsheet for tracking incidental take authorization applications. According to this FWS official, agency officials do not record this date because they do not wait until the application is considered adequate and complete to begin their review. Instead, they begin processing the application while working with applicants to provide missing information and clarifications. By the time FWS officials consider an application to be adequate and complete, the officials said that they usually have a well-developed draft incidental take authorization and are typically finalizing details with the applicant. According to FWS officials, recording an adequate and complete date would have little meaning. NMFS’s and FWS’s guidance does not specify how or when staff should record the date an application is determined to be adequate and complete to help ensure that such a date is recorded consistently. As a result, the agencies are either not accurately recording the date an application is adequate and complete or not recording that date. Thus, the agencies are not able to determine how long their formal processing takes. This outcome is inconsistent with federal internal control standards, which call for management to use quality information to achieve agency objectives and design control activities, such as accurate and timely recording of transactions, to achieve objectives and respond to risk. 
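The date anomalies described above lend themselves to a simple data-quality check: flag any record whose "adequate and complete" date is missing or falls after the Federal Register publication of the proposed authorization. A minimal sketch, with hypothetical record IDs, dates, and field names (not an agency schema):

```python
from datetime import date

# Hypothetical incidental take authorization review records.
applications = [
    {"id": "IHA-001", "adequate_complete": date(2015, 3, 2),
     "proposed_published": date(2015, 6, 10)},
    {"id": "IHA-002", "adequate_complete": date(2015, 9, 20),
     "proposed_published": date(2015, 9, 1)},   # recorded after publication
    {"id": "IHA-003", "adequate_complete": None,
     "proposed_published": date(2016, 2, 5)},   # date never recorded
]

def flag_inconsistent(records):
    """Flag records where the 'adequate and complete' date is missing or
    falls after the proposed authorization's publication date."""
    flagged = []
    for r in records:
        if r["adequate_complete"] is None:
            flagged.append((r["id"], "date not recorded"))
        elif r["adequate_complete"] > r["proposed_published"]:
            flagged.append((r["id"], "recorded after publication"))
    return flagged

print(flag_inconsistent(applications))
# [('IHA-002', 'recorded after publication'), ('IHA-003', 'date not recorded')]
```

A check of this kind only surfaces internally inconsistent records; it cannot substitute for guidance telling staff when the date should be recorded in the first place.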
Officials we interviewed at both agencies told us that they work to help meet applicants’ project timelines—for example, applicants might need an incidental harassment authorization to be in place when their seismic survey vessel becomes available to begin operations. Until NMFS and FWS develop guidance that clarifies how and when staff should record the date on which the agency determines the “adequacy and completeness” of an application, the agencies and applicants will continue to have uncertainty around review time frames for incidental take authorizations. Further, NMFS and FWS do not know if they are meeting their statutory time frames for reviewing one type of incidental take authorization application—incidental harassment authorization applications—because they do not assess the time it takes their agencies to review applications and make authorization decisions. As noted previously, the MMPA provides that NMFS or FWS shall issue incidental harassment authorizations within 120 days of receiving an application. Industry representatives, scientific researchers, and agency officials we interviewed noted, however, that the agencies often take longer than 120 days to make a decision about whether to issue an incidental harassment authorization. For example, NMFS and FWS officials we interviewed told us they often do not complete incidental harassment authorization reviews within the 120-day statutory time frame. According to NMFS and FWS officials, reviews may take longer than 120 days in cases where the agency determines that a threatened or endangered species under the Endangered Species Act may be affected, because the agency generally must request the initiation of a section 7 consultation, which by regulation can take up to 135 days. 
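The analysis the agencies lack amounts to computing elapsed days from the adequate-and-complete determination to the authorization decision and comparing each result against the 120-day statutory window. A minimal Python sketch, using made-up identifiers and dates:

```python
from datetime import date

STATUTORY_DAYS = 120  # MMPA time frame for incidental harassment authorizations

# Illustrative review records: (adequate-and-complete date, decision date).
# These dates are invented for the example, not actual agency data.
reviews = {
    "IHA-A": (date(2015, 1, 5), date(2015, 4, 20)),
    "IHA-B": (date(2015, 2, 1), date(2015, 9, 30)),
}

def review_days(start, end):
    """Elapsed days between the adequate-and-complete date and the decision."""
    return (end - start).days

for app_id, (start, end) in reviews.items():
    elapsed = review_days(start, end)
    status = "within" if elapsed <= STATUTORY_DAYS else "exceeds"
    print(f"{app_id}: {elapsed} days ({status} the {STATUTORY_DAYS}-day time frame)")
```

Run periodically over all completed reviews, a comparison like this would show management the share of authorizations issued within the statutory time frame and by how much the remainder exceeded it.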
More specifically, NMFS and FWS officials we interviewed were unable to provide accurate estimates of how long it takes their agency to review incidental harassment authorization applications because they said that they do not conduct analyses of their review time frames. This practice is inconsistent with federal standards for internal control, which call for agency management to design control activities to achieve objectives and respond to risks, including by comparing actual performance to planned or expected results throughout the organization and analyzing significant differences. Without analyzing how long it takes to review incidental harassment authorization applications, from the date the agency determines that an application is adequate and complete until the date an application is approved or denied, and comparing it to the statutory review time frame, NMFS and FWS will be unable to determine whether they are meeting their objectives of completing reviews within the statutory time frame of 120 days.

For Several Years, BOEM and NMFS Have Been Reviewing Certain Seismic Survey Permit and Incidental Take Authorization Applications in the Atlantic OCS

As of October 2017, in addition to the six permits BOEM issued in the Atlantic OCS from 2011 through 2016, another six permits were pending a decision. Five related incidental harassment authorizations have also been pending a decision by NMFS, as of October 2017.

BOEM Has Been Reviewing Six Seismic Survey Permit Applications in the Atlantic OCS Region for Several Years

As of October 2017, in addition to the six permits BOEM issued in the Atlantic OCS from 2011 through 2016, another six permits were pending a decision. From March to May 2014, BOEM received these six applications for seismic survey permits in the Atlantic region (see fig. 5).
Of the six applicants that applied to BOEM during that time, five also applied to NMFS for incidental harassment authorizations related to their seismic survey permit applications, from August 2014 to January 2016. The sixth applicant that applied to BOEM for a seismic survey permit in the Atlantic OCS region did not apply for an incidental harassment authorization with NMFS, according to NMFS officials. BOEM officials we interviewed stated that beginning in August 2014, the agency began conducting outreach to Atlantic state officials to explain the geological and geophysical permitting process and the seismic technologies involved in the applications. In addition, according to BOEM officials, the agency began coordinating with the Department of Defense and the National Aeronautics and Space Administration to ensure that the proposed seismic surveys did not interfere with any of their activities. According to BOEM data we reviewed, the agency had determined all six applications to be “accepted,” or complete, in late April to early June 2014. In March 2015, BOEM made the applications available for public comment for 10 or 30 days, depending on the type of activity proposed. According to BOEM officials, while the agency does not generally provide a similar public comment period for the Gulf of Mexico or Alaska OCS regions, once the Atlantic applications were considered “accepted,” BOEM decided to provide a public comment period for them because the region is considered a “frontier area”—a region without a long history of oil and gas development—and local communities in Atlantic states are less familiar with the impacts of seismic surveys than communities in the Gulf states. From March 2015 until January 2017, BOEM had no data on further review activities that took place.
BOEM officials we interviewed told us that their seismic survey permit reviews were complete, but the agency did not issue the seismic survey permits because it had made a policy decision to wait for NMFS to issue incidental harassment authorizations before doing so. In January 2017, BOEM denied the six applications for deep-penetration seismic survey permits in the Atlantic OCS region after reviewing the applications for 948 to 982 days. In May 2017, BOEM announced it would reconsider the six applications for seismic survey permits in the Atlantic region, after the new administration rescinded the permit denials. As of August 2017, BOEM officials we interviewed were unable to provide estimates of when the agency’s reviews would be completed.

NMFS Has Been Reviewing Incidental Harassment Authorization Applications Related to Seismic Survey Permits in the Atlantic OCS for Several Years

In addition to the four incidental harassment authorizations NMFS approved in the Atlantic OCS region from 2011 through 2016, there are five authorization applications related to seismic survey permits that are pending a decision by NMFS, as of October 2017. NMFS received three incidental harassment authorization applications related to seismic surveys in the Atlantic OCS region from August to September 2014, a fourth in March 2015, and a fifth in January 2016 (see fig. 6). In fall 2014, NMFS redirected staff reviewing the Atlantic incidental harassment authorization applications to work on issues related to the agency’s Fisheries Science Center, according to an NMFS official we interviewed. According to this official, review of the Atlantic applications resumed in February 2015. In spring 2015, NMFS became aware of some academic studies concerning the impacts of seismic surveys on marine mammals that they felt would be important to consider with the Atlantic OCS applications under review, according to agency officials we interviewed.
According to these officials, NMFS notified applicants of these studies, and one applicant voluntarily revised its impact estimates based on the studies. In summer 2015, NMFS officials said they determined the three applications were sufficiently complete to begin processing. The agency also published a formal notice of receipt and request for comments in the Federal Register. According to NMFS officials we interviewed, this procedure is not a required step in the incidental harassment authorization review process, but NMFS officials thought it was important to solicit the input, given potential local community concern over the surveys in the Atlantic OCS region. Also according to NMFS officials, based on comments received during the public comment period, NMFS determined one application had been erroneously considered complete and returned the application to the applicant. In fall 2015, NMFS officials informed applicants that NMFS would need revised applications based on the new academic studies. In addition to the applicant noted above who updated its application in spring 2015, one additional applicant chose to update its application in fall 2015, and NMFS updated two additional remaining applications. NMFS officials told us they received the last major revisions to the applications in May 2016 and were reviewing and drafting mitigation and monitoring proposals throughout 2016. In November 2016, according to NMFS officials, the five proposed incidental harassment authorizations were ready to be published in the Federal Register, but internal leadership placed the process on hold due to uncertainty regarding BOEM’s actions on the permits. Following BOEM’s denials in January 2017, NMFS suspended the five incidental harassment authorization applications related to the denied seismic survey permits; according to NMFS officials, NMFS determined there was no longer a valid basis for any proposed activity following BOEM’s denial of permits for the actual activity. 
Agency officials informed applicants that NMFS may resume its incidental harassment authorization review if BOEM resumed its permit review at some point in the future. Once BOEM announced it would reconsider the six applications for seismic survey permits in the Atlantic region, NMFS published five proposed incidental harassment authorizations related to the permits being reconsidered by BOEM in June 2017. In July 2017, NMFS extended the public comment period an additional 15 calendar days for a total of 45 days. After the close of the public comment period, under the MMPA, NMFS is to finalize its decision regarding the applications and either publish the final incidental harassment authorizations or deny the applications. As of October 2017, officials we spoke with at NMFS were unable to provide estimates of when the agency’s reviews would be completed.

Conclusions

Offshore seismic surveys provide federal agencies and commercial entities with a wide range of information, including data on fault zones and geology that may indicate the presence of oil and gas. This information can help inform regulatory and resource development decisions. In reviewing applications for seismic survey permits, BOEM records the date on which an application for a seismic survey permit is “accepted,” or complete, which may be weeks or months after an application is received. NMFS and FWS, however, were unable to provide accurate data on the dates that they determined applications for incidental take authorizations were adequate and complete because the agencies’ guidance does not specify how or when staff should record this date. Until NMFS and FWS develop guidance that clarifies how and when staff should record the date the agency determines the “adequacy and completeness” of an application, the agencies and applicants will continue to have uncertainty around review time frames for incidental take authorizations.
Moreover, NMFS and FWS officials we interviewed said that they do not analyze their review time frames, a practice that is inconsistent with federal standards for internal control. Without analyzing how long it takes to review incidental harassment authorization applications and comparing time frames to the statutory review time frame, NMFS and FWS will be unable to determine whether they are meeting their statutory review time frame of 120 days.

Recommendations for Executive Action

We are making the following four recommendations, including two to NMFS and two to FWS. Specifically:

The Assistant Administrator for Fisheries of NMFS should develop guidance that clarifies how and when staff should record the date on which the agency determines the “adequacy and completeness” of an incidental take authorization application. (Recommendation 1)

The Principal Deputy Director of FWS should develop guidance that clarifies how and when to record the date on which the agency determines the “adequacy and completeness” of an incidental take authorization application. (Recommendation 2)

The Assistant Administrator for Fisheries of NMFS should analyze the agency’s time frames for reviewing incidental harassment authorization applications—from the date the agency determines that an application is adequate and complete until the date an application is approved or denied—and compare the agency’s review time frames to the statutory review time frame. (Recommendation 3)

The Principal Deputy Director of FWS should analyze the agency’s time frames for reviewing incidental harassment authorization applications—from the date the agency determines that an application is adequate and complete until the date an application is approved or denied—and compare the agency’s review time frames to the statutory review time frame. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a copy of this report to the Departments of Commerce and the Interior for review and comment.
The Department of Commerce provided comments on behalf of the National Marine Fisheries Service (NMFS). NMFS agreed with our recommendations but recommended changes to some of the terms used in our report and stated that our characterization of the statutory and mandated requirements did not fully describe the extent of review and analysis required during their review. While we believe that our description of the extent and complexity of NMFS’ review and analysis, including the terms we use to describe NMFS’ process, was sufficient for this report, we revised the report as appropriate. In its letter, NMFS acknowledged that it does not consistently record the date that an application is deemed “adequate and complete,” agreed with our recommendations, and described the steps it plans to take to address them. The Department of Commerce also provided technical comments, which we incorporated throughout our report as appropriate. The Department of Commerce’s letter can be found in appendix II. The Department of the Interior provided comments on behalf of the Bureau of Ocean Energy Management (BOEM) and the U.S. Fish and Wildlife Service (FWS). FWS partially concurred with our first recommendation and fully concurred with our second. Regarding the first recommendation, FWS noted that it plans to develop guidance for recording the “adequate and complete” date of incidental harassment authorization applications; however, it did not indicate that it would develop such guidance for the other type of incidental take authorization—the incidental take regulations. We believe that FWS should develop guidance for both. Such guidance is necessary to maintain consistency with federal internal control standards, which call for management to use quality information to achieve agency objectives and design control activities, such as accurate and timely recording of transactions, to achieve objectives and respond to risk.
The Department of the Interior also provided technical comments, which we incorporated throughout our report as appropriate. The Department of the Interior’s letter can be found in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Director of BOEM, the Assistant Administrator for Fisheries of NMFS, and the Principal Deputy Director of FWS. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made significant contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) BOEM’s process for reviewing seismic survey permit applications in each OCS region, the number of applications reviewed from 2011 through 2016, and BOEM’s review time frames; (2) NMFS’s and FWS’s processes for reviewing incidental take authorization applications related to seismic surveys in each OCS region, the number of such applications reviewed by the agencies from 2011 through 2016, and their review time frames; and (3) the status of pending seismic survey permit applications and related incidental take authorizations in the Atlantic OCS region. In our preliminary review of all four OCS regions—Alaska, the Atlantic, the Gulf of Mexico, and the Pacific—we determined that there had been no new oil and gas and related seismic activity in the Pacific OCS region for the last two decades; as a result, we excluded the Pacific OCS region from our review.
To examine BOEM’s, NMFS’s, and FWS’s processes for reviewing seismic survey permit applications and related incidental take authorizations, we analyzed relevant laws and regulations that govern the processes and reviewed and analyzed agency guidance, such as process flowcharts, and other documents, including Federal Register notices. We also interviewed BOEM, NMFS, and FWS agency officials, in their headquarters and regional offices, responsible for overseeing seismic permitting and incidental take authorization reviews in each selected OCS region. In addition, we interviewed a range of stakeholders, identified and selected because of their knowledge of the seismic survey permit and incidental take authorization application processes, to obtain their views. Specifically, we interviewed representatives from 10 stakeholder groups, which included industry groups, a research institution, and environmental organizations. Because this was a nonprobability sample of stakeholders, the views of stakeholders we spoke with are not generalizable beyond those groups that we interviewed. To examine the number of seismic survey permit applications and related incidental take authorizations that BOEM, NMFS, and FWS reviewed from 2011 through 2016, we obtained data from BOEM, NMFS, and FWS on the number of permit and authorization applications each agency reviewed and the number of permits and authorizations the agencies issued in each selected OCS region. We asked the agencies to categorize their data with different types of seismic survey technologies (e.g., deep-penetration seismic surveys, high-resolution seismic surveys, or other seismic survey technology such as vertical seismic profile technology). As a result, we identified the number of relevant permits and authorizations that were identified by these agencies as having used seismic survey technologies. 
We used publicly available information on the number of permit and authorization applications on agency websites to check the reliability of BOEM, NMFS, and FWS data and found the data on the number of permits and authorizations to be sufficiently reliable for our purposes. To examine the review time frames for seismic survey permit applications and related incidental take authorizations from 2011 through 2016, as well as pending applications, and the extent to which NMFS and FWS are meeting their statutory time frames for reviewing incidental harassment authorization applications related to seismic survey permits, we obtained data from BOEM, NMFS, and FWS. We also interviewed agency officials knowledgeable about the data and analyzed the data to determine the range of review time frames by agency and by selected OCS region. We focused our review of pending applications on the Atlantic OCS region because it was the only region with applications that had been pending review for several years. We used information on the dates applications were received and issued as listed in the Federal Register or publicly available documentation to check the reliability of BOEM, NMFS, and FWS data. For BOEM, we found the dates the agency gave us generally were consistent with the dates listed in the Federal Register. As a result, we used BOEM’s dates from the time an application was deemed “accepted,” or adequate and complete, until the permit was issued. We found the data to be sufficiently reliable for our purposes. For NMFS and FWS, we found errors between the dates the agencies gave us and the dates listed in the Federal Register. In addition, the agencies told us they did not have reliable information on the dates that applications were determined to be adequate and complete. We also examined NMFS and FWS guidance on review time frames, agency communication with applicants, and data-recording procedures.
We also interviewed agency officials as well as industry stakeholders to learn more about time frames for seismic survey permit applications and related incidental take authorizations.

Appendix II: Comments from the Department of Commerce

Appendix III: Comments from the Department of the Interior

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Christine Kehr (Assistant Director), Nirmal Chaudhary, Maggie Childs, John Delicath, Marissa Dondoe, Cindy Gilbert, Jessica Lewis, Greg Marchand, Patricia Moye, Katrina Pekar-Carpenter, Caroline Prado, Dan Royer, and Kiki Theodoropoulos made key contributions to this report.
Why GAO Did This Study

Offshore seismic surveys provide federal agencies and other entities with a wide range of data, from research on fault zones to geology that may indicate the presence of oil and gas. Companies seeking to conduct such surveys to find oil and gas resources in the OCS must obtain a permit from BOEM—which oversees offshore oil and gas activities. Man-made sources of ocean noise, such as seismic surveys, may harm marine mammals. Entities whose activities may cause the taking of marine mammals, which includes harassing or injuring an animal, may obtain incidental take authorizations for seismic surveys from NMFS or FWS, depending on the potentially affected species. GAO was asked to provide information on the seismic permitting process. This report examines (1) BOEM's review process, the number of permit applications reviewed from 2011 through 2016, and its review time frames; and (2) NMFS's and FWS's review process, the number of incidental take authorization applications reviewed from 2011 through 2016, and their review time frames, among other objectives. GAO reviewed laws and regulations and agency documents, analyzed data on applications to BOEM, NMFS, and FWS, and interviewed agency officials.

What GAO Found

The Department of the Interior's Bureau of Ocean Energy Management's (BOEM) process and time frames for reviewing seismic survey applications differ by region along the Outer Continental Shelf (OCS). From 2011 through 2016, BOEM reviewed 297 applications and issued 264 seismic survey permits, and the reviews' time frames differed by region (see table). As part of the process, BOEM may require approved “incidental take” authorizations from the Department of Commerce's National Marine Fisheries Service (NMFS) or Interior's U.S. Fish and Wildlife Service (FWS), given the possibility such surveys may disturb or injure marine mammals.
BOEM does not have statutory review time frame requirements for issuing permits, and officials said the agency starts its formal review once it determines that an application is complete. In some cases, the agency issued a permit on the same day it determined an application was complete. NMFS and FWS follow a similar general process for reviewing incidental take authorization applications related to seismic survey activities. From 2011 through 2016, NMFS and FWS reviewed 35 and approved 28 such applications across the three OCS regions, including some authorizations related to BOEM permits as well as research seismic surveys not associated with BOEM permits. NMFS was unable to provide accurate data for the dates the agency determines an application is adequate and complete—and FWS does not record this date. For example, based on GAO's review of NMFS data, in at least two cases, the date NMFS recorded as the date an application had been determined adequate and complete was after the date when the proposed authorization was published in the Federal Register. Federal internal control standards call for agencies to use quality information. Without guidance on how to accurately record review dates, agencies and applicants will continue to have uncertainty around review time frames. Further, under the Marine Mammal Protection Act, the agencies are to review one type of incidental take authorization application—incidental harassment authorization applications—within 120 days of receiving an application for such authorizations. NMFS and FWS have not conducted an analysis of their review time frames. Not conducting such an analysis is inconsistent with federal internal control standards that call for agency management to design control activities to achieve objectives and respond to risks.
Without analyzing the review time frames for incidental harassment authorization applications and comparing them to statutory review time frames, NMFS and FWS are unable to determine whether they are meeting their objectives to complete reviews in the 120-day statutory time frame.

What GAO Recommends

GAO is recommending that both NMFS and FWS develop guidance clarifying how and when staff should record review dates of incidental take authorization applications and analyze how long the reviews take. NMFS agreed and FWS partially agreed with our recommendations.
Background

Navy’s Real Property Audit Assertions

DOD has defined audit readiness as having the capabilities in place that allow an auditor to plan and perform a full financial statement audit that results in actionable feedback to DOD. In DOD’s May 2016 FIAR Plan Status Report, the DON initially asserted that it would be audit ready with regard to real property (including construction-in-progress) for the existence and completeness assertions by June 2016 and with regard to the valuation assertion by March 2017. Subsequently, in DOD’s November 2016 FIAR Plan Status Report, the DON asserted that it would be audit ready for the existence, completeness, and valuation assertions by March 2017. In DOD’s May 2017 FIAR Plan Status Report, the DON reported that it had validated the existence and completeness assertions for real property. Ultimately, the DON reported in DOD’s November 2017 FIAR Plan Status Report that it had achieved audit readiness for the existence and completeness assertions and was in the process of determining audit readiness for the valuation assertion.

Real Property Valuation Methods

In August 2016, the Federal Accounting Standards Advisory Board issued Statement of Federal Financial Accounting Standards (SFFAS) No. 50, which allows reporting entities to apply alternative valuation methods in establishing opening balances of general property, plant, and equipment (G-PP&E). Such alternative valuation methods may be applied in reporting periods beginning after September 30, 2016. SFFAS No. 50 permits each reporting entity to use alternative methods when presenting financial statements, or one or more line items, (1) for the first time or (2) after a period during which existing systems could not provide the information necessary for producing financial statements in accordance with generally accepted accounting principles (GAAP) without using alternative methods. SFFAS No.
50 permits reporting entities to apply an alternative method only once per line item after the period during which the existing systems could not provide the information for producing financial statements in accordance with GAAP. As of March 2018, the Navy has not made an unreserved assertion attesting that its opening balances of G-PP&E are reported in accordance with SFFAS No. 50. After opening balances are established using an alternative valuation method, federal accounting standards require historical cost to be used in valuing G-PP&E acquired or constructed.

Plant Replacement Value Being Used to Develop Opening Balances for the Navy’s Buildings

DOD already uses plant replacement value (PRV) for decision making and management purposes and has reported that it will use PRV to develop opening balances for the Navy’s buildings. Navy is also currently using PRV (an allowable alternative valuation method under SFFAS No. 50) for financial statement reporting of its buildings and plans to do so until the DON makes an unreserved assertion that its financial statements or its G-PP&E line item or reported asset classes are presented fairly in accordance with GAAP. PRV represents an estimate of the replacement cost in current year dollars to design and construct a facility to replace an existing facility at the same location. As such, the replacement (or construction) cost factor, generally applied to buildings as a dollar amount multiplied by square footage, is also indexed to increase or decrease the amount to account for other variations in costs for different geographic areas or complexity of the facility. Once the calculation prescribed by the formula has resulted in PRV, accumulated depreciation is computed based on the placed in service date. Figure 1 shows an example of the PRV formula being applied to an enlisted housing facility. The valuation adjustment factors, as shown below, vary by location and use of the building.
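The arithmetic of the PRV calculation described above, a construction cost per square foot multiplied by square footage and indexed by location and facility-type adjustment factors, with accumulated depreciation computed from the placed-in-service date, can be sketched as follows. The factor names and values here are illustrative assumptions, not DOD's published cost factors, and straight-line depreciation is assumed for the example:

```python
def plant_replacement_value(sq_ft, cost_per_sq_ft, area_cost_factor, adjustment_factors):
    """PRV = unit construction cost x size, indexed for geographic area
    and facility complexity. Factor names/values are illustrative only."""
    prv = sq_ft * cost_per_sq_ft * area_cost_factor
    for factor in adjustment_factors:
        prv *= factor
    return prv

def straight_line_depreciation(value, placed_in_service_year, as_of_year, useful_life_years):
    """Accumulated depreciation from age since the placed-in-service date,
    capped at the asset's useful life (straight-line method assumed)."""
    age = min(as_of_year - placed_in_service_year, useful_life_years)
    return value * age / useful_life_years

# Example: a notional enlisted housing facility (all inputs invented)
prv = plant_replacement_value(sq_ft=50_000, cost_per_sq_ft=200.0,
                              area_cost_factor=1.10, adjustment_factors=[1.05])
accum_dep = straight_line_depreciation(prv, placed_in_service_year=1990,
                                       as_of_year=2016, useful_life_years=40)
net_book_value = prv - accum_dep
```

With these invented inputs, PRV comes to about $11.55 million, with roughly $7.5 million of accumulated depreciation after 26 years of a 40-year assumed life; the actual figures for any building depend on DOD's published unit costs and the factors shown in figure 1.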
Acquisitions and capital improvements made to existing buildings during subsequent financial periods are to be recorded at the actual cost of obtaining the asset or improvement and placing it into service.

Internal Control Activities

Internal control activities, as defined in Standards for Internal Control in the Federal Government, are the policies, procedures, and techniques that enforce management’s directives to achieve the entity’s objectives and address related risks. A deficiency in internal control exists when the design, implementation, or operation of a control does not allow management or personnel, in the normal course of performing their assigned functions, to achieve control objectives and address related risks.

Internal Control Deficiencies Impaired the Navy’s Ability to Properly Record and Report Certain of Its Buildings

We identified internal control deficiencies that impaired the Navy’s ability to assert that as of September 30, 2016, (1) buildings recorded in iNFADS and reported as assets in Navy’s financial statements existed and (2) all of the Navy’s buildings were recorded in iNFADS and correctly reported as assets in Navy’s financial statements. As shown in figure 2, the effects of these internal control deficiencies contributed to the Navy (1) continuing to maintain records in iNFADS for buildings that had been demolished, sometimes many years ago, and including these buildings as assets in its financial statements; (2) excluding some of the buildings that it owns from being recorded in iNFADS and reported as assets in its financial statements; (3) erroneously reporting nonfunctional buildings as assets in its financial statements; and (4) excluding certain buildings from being reported as assets in its financial statements that met or exceeded DOD’s capitalization threshold.
The Navy Did Not Have Properly Designed Procedures and Related Control Activities to Reasonably Assure Proper Accounting for Its Demolished Buildings

While the Navy had written procedures for the multistep process for disposal of real property by demolition, these procedures and related control activities were not properly designed to reasonably assure that demolished buildings were recorded as disposed and removed from the accounting records. Specifically, the procedures and related control activities did not reasonably assure that RPAOs were provided with a signed demolition approval document and the related disposal form. Without these documents, an RPAO may not be aware that a building has been demolished and therefore may not take the appropriate actions to record the asset as disposed in iNFADS so that the asset record is subsequently removed from iNFADS at the end of the fiscal year and the asset is thereby not included in Navy’s financial statements. When a building is designated for disposal, multiple parties are involved in the demolition process. This business process can involve the installation’s Public Works Department; the Regional Commander; the Facilities Engineering Command realty specialist; the Commander, Navy Installations Command; the demolition project manager; the demolition contractor; and the General Services Administration. The multiple functional offices involved in the disposal by demolition business process and the lack of communication between the offices can result in buildings being demolished without the RPAO’s knowledge. The Navy’s procedures for the disposal of real property by demolition state that the RPAO is to receive a signed demolition approval document from the installation’s Public Works Department. After the demolition has been completed, the project manager is to work with the demolition contractor (if applicable), the planner, and the RPAO to complete the disposal form.
The RPAO, within 10 days of the completion of the demolition, is to upload supporting documentation about the disposed asset into iNFADS and create the iNFADS disposal record. The Navy’s procedures did not include a control activity, such as a step to verify the RPAO’s receipt of a signed demolition approval document and disposal form, to reasonably assure that the RPAOs are notified of all building demolitions. These notifications are critical so that each RPAO can properly account for a building by creating an iNFADS disposal record, which ultimately results in records for demolished buildings being deleted from iNFADS and therefore not included as assets in the financial statements. During our testing of a nongeneralizable sample of buildings in iNFADS, we identified buildings that had been demolished prior to September 30, 2016, but were still recorded in iNFADS as of September 30, 2016, and therefore were reported as assets in Navy’s financial statements as of September 30, 2016. According to SFFAS No. 6, Accounting for Property, Plant, and Equipment, assets, including real property, shall be removed from the asset accounts along with the associated accumulated depreciation if the asset no longer provides service to the operations of the entity. The inclusion of demolished buildings in iNFADS results in inaccurate Navy real property records and can lead to an overstatement of reported balances for real property in Navy’s financial statements. Of the 40 buildings for which we performed book-to-floor tests for existence, we found that 4 had been destroyed and no longer physically existed but were still recorded in iNFADS and reported as assets in Navy’s financial statements. Because we used a nongeneralizable sample of buildings, results from the sample cannot be used to make inferences about all of the Navy’s buildings. The four demolished buildings are described below. 
A six-car garage building had been demolished several years ago according to the Navy, but its operational status was shown as active in iNFADS as of September 30, 2016. Navy officials stated that while the actual demolition date for this building is not known, based on the demolition drawing for another building nearby, it appears to the Navy that the garage was demolished prior to 2001. A marina shop building was demolished as of June 30, 2016, so that a new building could be constructed at the same location. As of September 30, 2016, the operational status of this marina shop was shown as active in iNFADS. The disposal of the marina shop building was not recorded in iNFADS until May 2017. A storage building was demolished in February 2016 but was still recorded in iNFADS as of September 30, 2016. The RPAO was not notified that the building had been demolished until April 2016. After searching for the relevant paperwork, which could not be located, the RPAO prepared the disposal form that was dated December 20, 2016. An aviation warehouse, which had previously been demolished, was still recorded in iNFADS as of September 30, 2016. According to Navy officials, the demolition package was initiated in 2013, but the warehouse needed to remain in iNFADS until the site restoration work was completed. Based on available information, the warehouse was demolished around May 2014. The site restoration work was completed in 2016, but the RPAO was never notified. According to supporting documentation, a search for the relevant paperwork was completed, after which the building was recorded in iNFADS as disposed in March 2017. Consistent with our findings, the Navy Office of Financial Operations, in preparing a white paper on real property accumulated depreciation, also found that there were buildings recorded as existing in iNFADS that did not exist. For this white paper, the Navy selected a generalizable sample of 650 real property assets, including buildings, to test. 
As noted in the white paper, as of May 31, 2017, only 584 of the 650 sampled real property assets were able to be tested. Specifically, 51 could not be validated, and an additional 15 real property assets, or 2.5 percent of the sample, were found to not exist but were still recorded in iNFADS as existing. Based on Navy's testing, we estimated that 2.5 percent of real property in the Navy's iNFADS database as of May 31, 2017, no longer existed but had not been recorded in iNFADS as disposed. The Navy Lacks Procedures and Related Control Activities to Reasonably Assure Proper Accounting for Certain Buildings During some of our site visits, the RPAOs stated that some buildings acquired or constructed with non-military construction funds (Non-MILCON) and that cost under $750,000 were not recorded in iNFADS. A Navy official confirmed that there were issues with recording Non-MILCON construction costing $250,000 and above, but under $750,000, for financial reporting purposes. Specifically, buildings or capital improvements are sometimes built using other Non-MILCON funding, and in some cases, an entity other than NAVFAC spends the funds. The RPAOs therefore may not know of buildings constructed as Non-MILCON projects if NAVFAC was not involved in the construction project. For example, at one location, we observed a sentry house that had been constructed for the Navy using Non-MILCON funding around 2006. However, the sentry house was not recorded into iNFADS until 2014, when the building was identified as existing through the Navy's physical inventory procedures. NAVFAC did not have final procedures and related control activities to reasonably assure that buildings funded with Non-MILCON funding below $750,000 were consistently recorded in iNFADS and, if the cost exceeded the capitalization threshold, were reported as assets in the Navy's financial statements.
In 2015, the Navy began to develop both the process and system changes required to track construction-in-progress costs for the Navy's Non-MILCON projects with costs greater than $750,000, so that the cost of the buildings associated with these projects would be properly recorded in iNFADS. In March 2017, NAVFAC updated its BMS process document with the steps for Non-MILCON buildings with costs greater than $750,000 and adopted the new guidance in May 2017. According to NAVFAC officials, the Navy has already determined that an equivalent detailed process is needed for Navy Non-MILCON buildings costing less than $750,000 to reasonably assure that the RPAOs are aware of these projects. The RPAOs are not involved in project authorization or project funding and otherwise would be unaware of these Non-MILCON projects. As a result, the RPAOs may not know of Non-MILCON buildings acquired or constructed with operations and maintenance or other Non-MILCON funding under $750,000 and accordingly do not have documentation to record the buildings' acquisitions in iNFADS. A BMS process document that addresses Non-MILCON projects costing under $750,000 is being developed. However, according to a Navy official, a completion date has not been set for finalizing this document. Until effective procedures are implemented, Navy buildings constructed with Non-MILCON funding costing less than $750,000 may not be timely recorded in iNFADS, which would cause iNFADS to have incomplete information. If the buildings are not recorded in iNFADS, the buildings will not be reported as assets in the financial statements, as required, when the cost of the building meets or exceeds the Navy's capitalization threshold of $250,000.
The Navy Lacks Written Procedures and Related Control Activities to Reasonably Assure Proper Financial Reporting for Buildings Coded as Nonfunctional NAVFAC did not have written procedures requiring buildings coded as nonfunctional in iNFADS to be excluded when accumulating data from iNFADS for financial reporting purposes, nor did it have related control activities to provide reasonable assurance that such buildings were excluded. As a result, the Navy incorrectly included the amounts associated with buildings coded as nonfunctional when accumulating iNFADS information for financial reporting purposes. Specifically, based on our aggregation of iNFADS data, the Navy erroneously reported 189 buildings coded as nonfunctional, amounting to $411 million in gross value, $403 million in accumulated depreciation, and $8 million in net book value, as assets in the financial statements as of September 30, 2016. For example, one building coded as nonfunctional that we observed during our site visits was constructed in 1909, with a plant replacement value (PRV) of over $5 million in iNFADS. The building has been vacant and unusable since September 11, 2002, but was included as an asset in the financial statements for fiscal year 2016. According to federal accounting standards, fully impaired assets, such as nonfunctional buildings, should not be included in an entity's financial statements and related notes. Specifically, SFFAS No. 6, Accounting for Property, Plant, and Equipment, states that G-PP&E, which includes real property, shall be removed from the accounts along with the associated accumulated depreciation if the asset no longer provides service to the operations of the entity. Moreover, SFFAS No. 44, Accounting for Impairment of General Property, Plant, and Equipment Remaining in Use, reiterates the requirement of SFFAS No.
6 by stating that fully impaired assets should be removed from the G-PP&E accounts along with the associated accumulated depreciation if, prior to disposal, the asset no longer provides service in the operations of the entity. Navy officials confirmed that they do not have written procedures or related control activities requiring buildings coded as nonfunctional in iNFADS to be excluded when accumulating iNFADS data for financial statement reporting purposes. As a result, for fiscal year 2016, the Navy erroneously included buildings coded as nonfunctional as assets on its financial statements. Navy officials agreed that nonfunctional buildings meet the impairment definition of SFFAS No. 6 and No. 44, as these buildings no longer provide service to Navy operations, and therefore should be removed from the G-PP&E accounts. For fiscal year 2017, Navy officials stated that nonfunctional buildings were reclassified from the asset class that includes buildings to the “Other” asset class. However, both asset classes were reported as G-PP&E on the balance sheet, and as a result, the nonfunctional buildings were again reported as assets in the G-PP&E line item in the Navy’s financial statements. The Navy Lacks Written Procedures and Related Control Activities to Reasonably Assure Proper Financial Reporting for Buildings That Meet or Exceed DOD’s Capitalization Threshold NAVFAC officials confirmed that they did not have written procedures and related control activities to reasonably assure that buildings recorded in iNFADS that met or exceeded DOD’s established capitalization threshold are properly included as assets in Navy’s financial statements. For financial reporting, the Navy’s policy is to capitalize buildings based on the established capitalization threshold in effect when each building was placed in service. 
According to Navy officials, buildings placed in service from October 1, 2007, through September 30, 2013, should have been included as assets in the financial statements if the buildings were valued at or above $20,000, the capitalization threshold that was in place during that period. However, for buildings placed in service during this period, the Navy continued to use the previous capitalization threshold of $100,000 rather than the $20,000 threshold. An Office of the Secretary of Defense memorandum dated September 20, 2013, directed the services to increase the capitalization threshold to $250,000 for assets acquired and placed in service on or after October 1, 2013, and the Navy implemented this change. Further, the Navy incorrectly reported in the notes to its fiscal year 2016 and 2017 financial statements that the $20,000 capitalization threshold was used for real property. Navy officials stated that when DOD's capitalization threshold was changed to $20,000, the Navy did not adopt the reduced threshold pending an evaluation of changes needed to iNFADS and the development of procedures to implement the lower threshold. Because the Navy did not adopt DOD's $20,000 capitalization threshold and instead continued to use the $100,000 threshold, buildings placed in service in fiscal years 2008 through 2013 with a value at or above $20,000 but less than $100,000 were not reported as assets in the Navy's financial statements as of September 30, 2016, and in prior years. Navy officials could not quantify the effect on the Navy's financial statements of using the $100,000 capitalization threshold instead of the $20,000 threshold for fiscal years 2008 through 2013. Additionally, the Navy's failure to adopt DOD's $20,000 capitalization threshold resulted in inconsistent reporting in DOD's consolidated financial statements.
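The capitalization rule described above reduces to a simple date-based lookup. The following sketch is purely illustrative (it is not Navy or DOD code); the dates and dollar thresholds come from this report, while the function names are our own:

```python
from datetime import date

def capitalization_threshold(placed_in_service: date) -> int:
    """DOD capitalization threshold in effect on a building's placed in
    service date, per the thresholds described in this report
    (illustrative only, not an official DOD implementation)."""
    if placed_in_service >= date(2013, 10, 1):
        return 250_000   # per the September 20, 2013, OSD memorandum
    if placed_in_service >= date(2007, 10, 1):
        return 20_000    # threshold in effect October 1, 2007-September 30, 2013
    return 100_000       # previous threshold

def should_capitalize(cost: int, placed_in_service: date) -> bool:
    """A building is reported as an asset if its cost meets or exceeds
    the threshold in effect when it was placed in service."""
    return cost >= capitalization_threshold(placed_in_service)

# A hypothetical $50,000 building placed in service in 2010 met the
# $20,000 threshold then in effect:
should_capitalize(50_000, date(2010, 6, 1))
```

Under this rule, a $50,000 building placed in service in 2010 should have been capitalized against the $20,000 threshold then in effect; the Navy's continued use of the $100,000 threshold is what excluded such buildings from its financial statements.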
Challenges the Navy Faces in Complying with Federal Accounting Standards for Valuing Its Buildings The Navy faces several challenges in valuing its buildings in accordance with federal accounting standards, including (1) finalizing documentation of actual cost information for buildings that are acquired and placed in service after the Navy’s opening balances have been established based on alternative valuation methods permitted by SFFAS No. 50; (2) capturing and recording costs of improvements that should be reported; (3) consistently completing asset evaluations for each building every 5 years as required by DOD Instruction 4165.14 to help ensure that each building’s information in iNFADS is correct; and (4) determining placed in service dates for previously unrecorded buildings that are subsequently discovered/identified through physical inventories/asset evaluations. Navy officials are aware of these challenges and have various efforts under way to address them. Effective implementation of these efforts is crucial to help address these challenges. Finalizing the DD-1354 for Buildings Being Valued at Cost As we have previously reported, each completed military construction project includes the DD-1354, Transfer and Acceptance of DOD Real Property, to formally transfer ownership from the constructing entity to the acquiring entity. The final version of the DD-1354 documents the final total cost of the project in iNFADS, the source of real property information for financial reporting. Navy officials acknowledge that significant delays may occur in getting to the final version of the DD-1354, which occurs after all costs are determined. If there are issues such as cost overruns or contract disputes, the delays in completing the final version of the form can be substantial. 
The Navy considers these substantial delays in getting to the final version of the DD-1354 to be an obstacle to timely documenting the final costs of buildings that are acquired and placed in service after the Navy's opening balances have been established, based on alternative valuation methods permitted by SFFAS No. 50. During our site visits, when we tested 79 buildings, we identified 13 buildings, either constructed or with capital improvements made from 2012 through 2016, for which a final DD-1354 had not yet been completed. According to several RPAOs we interviewed, getting to the final version of the DD-1354 is a complicated process that requires coordination among multiple responsible parties and units to determine all costs associated with the construction. For example, a complex project that involves the construction and demolition of multiple buildings makes allocating the construction costs among the project's buildings considerably challenging. Recording Capital Improvements According to SFFAS No. 6, costs associated with capital improvements—those that extend the useful life of a building or improve its capacity—are to be recorded in the accountable real property system if the actual cost exceeds the capitalization threshold. Navy officials reported that one obstacle to capitalizing the costs of improvements is determining the actual costs associated with the projects for capital improvements that are made after the opening balances are established using alternative valuation methods. The Navy has developed and is testing its methodology to properly account for capital improvements to buildings. This methodology uses an automated link from the Facilities Information System (which has the construction-in-progress account) to iNFADS. The success of this methodology will be critical for capturing capital improvements for buildings.
The inability to account for the total costs associated with capital improvements to buildings after the opening balances have been established using alternative valuation methods would result in the undervaluing of the total actual cost and annual depreciation expense associated with the buildings. Once PRV is used to establish the opening balance for buildings, the Navy must accurately record capital improvements in iNFADS in order to appropriately value the buildings and record the correct depreciation expense. Performing Timely Asset Evaluations We observed that the Navy has taken steps to improve the quality of its asset evaluations by completing and maintaining supporting documentation. However, we found that the Navy has not consistently completed asset evaluations for each building every 5 years as required by DOD policy. An asset evaluation is a key Navy control to help ensure that the information recorded in iNFADS is accurate. While the Navy issued a revised BMS process document formalizing asset evaluations procedures, these evaluations have not been performed every 5 years as required. Specifically, in a June 30, 2017, Navy analysis, the Navy determined that while an asset evaluation is required to be performed every 5 years, the asset evaluations had not been done for more than 5 years for 17.4 percent of real property, including buildings. When asset evaluations are not done every 5 years for each building, there is an increased risk that information in iNFADS may not be accurate. In addition, as a part of asset evaluations, Navy personnel verify key information, including the square footage of buildings that is used for the PRV calculation. The Navy has efforts under way to perform asset evaluations for those buildings for which these evaluations had not been completed in a 5-year period, including using contractors to help complete the asset evaluations. 
Determining Placed in Service Dates for Buildings Found by Inventory As stated in DOD's Financial Management Regulation, real property assets and capital improvements to these assets are to be capitalized as of the date each asset was placed in service. Navy officials occasionally identify existing buildings that have not been recorded in iNFADS; these are referred to as buildings found by inventory. These buildings are often identified through NAVFAC's asset evaluations and periodic virtual inventories. For these buildings, the placed in service dates may not be known. While DOD and the Navy have subsequently developed procedures for determining the placed in service dates for buildings found by inventory, for some Navy buildings, the placed in service date recorded in iNFADS was the date the building was found, rather than the actual placed in service date. According to previous guidance, if a placed in service date could not be identified through the due diligence process, then the building was recorded as placed in service as of the date it was found. The Navy's BMS process document for real property found by inventory, dated October 25, 2016, stated that each building found by inventory is to be recorded with an estimated placed in service date determined using the criteria provided in DOD's February 2015 guidance. We were told that until December 2016, any building found by inventory was recorded with a placed in service date of the day the building was found. The Navy's use of the date the building was found by inventory as the placed in service date can substantially affect the information in iNFADS. For example, one of the buildings in our nongeneralizable sample was an old, abandoned maintenance shed. However, based on the iNFADS property record, the building appeared to be a relatively new building based on the recorded placed in service date of August 16, 2016, the date it was found by inventory (see fig. 3).
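The financial effect of recording the found-by-inventory date instead of the actual placed in service date can be illustrated with a simple straight-line depreciation sketch. The cost, useful life, and dates below are hypothetical; this is not the Navy's actual calculation:

```python
def accumulated_depreciation(cost, placed_in_service_year, as_of_year,
                             useful_life_years=40, salvage_value=0):
    """Straight-line accumulated depreciation through as_of_year,
    capped at the depreciable base (cost minus salvage value)."""
    years_in_service = max(0, as_of_year - placed_in_service_year)
    annual_expense = (cost - salvage_value) / useful_life_years
    return min(years_in_service * annual_expense, cost - salvage_value)

# Hypothetical shed costing $200,000 with a 40-year useful life,
# measured as of fiscal year 2016:
actual = accumulated_depreciation(200_000, 1970, 2016)  # actual (older) placed in service date
found = accumulated_depreciation(200_000, 2016, 2016)   # date the building was found by inventory
```

With the older 1970 date, the shed would be fully depreciated ($200,000 of accumulated depreciation and a $0 net book value), whereas the found-by-inventory date of 2016 yields no accumulated depreciation at all, overstating the building's net book value in the financial statements.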
As a result, the building was recorded in iNFADS with a placed in service date of August 16, 2016, and its accumulated depreciation is therefore less than it would be if the building's actual, older placed in service date had been recorded. The complete, timely, and accurate recording of placed in service date information helps ensure reliable and accurate reporting of real property information in DOD's financial statements. Navy officials are aware of the challenges discussed above and have various efforts under way to address them. Effective implementation of these efforts is crucial to help address these challenges. Conclusions The Navy's inability to accurately account for real property assets, specifically its buildings, continues to be a material weakness reported by independent auditors. Inadequate procedures and internal control deficiencies prevent the Navy from accurately recording and reporting its buildings and knowing how many buildings it actually owns. Some buildings recorded in the Navy's accountable real property system, iNFADS, do not exist. Similarly, the Navy does not have adequate procedures and related controls to reasonably assure that all Non-MILCON buildings and capital improvements costing less than $750,000 are recorded in iNFADS. Additionally, the Navy erroneously reported nonfunctional buildings as assets in its financial statements and excluded certain buildings that met or exceeded DOD's capitalization threshold from being reported as assets in its financial statements. As a result of these deficiencies, the Navy does not have adequate information to support reliable reporting of real property in its annual financial statements, and DOD, Congress, and others do not have reliable, useful, and timely information for decision making. Recommendations for Executive Action We are making the following four recommendations to the Navy.
The Commander of NAVFAC should develop and implement procedures and related control activities for real property disposed of by demolition to provide reasonable assurance that the RPAOs timely receive a signed demolition approval document and disposal form, so that demolished buildings are recorded as disposals in iNFADS and removed at the end of the fiscal year. (Recommendation 1) The Commander of NAVFAC should finalize and implement written procedures and related control activities to reasonably assure that all buildings costing less than $750,000 and funded with Non-MILCON funding are recorded in the Navy’s iNFADS and therefore included as assets in the financial statements if they meet or exceed the Navy’s capitalization threshold. (Recommendation 2) The Commander of NAVFAC should develop and implement written procedures and related control activities to reasonably assure that buildings coded as nonfunctional in iNFADS are excluded for financial statement reporting purposes. (Recommendation 3) The Commander of NAVFAC should develop and implement written procedures and related control activities related to DOD’s capitalization thresholds and outline the specific information to be accumulated from iNFADS to reasonably assure that real property assets are properly reported for financial statement reporting purposes. (Recommendation 4) Agency Comments We provided a draft of this report to the Navy for comment. In its comments, reproduced in appendix II, the Navy concurred with our four recommendations. 
We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller)/Chief Financial Officer, the Deputy Chief Financial Officer, the Office of the Assistant Secretary of Defense (Energy, Installations, and Environment), the Assistant Secretary of the Navy (Energy, Installations and Environment), the Assistant Secretary of the Navy (Financial Management & Comptroller), the Director of the Office of Management and Budget, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1873 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This engagement was initiated in connection with the statutory requirement for GAO to audit the U.S. government’s consolidated financial statements. The focus of this engagement was the United States Navy’s (Navy) real property, specifically buildings, because the Department of the Navy was the first military department to initially assert real property audit readiness for existence and completeness. Our objectives were to (1) determine the extent to which the Navy had internal control deficiencies, if any, that may impair its ability to assert that its buildings, as reported in its financial statements, exist and that the information about the buildings is complete and adequately supported by property records and (2) identify the challenges, if any, that the Navy faces in valuing its buildings in accordance with federal accounting standards. 
To address our first objective, we interviewed Department of Defense (DOD) and Navy officials and reviewed relevant documentation, including the Naval Facilities Engineering Command’s (NAVFAC) Business Management System (BMS) process documents, which are similar to desktop procedures, to identify control activities over buildings. We reviewed the results from prior real property audit readiness testing conducted by a contractor that the Navy engaged to help it achieve audit readiness for its real property. We performed data analyses of buildings in the Navy’s accountable real property system, the internet Navy Facility Assets Data Store (iNFADS) as of September 30, 2016. To assess the reliability of data we used, we reviewed relevant Navy documentation, interviewed knowledgeable officials, reviewed policies and procedures regarding collecting and maintaining the data, performed data analyses to look for logical inconsistencies, and traced a nongeneralizable sample of buildings to supporting documents. We concluded that the data elements we used from iNFADS were sufficiently reliable for the purposes of selecting a nongeneralizable sample of buildings to test. We selected the Norfolk and San Diego geographic areas for site visits because of the numerous bases in each area and the proximity of 5 installations to one another in each of the areas. We analyzed data from the iNFADS database as of September 30, 2016, to select buildings that fit our selection criteria for our nongeneralizable sample of buildings for book-to-floor testing from these two geographic areas. 
These selection criteria included age of the buildings (both older and newer buildings); square footage of the buildings, including small buildings (such as sentry houses) and large buildings (such as training facilities and barracks); cost per square foot of the buildings, including lower cost (such as warehouses) and higher cost (sentry houses with sophisticated electronics); use of the buildings, to include a variety of uses (such as electrical substations, training facilities, and offices); and operational status code of the buildings, including active and nonfunctional. We conducted site visits in Norfolk and San Diego to interview real property accountable officers (RPAO), observe buildings, and review the available supporting documents for the sample buildings. We tested 40 buildings book to floor by visiting these buildings at 10 Navy installations across two geographic areas. During our site visits, we also selected a nongeneralizable sample of a total of 39 buildings on Navy installations to be tested floor to book—19 from the 5 Norfolk-area installations and 20 from the 5 San Diego-area installations. We met with the RPAOs at each of the 10 installations and tested by observation whether the 40 buildings selected for book-to-floor testing existed. In addition to testing for existence, we compared the descriptions of the buildings in iNFADS with the buildings that we observed. For example, if the placed in service date in iNFADS was recent, we would observe whether it was a newer building. We selected a nongeneralizable sample of buildings for floor-to-book testing based on proximity to the buildings we had selected for book-to-floor testing. For the 39 buildings that we tested floor to book, we reviewed available supporting documents. We also reviewed a Navy Office of Financial Operations white paper on the risk and potential amount of material misstatement of accumulated depreciation on the Navy's general fund consolidated balance sheet.
This white paper presented the results of a statistical sample for which 15 selected real property assets were excluded from testing because the assets no longer existed. Two social science specialists with expertise in research design and statistics reviewed the methodology and sampling used in this study and found them to be sufficient for the purposes of estimating the proportion of Navy real property assets reported as existing in iNFADS that did not exist as of May 31, 2017. We used the sampling information in the study to create a confidence interval around the estimate of the proportion of buildings at the 95 percent confidence level. To address our second objective, we reviewed federal accounting standards, including Statement of Federal Financial Accounting Standard (SFFAS) No. 50, and the Navy’s documents for recording assets into iNFADS. We also interviewed agency officials responsible for financial reporting and real property management, including the RPAOs at the installations we visited, to identify the challenges the Navy faces in recording buildings at actual cost once the opening balances have been established according to SFFAS No. 50. While our audit objectives focused on certain control activities related to (1) the existence and completeness of the Navy’s buildings as reported in its financial statements and the completeness and adequacy of supporting property records for those buildings and (2) the valuation of the Navy’s buildings in accordance with federal accounting standards, we did not evaluate all control activities and other components of internal control. If we had done so, additional deficiencies may or may not have been identified that could impair the effectiveness of the control activities evaluated as part of this audit. We conducted this performance audit from September 2016 to May 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of the Navy Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following individuals made key contributions to this report: Paul Kinney (Assistant Director), Marcia Carlsen, Dennis Clarke, Francine DelVecchio, Maxine Hattery, Jason Kelly, Jared Minsk, Lisa Motley, Robert Sharpe, Sandra Silzer, and Shana Wallace.
Why GAO Did This Study This engagement was initiated in connection with the statutory requirement for GAO to audit the U.S. government's consolidated financial statements. The 2018 National Defense Authorization Act requires that the Secretary of Defense ensure that a full audit is performed on the financial statements of DOD for each fiscal year and that the results be submitted to Congress no later than March 31 of the following fiscal year. The Navy was the first military department to assert real property audit readiness related to DOD's Financial Improvement and Audit Readiness effort. For this report, GAO's objectives were to (1) determine the extent to which the Navy had internal control deficiencies, if any, that may impair its ability to assert that its buildings, as reported in its financial statements, exist and that the information about these buildings is complete and adequately supported by property records and (2) identify the challenges, if any, that Navy faces in valuing its buildings in accordance with federal accounting standards. GAO reviewed the Navy's policies and procedures for control activities over its buildings, performed data analyses, and tested a nongeneralizable sample of buildings. GAO also discussed with Navy officials the challenges in complying with federal accounting standards for valuing its buildings. What GAO Found Although the United States Navy (Navy) has taken actions to become audit ready for its real property, GAO identified internal control deficiencies that impaired the Navy's ability to assert that (1) buildings recorded in the internet Navy Facility Assets Data Store (iNFADS), the Navy's real property system, and reported as assets in its financial statements existed and (2) all of the Navy's buildings were recorded in iNFADS and correctly reported as assets in the Navy's financial statements. 
As shown in the figure below, the effects of these internal control deficiencies contributed to the Navy (1) continuing to maintain records in iNFADS for buildings that had been demolished, sometimes many years ago, and include these buildings as assets in its financial statements; (2) excluding some of the buildings it owns from being recorded in iNFADS and reported as assets in its financial statements; (3) erroneously reporting nonfunctional buildings as assets in its financial statements; and (4) excluding certain buildings from being reported as assets in its financial statements that met or exceeded the Department of Defense's (DOD) capitalization threshold. The Navy has various efforts under way to address challenges in valuing its buildings for financial reporting in accordance with federal accounting standards. Navy officials have acknowledged that significant delays can sometimes occur in the Navy being able to complete supporting documentation of the final costs to properly report buildings in its financial statements. Additionally, implementation of the Navy's new methodology to properly account for capital improvements will be critical for capturing accurate costs for buildings. Furthermore, the Navy has not consistently completed a physical inventory (asset evaluation) for each building every 5 years as required by DOD policy. These asset evaluations are an important control to help ensure that the information recorded for buildings in iNFADS is accurate. Finally, the Navy also faces a challenge in determining the placed in service dates for those buildings found through inventory procedures. The Navy's use of the date the building was found rather than the estimated date the building was placed in service can substantially affect the accuracy of the information in the Navy's systems and financial statements. Navy officials are aware of these challenges and have various efforts under way to address them. 
Effective implementation of these efforts is crucial to help address these challenges.

What GAO Recommends

GAO is making four recommendations to the Navy to improve internal controls for its buildings by implementing needed written procedures and control activities. The Navy concurred with these recommendations.
Background

A high-quality, reliable cost estimate is a key tool for budgeting, planning, and managing the 2020 Census. According to OMB, programs must maintain current and well-documented estimates of program costs, and these estimates must encompass the full life-cycle of the program. Among other things, OMB states that generating reliable program cost estimates is a critical function necessary to support OMB's capital programming process. Without this capability, agencies are at risk of experiencing program cost overruns, missed deadlines, and performance shortfalls. A reliable cost estimate is critical to the success of any federal government program. With the information from reliable estimates, managers can make informed investment decisions, allocate program resources, measure program progress, proactively correct course when warranted, and ensure overall accountability for results. To be considered reliable, a cost estimate must meet the criteria for each of the four characteristics outlined in our Cost Estimating and Assessment Guide. According to our analysis, a cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are substantially or fully met. If any of the characteristics are not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable. Those characteristics are:

Well-documented: An estimate is thoroughly documented, including source data and significance, clearly detailed calculations and results, and explanations of why particular methods and references were chosen. Data can be traced to their source documents.

Accurate: An estimate is unbiased, the work is not overly conservative or overly optimistic, and is based on an assessment of most likely costs. Few, if any, mathematical mistakes are present.
Credible: Any limitations of the analysis because of uncertainty or bias surrounding data or assumptions are discussed. Major assumptions are varied, and other outcomes are recomputed, to determine how sensitive they are to changes in the assumptions. Risk and uncertainty analysis is performed to determine the level of risk associated with the estimate. The estimate's results are cross-checked, and an independent cost estimate (ICE) is conducted to see whether other estimation methods produce similar results.

Comprehensive: An estimate has enough detail to ensure that cost elements are neither omitted nor double counted. All cost-influencing ground rules and assumptions are detailed in the estimate's documentation.

Past GAO Work on Census Cost Estimation

Meeting best practices outlined in our Cost Estimating and Assessment Guide for a reliable cost estimate has been a long-standing challenge for the Bureau. In 2008 we reported that the 2010 Census cost estimate was not reliable because it lacked documentation and was not comprehensive, accurate, or credible. For example, in our 2008 report on the Bureau's cost estimation process, Bureau officials were unable to provide documentation that supported the assumptions for the initial 2001 life-cycle cost estimate as well as the updates. Consequently, we recommended that the Bureau establish guidance, policies, and procedures for estimating costs that would meet best practices criteria. The Bureau agreed with the recommendation and said at the time that it already had efforts underway to improve its future cost estimation methods and systems. Moreover, weaknesses in the life-cycle cost estimate were one reason we designated the 2010 Census a GAO High-Risk Area in 2008. In 2012 we reported that, while the Bureau was taking steps to strengthen its life-cycle cost estimates, it had not yet established guidance for developing cost estimates.
We recommended that the Bureau finalize its guidance, policies, and procedures for cost estimation in accordance with best practices. The Bureau agreed with the overall theme of the report but did not comment on the recommendation. During this review we found that the Bureau took steps to address this recommendation, which is discussed later in this report. Such guidance can help to institutionalize best practices and ensure consistent processes and operations for producing reliable estimates. In a 2016 report we found that the October 2015 version of the Bureau's life-cycle cost estimate for the 2020 Census was not reliable. Overall, we reported that the 2020 Census life-cycle cost estimate partially met two of the characteristics of a reliable cost estimate (comprehensive and accurate) and minimally met the other two (well-documented and credible). We recommended that the Bureau take specific steps to ensure its cost estimate meets the characteristics of a high-quality estimate. The Bureau agreed with this recommendation and took steps to improve the reliability of its cost estimate, which we focus on later in this report. Consequently, an unreliable life-cycle cost estimate is one of the reasons we designated the 2020 Census a GAO High-Risk Area in 2017.

Development of the 2020 Cost Estimate

In October 2015, the Bureau estimated the cost of the 2020 Census to be $12.3 billion. According to the Bureau, the October 2015 version was the Bureau's first attempt to model the life-cycle cost of its planned 2020 Census, in contrast to its earlier 2011 estimate, which the Bureau said was intended to produce an approximation of potential savings and to begin developing the methodology for producing decennial life-cycle cost estimates covering all phases of the decennial life cycle. To help control costs while maintaining accuracy, the Bureau introduced significant changes to how it conducts the decennial census in 2020.
Its planned innovations include reengineering how it builds its address list, improving self-response by encouraging the use of the Internet and telephone, using administrative records to reduce field work, and reengineering field operations using technology to reduce manual effort and improve productivity. In contrast to the estimated $12.3 billion in 2015, the 2020 Census would cost $17.8 billion in constant 2020 dollars if the Bureau repeated the 2010 Census design and methods, according to the Bureau’s estimates. In October 2017, Commerce announced that it had updated the October 2015 life-cycle cost estimate, projecting the life-cycle cost of the 2020 Census to be $15.6 billion, an increase of over $3 billion (27 percent) over its 2015 estimate. (See figure 1.) In developing the 2017 version of the cost estimate, Bureau cost estimators identified cost inputs, their ranges for possible outcomes, and overall cost estimating relationships (i.e., logical or mathematical formulas, or both). To identify cost inputs and the ranges of potential outcomes, the Bureau worked with subject matter experts and used historical data to support assumptions and generate inputs. The Bureau’s cost estimation team used a software tool to generate the cost estimate. Because cost estimates predict future program costs, uncertainty is always associated with them. For example, data from the past (such as fuel prices) may not always be relevant in the future. Risk and uncertainty refer to the fact that because a cost estimate is a forecast, there is always a chance that the actual cost will differ from the estimate. One way to determine whether a program is realistically budgeted is to perform an uncertainty analysis, so that the probability associated with achieving its point estimate can be determined, usually relying on simulations such as those of Monte Carlo methods. This can be particularly useful in portraying the uncertainty implications of various cost estimates. 
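Such an uncertainty analysis can be illustrated with a minimal Monte Carlo sketch. The model below is purely illustrative and is not the Bureau's model: the household count, cost per field case, fixed costs, and the triangular range for the self-response rate are all invented for demonstration.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Purely illustrative cost model (NOT the Bureau's model): total cost is
# fixed costs plus in-field follow-up for nonresponding households.
# Every parameter below is a hypothetical value for demonstration.
HOUSEHOLDS = 140_000_000        # hypothetical housing-unit count
COST_PER_FIELD_CASE = 30.0      # hypothetical dollars per follow-up case
FIXED_COSTS = 9.0e9             # hypothetical non-field costs, dollars

def simulate_total_cost() -> float:
    # Uncertain self-response rate drawn from a triangular distribution
    # (low, high, mode) -- the kind of input range an estimator might use.
    rate = random.triangular(0.55, 0.66, 0.605)
    field_cases = HOUSEHOLDS * (1 - rate)
    return FIXED_COSTS + field_cases * COST_PER_FIELD_CASE

# Monte Carlo: run many scenarios, then read percentiles to estimate the
# probability that a given funding level would be exceeded.
trials = sorted(simulate_total_cost() for _ in range(10_000))
p50 = trials[len(trials) // 2]          # median cost
p80 = trials[int(len(trials) * 0.8)]    # cost covered with ~80% confidence
print(f"median cost:          ${p50 / 1e9:.2f}B")
print(f"80th-percentile cost: ${p80 / 1e9:.2f}B")
```

The gap between the median and a higher percentile is one way an estimator can size a risk reserve: budgeting at the 80th percentile rather than the median buys down the chance of a cost overrun.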
Consistent with cost estimation practices outlined in our Cost Estimating and Assessment Guide, the estimate was compared with two independent cost estimates (ICE), developed by Commerce's Office of Acquisition Management (OAM) and the Bureau's Office of Cost Estimation, Analysis, and Assessment. The offices producing the ICEs and the cost estimate team worked together to examine the process each used, an effort known as the reconciliation process. Through this reconciliation, the Bureau identified areas where discrepancies existed and elements that could require additional review and possible improvement. According to Bureau documentation, the estimate will be updated as the program meets milestones and to reflect changes in technical or program assumptions. Figure 2 details the Bureau's cost estimation process. OAM was involved extensively in the development of the 2017 estimate, an increased involvement compared to 2015, according to Bureau officials. OAM participated in regular review meetings throughout the development of the estimate and also developed an independent cost estimate, as shown in the figure below. End-to-end system testing activities for the 2020 Census are currently underway in Providence, Rhode Island. According to the Bureau, information collected from the test, such as overall response rates and the use of administrative records to inform census records, will inform future versions of the life-cycle cost estimate. Some updates from the test will be incorporated into the next cost estimate, which will be available in the first quarter of the coming fiscal year.

Census Bureau Has Made Progress but Has Not Taken All the Steps Needed to Ensure the Reliability of 2020 Cost Estimate

Since our June 2016 report, in which we reviewed the Bureau's 2015 version of the cost estimate, the Bureau has made significant progress.
For example, the Bureau has put into place a work breakdown structure (WBS) that defines the work, products, activities, and resources necessary to accomplish the 2020 Census and is standardized for use in budget planning, operational planning, and cost estimation. However, the Bureau's October 2017 cost estimate for the 2020 Census does not fully reflect characteristics of a high-quality estimate as described in our Cost Estimating and Assessment Guide and cannot be considered reliable. Our Cost Estimating and Assessment Guide describes best practices for developing reliable cost estimates. For our reporting needs, we collapsed these best practices into four characteristics for sound cost estimating (comprehensive, well-documented, accurate, and credible) and identified specific best practices for each characteristic. To be considered reliable, an estimate must meet or substantially meet each characteristic. Our review found the Bureau met or substantially met three out of the four characteristics of a reliable cost estimate, while it partially met one characteristic: well-documented. When compared to the October 2015 estimate, the 2017 estimate shows considerable improvement. (See figure 3 below.)

Well-Documented

Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices. The Bureau only partially met the criteria for well-documented, as set forth in our Cost Estimating and Assessment Guide. A cost estimate that does not fully meet the criteria for well-documented cannot be used by management to make informed and effective implementation decisions. The well-documented characteristic comprises five best practices. The Bureau substantially met two out of five best practices (as shown in figure 4).
First, the estimate describes in sufficient detail the calculations performed and the estimating methodology used to derive each element's cost, and the cost estimate had been reviewed by management. Since cost estimates can inform key decisions and budget requests, it is vital that management review and understand how the estimate was developed, including risks associated with the underlying data and methods. The cost estimate only partially met three best practices for the characteristic of being well-documented. In general, some documentation was missing, inconsistent, or difficult to understand. First, we found that source data did not always support the information described in the basis of estimate document or could not be found in the files provided for two of the Bureau's largest field operations: Address Canvassing and Non-Response Follow-Up (NRFU). For example, the cost estimate documentation referred to actual data from the 2010 Census and information obtained from experts as sources for address canvassing rework rates. However, the folder of source documents provided as support for the basis of estimate did not include this information. Next, in several cases, we could not replicate calculations, such as for mileage costs, using the description provided. Lastly, we found that some of the cost elements did not trace clearly to supporting spreadsheets and assumption documents. Failure to document an estimate in enough detail makes it more difficult to replicate calculations or to detect possible errors in the estimate; reduces transparency of the estimation process; and can undermine the ability to use the information to improve future cost estimates or even to reconcile the estimate with another independent cost estimate. The Bureau told us it would continue to make improvements to ensure the estimate is well-documented. For the estimate to be considered well-documented, the Bureau will need to address these issues.
Accurate

An accurate cost estimate supports measurement of program progress by providing unbiased and correct data, which can help management ensure accountability for scheduled results. We found the Bureau's cost estimate substantially met the criteria for accuracy. As shown in figure 5, and in line with best practices outlined in our Cost Estimating and Assessment Guide, the estimate was not overly optimistic; appeared to be free of errors; was based on historical data or input from subject matter experts; and, according to Bureau officials, is updated regularly as information becomes available. The Bureau can enhance the accuracy of its estimate by increasing the level of detail included in the documentation, such as detail on specific inflation indexes used, and by monitoring actual costs against estimates. We identified areas for improvement, which, according to Bureau officials, will be addressed as part of its ongoing efforts. For example, while the basis of estimate document describes different inflation indexes, it was not clear exactly which indexes were applied to the various cost elements in the estimate. Also, evidence of how variances between estimated costs and actual expenses would be tracked over time was not available at the time of our analysis. Tools to track variance enable management to measure progress against planned outcomes. Bureau officials stated that they already have systems in place that can be adapted for tracking estimated and actual costs.

Credible

All estimates include a certain amount of informed judgment about the future. Assumptions made at the start of a program can turn out to be inaccurate. Credible cost estimates identify limitations due to uncertainty or bias surrounding data or assumptions, and control for these uncertainties by identifying and quantifying cost elements that represent the most risk. We found that the Bureau's cost estimate substantially met the criteria for credible, as shown in figure 6 below.
The Bureau’s cost estimate clearly identifies risks and uncertainties, and describes approaches taken to mitigate them. In line with best practices outlined in our Cost Estimating and Assessment Guide, the Bureau did the following: Sensitivity analysis. The Bureau conducted sensitivity analysis to identify possible changes to estimated costs for the 2020 Census based on varying major assumptions, parameters, and data inputs. For example, the Bureau calculated the likely cost implications for a range of possible response rates to identify a range of projected costs and to calculate appropriate reserves for risk. Bureau officials stated that they also identified the estimate input parameters that contributed the most to estimate uncertainty. Risk and uncertainty analysis. A cost estimate is a forecast, and as such, there is always a chance that the actual cost will differ from the estimate. Uncertainty is the indefiniteness about the outcome of a situation. Uncertainty is assessed in cost estimate models to estimate the risk (or probability) that a specific funding level will be exceeded. We found the Bureau performed an uncertainty analysis on a portion of the estimate to determine whether estimated costs were realistic and to establish the probability of achieving projections outlined in the estimate. The Bureau used a combination of modeling based on Monte Carlo analysis and allocations of funding for risks. The Monte Carlo simulation was performed on a portion of the estimate to account for uncertainty around various operational parameters for which a range of outcomes was possible, including Internet response rates and the extent to which data collection issues might be resolved using administrative records. To account for the inherent uncertainty of assumptions included within the life-cycle cost estimate, the Bureau added funding to the cost estimate totaling approximately $292 million to account for risks based on the results of the Monte Carlo analysis. 
For other risks, such as acquisition lead time and the possibility of delays in information technology (IT) development, contingency funding was added to the estimate to reflect the potential cost of resolving these issues, through use of a backup system or an alternative approach. These are described as "special risks" in the Bureau's basis of estimate, and total approximately $171 million. Based on additional sensitivity analysis, the Bureau added approximately $965 million to the cost estimate to reflect discrete risks outlined in the risk register as well as those associated with (1) variability in self-response rates, (2) the effect of fluctuations in the size and wage rate of the temporary workforce on the cost of field operations, and (3) the potential need to reduce the enumerator-to-manager staffing ratio in case expected efficiencies in field operations are not realized. In addition to these provisions, the Secretary of Commerce added a contingency amount of about $1.2 billion to account for what the Bureau refers to as unknown-unknowns. Bureau documentation states that conducting a decennial census is an extremely complex, high-risk operation. In order to mitigate some of the risk, contingency funding must be available to initiate ad hoc activities necessary to overcome unforeseen issues. According to Bureau documentation, these include such risks as natural disasters or cyber-attacks. The Bureau provides a description of how the general risk contingency is calculated. However, this description does not clearly link calculated amounts to the risks themselves. In our June 2016 report we reported that the Bureau had not properly accounted for risk and recommended, in part, that the Bureau improve control over how risk and uncertainty are accounted for. We continue to believe the prior recommendation from our June 2016 report remains valid and should be addressed: that the Bureau properly account for risk in the 2020 Census cost estimate, among other things.
As such, risks need to be linked to the $1.2 billion general risk contingency fund. Independent cost estimate. According to best practices outlined in our Cost Estimating and Assessment Guide, an independent cost estimate should be performed to determine whether alternate estimate approaches produce similar results. The Bureau compared its estimate with two independent cost estimates, developed by Commerce's Office of Acquisition Management and the Bureau's Office of Cost Estimation and Assessment. As part of their process for finalizing the cost estimate, Bureau officials reconciled differences between the estimates in discussions with the two offices, resulting in more conservative assumptions by the Bureau around risk and uncertainty in both cases. Going forward, in addition to implementing our recommendation to properly account for risk, it will be important for the Bureau to integrate regular cross-checks of methodology into its cost estimation process, even though it substantially met the credibility characteristic. In our analysis we observed that no specific cross-checks of cost methodology were performed. According to the Bureau, cross-checks were not performed because the Bureau considered the independent cost estimates as overall cross-checks on the reliability of its methodology and did not conduct additional cross-checks. The main purpose of cross-checking is to determine whether alternative methods for specific cost elements within the cost estimate could produce similar results. An independent cost estimate, though important for the credibility of an estimate, does not fulfill the same function as a targeted cross-check of individual elements.

Comprehensive

Comprehensive estimates have enough detail to ensure that cost elements are neither omitted nor double-counted, all cost-influencing assumptions are detailed in the estimate's documentation, and a work breakdown structure is defined.
Our analysis of the 2017 cost estimate demonstrates improvement over the 2015 cost estimate, when the Bureau's cost estimate only partially met the criteria for comprehensive. We found the Bureau met or substantially met all four best practices for the comprehensive characteristic, as shown in figure 7. For example, all life-cycle costs are included in the estimate along with a complete description of the 2020 Census program and current schedule. We also found that the Bureau substantially met criteria for documenting cost influencing ground rules and assumptions. A standardized WBS (as detailed in table 1) with supporting dictionary outlines the major work of the program and describes the activities and deliverables at the project level where costs are tracked. In 2016, the Bureau's WBS did not contain sufficient detail, and we found significant differences in the presentation of the work between sources. In 2017, based on our review of Bureau documentation and interviews with Bureau officials, we found that the WBS is standardized and cost elements are presented in detail. The WBS is a necessary program management tool because it provides a basic framework for a variety of related tasks like estimating costs, developing schedules, identifying resources, determining where risks may occur, and providing the means for measuring program status. Although the Bureau's updated life-cycle cost estimate reflects three of the four characteristics of a reliable cost estimate, we are not making any new recommendations to the Bureau in this report. We continue to believe the prior recommendation, made in 2016, remains relevant: that the Secretary of Commerce ensure that the Bureau finalizes the steps needed to fully meet the characteristics of a high-quality estimate, most notably in the well-documented area.
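As a simple illustration of the role a WBS plays in cost estimation, the sketch below shows how leaf-level project costs roll up to category totals so that no element is omitted or double counted. The element names loosely echo categories discussed in this report, but the structure and dollar amounts are hypothetical.

```python
# Hypothetical, simplified WBS fragment: categories map to projects, and
# each project carries a cost in billions of dollars (amounts invented).
wbs = {
    "Response": {"Nonresponse Followup": 3.2, "Paper Data Capture": 0.4},
    "Frame": {"Address Canvassing": 0.5},
    "Census/Survey Engineering": {"Systems Integration": 0.9},
}

def rollup(tree) -> float:
    """Sum every leaf cost exactly once: no omission, no double counting."""
    if isinstance(tree, (int, float)):
        return tree
    return sum(rollup(child) for child in tree.values())

for category, projects in wbs.items():
    print(f"{category}: ${rollup(projects):.1f}B")
print(f"Total: ${rollup(wbs):.1f}B")
```

Because each dollar lives at exactly one leaf, the category totals and the program total are guaranteed to reconcile, which is the property that makes a standardized WBS useful for budget planning, cost tracking, and estimate reviews alike.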
The Bureau told us it has used our best practices for cost estimation to develop its cost estimate, and will focus on those best practices that require attention moving forward. Without a reliable cost estimate, the Bureau is limited in its ability to make informed decisions about program resources and to effectively measure progress against operational objectives.

Life-Cycle Cost Estimate Is Used by Management to Inform Decisions

OMB, in its guidance for preparing and executing agency budgets, states that credible cost estimates are vital for sound management decision making and for any program or capital project to succeed. A well-developed cost estimate serves as a tool for program development and oversight, supporting management to make informed decisions. According to the Bureau, the 2020 Census cost estimate is used as a management tool to guide decision making. Bureau officials stated the cost estimate is used to examine the cost impact of program changes. For example, the cost estimate served as the basis for the fiscal year 2019 funding request developed by the Bureau. The Bureau also said it used the 2020 Census life-cycle cost estimate to establish cost controls during budget formulation activities and to monitor spending levels for fiscal year 2019 activities. According to the Bureau, as detailed operational and implementation plans are defined, the 2020 Census life-cycle cost estimate has been and will continue to be used to support ongoing "what-if" analyses in determining the cost impacts of design decisions. Specifically, using the cost estimate to model the impact of changes on overall cost, the Bureau adjusted the scope of the Census Enterprise Data Collection and Processing (CEDCaP) operation.
Census Bureau Guidance to Develop Cost Estimates Meets Best Practices

The processes for developing and updating estimates are designed to inform management about program progress and the use of program resources, supporting cost-driven planning efforts and well-informed decision making. Our work has identified a number of best practices for use in developing guidance related to cost estimation and analysis that are the basis of effective program cost estimating and should result in reliable and valid cost estimates that management can use for making informed decisions. In 2012 we reported that the Bureau had not yet established guidance for developing cost estimates. We recommended that the Bureau establish guidance, policies, and procedures for developing cost estimates that would meet best practice criteria. The Bureau agreed with the theme of the report but did not specifically agree with the recommendation. Moreover, in June 2016, we also reported that the cost estimation team did not record how and why it changed assumptions that were provided to it and did not document the sources of all data it used. These changes to assumptions were not documented because the Bureau lacked written guidance and procedures for the cost estimation team to follow. During this review we found the Bureau has since established reliable guidance, processes, and policies for developing cost estimates and managing the cost estimation process. The following documents, shown in table 2, establish roles and responsibilities for oversight and approval of cost estimation processes, provide a detailed description of the steps taken to produce a high-quality cost estimate, and clarify the process for updating the cost estimate and associated documents over the life of a project.
The Decennial Census Program's Cost Estimate and Analysis Process, which provides a detailed description of the steps taken to produce a high-quality estimate, is reliable: it met the criteria for 8 steps and substantially met the criteria for 4 steps of the 12 best practice steps outlined in our Cost Estimating and Assessment Guide, as shown below in figure 8. To avoid cost overruns and to support high performance, it will be important for the Bureau to abide by its newly developed policies and guidance and continue to use the life-cycle cost estimate as a management tool.

Increased Costs Are Driven by an Assumed Decrease in Self-Response Rates and Increases in Contingency Funds and IT Cost Categories

The 2017 life-cycle cost estimate includes significantly higher costs than those included in the 2015 estimate. In 2015, the Bureau estimated that it could conduct the operation at a cost of $12.3 billion in constant 2020 dollars. The Bureau's latest cost estimate, announced in October 2017, reflects the same design, but at an expected cost of $15.6 billion. Figure 9 below shows the change in cost by WBS category for 2015 and 2017. The largest increases occurred in the Response, Managerial Contingency, and Census/Survey Engineering categories. Increased costs of $1.3 billion in the response category (costs related to collecting, maintaining, and processing survey response data) were in part due to reduced assumptions for self-response rates, leading to increases in the amount of data collected in the field, which is more costly to the Bureau. Contingency allocations increased overall from $1.35 billion in 2015 to $2.6 billion in 2017, as the Bureau gained a greater understanding of risks facing the 2020 Census. Increases of $838 million in the Census/Survey Engineering category were due mainly to the cost of an IT contract for integrating decennial survey systems that was not included in the 2015 cost estimate.
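The headline figures above, and the contingency components described earlier (roughly $292 million from the Monte Carlo analysis, $171 million for special risks, $965 million from sensitivity analysis, and $1.2 billion for unknown-unknowns), can be cross-checked with simple arithmetic:

```python
# Cross-check of headline figures cited in this report
# (all amounts in billions of constant 2020 dollars).
estimate_2015 = 12.3
estimate_2017 = 15.6
increase = estimate_2017 - estimate_2015           # ~$3.3B
pct = increase / estimate_2015 * 100               # ~26.8%, reported as 27%
print(f"total increase: ${increase:.1f}B ({pct:.0f}%)")

# The four 2017 contingency components described earlier sum to the
# reported $2.6B total (versus $1.35B in the 2015 estimate).
monte_carlo, special_risks, sensitivity, unknowns = 0.292, 0.171, 0.965, 1.2
contingency_2017 = monte_carlo + special_risks + sensitivity + unknowns
print(f"2017 contingency: ${contingency_2017:.2f}B")
```

The components sum to about $2.63 billion, consistent with the rounded $2.6 billion contingency total cited above.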
Bureau officials attribute a decrease of $551 million in estimated costs for Program Management to changes in the categorization of costs associated with risks: in the 2017 version of the estimate, estimated costs related to program risks were allocated to their corresponding WBS element. More generally, factors that contributed to cost fluctuations between the 2015 and 2017 cost estimates include changes in assumptions for census operations, improved ability to anticipate and quantify risk, an overall increase in IT costs, and more defined contract requirements.

Changes in Assumptions

Several assumptions for the implementation of the 2020 Census have changed since the 2015 cost estimate. Some assumptions contributing to cost changes, mainly in the Response category (related to collecting and processing response data) and the Frame category (mapping and address-collection activities that build the enumeration frame), include the following:

Self-response rates. Changes in assumptions for expected self-response rates contributed to increases in the response category, as the assumed rate decreased from 63.5 percent in 2015 to 60.5 percent in 2017, thereby increasing the anticipated percentage and associated cost of nonresponse follow-up. When the Bureau does not receive responses by mail, phone, or Internet, census enumerators visit each nonresponding household to obtain data. Thus, reduced self-response rates lead to increases in the amount of data collected in the field, which is more costly to the Bureau. Bureau officials attributed this decrease to a forecasted reduction in Internet response due to added authentication steps at login and the elimination of the function allowing users to save their responses and return later to complete the survey.

Productivity rates. The productivity of enumerators collecting data for NRFU is another variable in the cost estimate that was updated, contributing to cost increases in the response category.
Expected productivity rates for NRFU decreased from the 2015 estimate of 4 attempts per hour to 2.9. According to Bureau documentation, this more conservative estimate is based on historical data, rather than research and test data.

In-office address canvassing rates. The Bureau will not go door-to-door to conduct in-field address canvassing across the country to update address and map information for every housing unit, as it has in prior decennial censuses. Rather, some areas would only need a review of their address and map information using computer imagery and third-party data sources—what the Bureau calls “in-office” address canvassing procedures. However, in March 2017, citing budget uncertainty, the Bureau decided to discontinue one of the phases of in-office address canvassing review for the 2020 Census. The cancellation of that phase of in-office review is expected to increase the number of housing units canvassed in-field by 5 percentage points (from 25 to 30 percent of all canvassed housing units). In-field canvassing is more labor intensive compared to in-office procedures. The 2017 version of the cost estimate reflects this increase in workload for in-field address canvassing, though overall changes in estimated costs for the Frame category, of which Address Canvassing is a part, were minimal.

Staffing. Updated analysis resulted in changes to several staffing assumptions, which resulted in decreases across WBS categories. Changes included reduced pay rates for field data collection staff based on current labor market conditions and reductions in the length of staff engagement.

Anticipation of Risk

In general, contingency allocations increased overall from $1.35 billion in 2015 to $2.6 billion in 2017. This increase in contingency can be attributed, in part, to the Bureau gaining a clearer understanding of risk and uncertainty in the 2020 Census as it approaches.
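Sizing contingency from risk analysis typically means simulating uncertain cost elements many times and reading the allocation off the resulting distribution of totals. A minimal Python sketch of that approach (the cost elements, triangular ranges, and 80 percent confidence level here are hypothetical illustrations, not the Bureau's actual risk model or inputs):

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical uncertain cost elements as (low, most likely, high) in
# $ billions. These triangular ranges are illustrative only; they are
# not the Bureau's actual risk inputs.
elements = {
    "response":           (4.0, 4.8, 6.5),
    "infrastructure":     (2.0, 2.4, 3.0),
    "survey_engineering": (1.5, 1.8, 2.6),
}

def simulate_total():
    """Draw one possible program total by sampling each element."""
    return sum(random.triangular(lo, hi, mode)
               for lo, mode, hi in elements.values())

trials = sorted(simulate_total() for _ in range(10_000))
point = sum(mode for _, mode, _ in elements.values())  # sum of most-likely values
p80 = trials[int(0.80 * len(trials))]                  # 80th-percentile outcome

print(f"point estimate: ${point:.2f}B")
print(f"80th percentile: ${p80:.2f}B")
print(f"implied contingency: ${p80 - point:.2f}B")
```

The gap between a chosen percentile of the simulated totals and the sum of most-likely costs is one common way to size a contingency allocation.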
The Bureau developed some of its contingency based on proven risk management techniques, including Monte Carlo analysis, and allocated funding for known risk scenarios. The 2017 estimate includes close to $1.4 billion in estimated costs for these risks, almost three times the amount included in the 2015 estimate. The basis of estimate contains detail on the various risks and the process for calculating the associated contingency. The 2017 version also includes a contingency amount of $1.2 billion for general risks, or unknown-unknowns, such as natural disasters and cyber-attacks. Contingency amounts were reallocated within the WBS to more closely reflect the nature of the risk: Bureau officials attribute a decrease from the 2015 estimate of $551 million in estimated costs for program management to changes in the categorization of costs associated with risks. Officials stated that, in 2015, discrete program risks were consolidated as program management costs. In 2017, these discrete costs were reallocated to associate risks with the appropriate WBS element. For example, contingency amounts related to the likelihood of achieving a certain response rate previously included in the program management work breakdown category are now a part of the “response” work breakdown category.

Increased IT Costs

Increases in IT costs, totaling $1.59 billion, represented almost 50 percent of the total cost increase from 2015 to 2017. The total share of IT costs as a percentage of total census costs increased from 28 percent in 2015 to 32 percent in 2017, or from $3.41 billion to approximately $5 billion. Increases in IT costs are spread across seven cost categories. Figure 10 shows the IT and non-IT cost by WBS for the 2017 cost estimate. IT costs in infrastructure, response data, and census/survey WBSs account for the majority of the approximately $5 billion.
The Bureau’s October 2015 cost estimate included IT costs for, among other things, system engineering, test and evaluation, and infrastructure, as well as for a portion of the Census Enterprise Data Collection and Processing (CEDCaP) program. The 2017 estimated IT cost increases were due, in large part, to the Bureau (1) updating the cost estimate for CEDCaP; (2) including an estimate for technical integration services that contributed to increases in the Census and Survey Engineering category; and (3) updating costs related to other major contracts (such as mobile device as a service, field IT services, and payroll systems).

Contract Requirements

Bureau documents described an overall improvement in the Bureau’s ability to define and specify contract requirements. This resulted in updated estimates for several contracts, including for the Census Questionnaire Assistance (CQA) contract. Assumptions regarding call volume to the CQA were increased by 5 percent to account for expected response by phone after the elimination of the option to save Internet responses and return to complete the form later. The Bureau also cited updated cost data and the results of reconciliation with independent cost estimates as factors contributing to the increased costs of other major contracts, including for the procurement of data collection devices.

Agency Comments and Our Evaluation

The Secretary of Commerce provided comments on a draft of this report on August 2, 2018. The comments are reprinted in appendix II. The Department of Commerce generally agreed with our findings regarding the improvements the Census Bureau has made in its cost estimates. However, Commerce did not agree with our assessment that the Bureau’s 2017 lifecycle cost estimate is “not reliable.” Commerce noted that it had conducted two independent cost analyses and was satisfied that the cost estimate was reliable. The Bureau also provided technical comments that we incorporated, as appropriate.
We maintain that, to be considered reliable, a cost estimate must meet or substantially meet the criteria for each of the four characteristics outlined in our Cost Estimating and Assessment Guide. These characteristics are derived from measures consistently applied by cost estimating organizations throughout the federal government and industry and are considered best practices for the development of reliable cost estimates. Without a reliable cost estimate, the Bureau is limited in its ability to make informed decisions about program resources and to effectively measure progress against operational objectives. Thus, while the Bureau has made considerable progress in all four of the characteristics, it has only partially met the criteria for the characteristic of being well-documented. Until the Bureau meets or substantially meets the criteria for this characteristic, the cost estimate cannot be considered reliable. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the appropriate congressional committees, the Secretary of Commerce, the Under Secretary of Economic Affairs, the Acting Director of the U.S. Census Bureau, and other interested parties. In addition, this report is available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

The purpose of our review was to evaluate the reliability of the Census Bureau’s (Bureau) life-cycle cost estimate using our Cost Estimating and Assessment Guide.
We (1) reviewed the extent to which the Bureau’s life-cycle cost estimate and associated guidance met our best practices for cost estimation, using documentation and information obtained in discussions with the Bureau related to the 2020 life-cycle cost estimate, and (2) compared the 2015 and 2017 life-cycle cost estimates to describe key drivers of cost growth. For both objectives we reviewed documentation from the Bureau on the 2020 life-cycle cost estimate and interviewed Bureau and Department of Commerce officials. For the first objective, we relied on our Cost Estimating and Assessment Guide as criteria. Our cost specialists assessed measures consistently applied by cost-estimating organizations throughout the federal government and industry that are considered best practices for developing reliable cost estimates. We analyzed the cost estimating practices used by the Bureau against these best practices and evaluated them in four categories: comprehensive, well-documented, accurate, and credible.

Comprehensive. The cost estimate should include both government and contractor costs of the program over its full life-cycle, from inception of the program through design, development, deployment, and operation and maintenance to retirement of the program. It should also completely define the program, reflect the current schedule, and be technically reasonable. Comprehensive cost estimates should be structured in sufficient detail to ensure that cost elements are neither omitted nor double counted. Specifically, the cost estimate should be based on a product-oriented work breakdown structure (WBS) that allows a program to track cost and schedule by defined deliverables, such as hardware or software components. Finally, where information is limited and judgments are made, the cost estimate should document all cost-influencing assumptions.

Well-documented.
A good cost estimate—while taking the form of a single number—is supported by detailed documentation that describes how it was derived and how the expected funding will be spent in order to achieve a given objective. Therefore, the documentation should capture in writing such things as the source data used, the calculations performed and their results, and the estimating methodology used to derive each WBS element’s cost. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources so that the estimate can be easily replicated and updated. The documentation should also discuss the technical baseline description and how the data were normalized. Finally, the documentation should include evidence that the cost estimate was reviewed and accepted by management.

Accurate. The cost estimate should provide for results that are unbiased, and it should not be overly conservative or optimistic. An estimate is accurate when it is based on an assessment of most likely costs; adjusted properly for inflation; and contains few, if any, minor mistakes. In addition, a cost estimate should be updated regularly to reflect significant changes in the program—such as when schedules or other assumptions change—and actual costs, so that it always reflects current status. During the update process, variances between planned and actual costs should be documented, explained, and reviewed. Among other things, the estimate should be grounded in a historical record of cost estimating and actual experiences on other comparable programs.

Credible. The cost estimate should discuss any limitations of the analysis because of uncertainty or biases surrounding data or assumptions. Major assumptions should be varied, and other outcomes recomputed, to determine how sensitive they are to changes in the assumptions.
Risk and uncertainty analysis should be performed to determine the level of risk associated with the estimate. Further, the estimate’s cost drivers should be cross-checked, and an independent cost estimate conducted by a group outside the acquiring organization should be developed to determine whether other estimating methods produce similar results. If any of the characteristics are not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable. We also analyzed the Bureau’s cost estimation and analysis guidance and evaluated it against the 12-step process outlined in our Cost Estimating and Assessment Guide. A high-quality cost estimating process integrates the following steps:

1. Define the estimate’s purpose.
2. Develop the estimating plan.
3. Define the program characteristics.
4. Determine the estimating structure.
5. Identify ground rules and assumptions.
6. Obtain data.
7. Develop the point estimate and compare it to an independent cost estimate.
8. Conduct sensitivity analysis.
9. Conduct risk and uncertainty analysis.
10. Document the estimate.
11. Present the estimate to management for approval.
12. Update the estimate to reflect actual costs and changes.

These 12 steps, when followed correctly, should result in reliable and valid cost estimates that management can use for making informed decisions. If any of the steps in the Bureau’s process does not meet, or only minimally or partially meets, the corresponding best practice, then the cost estimate guidance does not fully reflect best practices for developing a high-quality estimate and cannot be considered reliable. Lastly, to describe key drivers of cost growth, we compared cost information included in the 2015 and 2017 cost estimates. We analyzed both summary and detailed cost information to assess key changes in totals overall, by WBS category, and by information technology (IT) vs. non-IT costs.
We used this analysis in conjunction with information received from the Bureau during interviews and through document transfers to describe overall changes in the cost estimate from 2015 to 2017. We conducted this performance audit from December 2017 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Commerce

Appendix III: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Lisa Pearson (Assistant Director), Karen Cassidy (Analyst in Charge), Brian Bothwell, Jackie Chapin, Ann Czapiewski, Jason Lee, Ty Mitchell, Kayla Robinson, and Tim Wexler made significant contributions to this report.
Why GAO Did This Study

In October 2017, the Department of Commerce (Commerce) announced that the projected life-cycle cost of the 2020 Census had climbed to $15.6 billion, a more than $3 billion (27 percent) increase over its 2015 estimate. A high-quality, reliable cost estimate is a key tool for budgeting, planning, and managing the 2020 Census. Without this capability, the Bureau is at risk of experiencing program cost overruns, missed deadlines, and performance shortfalls. GAO was asked to evaluate the reliability of the Bureau's life-cycle cost estimate. This report evaluates the reliability of the Bureau's revised life-cycle cost estimate for the 2020 Census and the extent to which the Bureau is using it as a management tool, and compares the 2015 and 2017 cost estimates to describe key drivers of cost growth. GAO reviewed documentary and testimonial evidence from Bureau officials responsible for developing the 2020 Census cost estimate and used its cost assessment guide (GAO-09-3SP) as criteria.

What GAO Found

Since 2015, the Census Bureau (Bureau) has made significant progress in improving its ability to develop a reliable cost estimate. While improvements have been made, the Bureau's October 2017 cost estimate for the 2020 Census does not fully reflect all the characteristics of a reliable estimate. (See figure.) Specifically, for the characteristic of being well-documented, GAO found that some of the source data either did not support the information described in the cost estimate or were not in the files provided for two of its largest field operations. In GAO's assessment of the 2015 version of the 2020 Census cost estimate, GAO recommended that the Bureau take steps to ensure that each of the characteristics of a reliable cost estimate is met. The Bureau agreed and has taken steps, but has not fully implemented this recommendation.
A reliable cost estimate serves as a tool for program development and oversight, helping management make informed decisions. During this review, GAO found the Bureau used the cost estimate to inform decision making. Factors that contributed to cost fluctuations between the 2015 and 2017 cost estimates include:

Changes in assumptions. Among other changes, a decrease in the assumed rate for self-response from 63.5 percent in 2015 to 60.5 percent in 2017 increased the cost of collecting responses from nonresponding housing units.

Improved ability to anticipate and quantify risk. In general, contingency allocations designed to address the effects of potential risks increased overall from $1.3 billion in 2015 to $2.6 billion in 2017.

An overall increase in information technology (IT) costs. IT cost increases, totaling $1.59 billion, represented almost 50 percent of the total cost increase from 2015 to 2017.

What GAO Recommends

GAO is not making any new recommendations but maintains its earlier recommendation—that the Secretary of Commerce direct the Bureau to take specific steps to ensure its cost estimate meets the characteristics of a high-quality estimate. In its response to this report, Commerce generally agreed with the findings related to cost estimation improvements, but disagreed that the cost estimate was not reliable. However, until GAO's recommendation is fully implemented, the cost estimate cannot be considered reliable.
Background

Since 1867, the internal revenue laws have allowed the government to pay awards to individuals who provided information that aided in detecting and punishing those guilty of violating tax laws. In 1996, Congress increased the scope of the program to also provide awards for detecting underpayments of tax. It also changed the source of awards to money IRS collects as a result of information whistleblowers provide, rather than appropriated funds. The Tax Relief and Health Care Act of 2006 created a mandatory whistleblower award program which made fundamental changes to IRS’s existing informant awards program. The 2006 act also established the IRS Whistleblower Office. The Whistleblower Office processes claims that allege a tax noncompliance of more than $2 million as potential 7623(b) claims. If these claims meet the requirements for an award, the whistleblower receives a mandatory award of between 15 and 30 percent of collected proceeds, with the exact percentage determined by IRS’s Whistleblower Office based on the extent of the whistleblower’s contributions. Claims not meeting the criteria for a 7623(b) claim are referred to as 7623(a) claims and are subject to procedural steps similar to those of 7623(b) claims. However, 7623(a) claims are neither eligible for appeals to the U.S. Tax Court nor subject to mandatory award payments. For claims processed as 7623(b) claims, the whistleblower claims process involves multiple steps, starting with a whistleblower’s initial application and ending with a rejection, a denial, or an award payment. The process begins when a whistleblower submits a signed Form 211, Application for Award for Original Information, to the Whistleblower Office. The Initial Claim Evaluation unit, which is part of the Small Business/Self-Employed operating division, performs an administrative review of the incoming applications. The unit examines the submission for completeness and logs it into E-TRAK.
The unit may reject claims because the tax noncompliance allegation is unclear, no taxpayer is identified, or the whistleblower is ineligible for an award. Claims that are not rejected are sent to classification to determine which operating division should review the claim. Claims are then generally sent to subject matter experts in the various operating divisions—usually the Small Business/Self-Employed or Large Business & International division—where they are reviewed to determine whether the claims merit further consideration by the operating division, should be referred to Criminal Investigation for investigation, or should be sent back to the Whistleblower Office as denied. Claims can be denied if there is limited audit potential or if there is limited time left on the statute of limitations, among other reasons. Claims that are not denied are generally added to the operating division’s inventory for potential examination. If a claim is selected for examination, the examiner completes and returns to the Whistleblower Office a Form 11369, Confidential Evaluation Report on Claim for Award, at the conclusion of the examination. The Whistleblower Office uses the information on this form when making an award determination. Figure 1 summarizes the full claim review process for 7623(b) claims. According to the fiscal year 2017 Whistleblower Office annual report, IRS collected $191 million in fiscal year 2017 as a result of both 7623(a) and 7623(b) whistleblower claims. IRS also paid out $34 million on 367 claims to 242 whistleblowers. The average whistleblower award for fiscal year 2017 was over $140,000. Figure 2 below shows the collection and payout amounts for fiscal years 2012 through 2017.
Collected Proceeds

Prior to February 9, 2018, section 7623(b) of Title 26 required the Whistleblower Office to calculate whistleblower award amounts as a percent of “collected proceeds (including penalties, interest, additions to tax, and additional amounts).” On August 12, 2014, IRS issued a final rule to implement section 7623 (the whistleblower law) that clarified that certain penalties—those collected under Title 31 for FBAR violations, and those collected under Title 18 for criminal and civil penalties for tax law violations—do not constitute collected proceeds for calculating whistleblower awards. IRS received comments on the proposed rule contending that excluding money collected under Title 18 and Title 31 eliminates a whistleblower’s incentive to provide information on violations under these titles and would reduce the number of whistleblowers willing to provide information to IRS. IRS stated in its final rule that section 7623 only authorizes awards for amounts collected under Title 26. IRS also noted that under the Victims of Crime Act, criminal fines paid for tax law violations must go into the Crime Victims Fund and are unavailable for payment to whistleblowers. Whistleblowers challenged IRS’s definition of collected proceeds in court. In August 2016, the U.S. Tax Court issued a ruling in response to a petition filed by a married couple who, as whistleblowers, had provided information leading to a conviction related to a tax fraud scheme and then disputed the award determination made by the Whistleblower Office. The U.S. Tax Court ruled that criminal fines and civil forfeitures were collected proceeds for purposes of an award under Section 7623(b).
In its ruling, the court held that “the term ‘collected proceeds’ means all proceeds collected by the Government from the taxpayer” and that “…the term is broad and sweeping; it is not limited to amounts assessed and collected under title 26.” On April 24, 2017, IRS filed an appeal of the Tax Court’s decision with the U.S. Court of Appeals for the District of Columbia Circuit. Before the U.S. Court of Appeals made a final ruling, Congress replaced the term “collected proceeds” with the term “proceeds” and provided a definition of “proceeds” on February 9, 2018, in the Bipartisan Budget Act of 2018. The act’s definition of proceeds includes: (1) penalties, interest, additions to tax, and additional amounts provided under the internal revenue laws; and (2) any proceeds arising from laws for which the IRS is authorized to administer, enforce, or investigate, including criminal fines and civil forfeitures, and violations of reporting requirements. This includes FBAR penalties in the definition of proceeds, as well as criminal fines and civil forfeitures. This definition of proceeds applies to cases for which a final determination for an award was not made prior to enactment. On March 26, 2018, IRS withdrew its appeal before the U.S. Court of Appeals.

Reporting of Foreign Bank and Financial Accounts

Under the Bank Secrecy Act of 1970, and in particular those sections incorporated into Title 31 of the U.S. Code, U.S. persons with a financial interest in, or signature or other authority over a bank, securities, or other financial account in a foreign country are required to keep records and file reports on transactions with foreign financial institutions. Persons with a financial interest in or signature authority over one or more foreign financial accounts with a total value of more than $10,000 must file an FBAR with the Department of the Treasury (Treasury).
If an FBAR is required, it must be filed each year for the previous calendar year on or before April 15 (or other date as prescribed by the IRS) to coincide with the tax filing deadline. Administration of this statute has been delegated by Treasury to the Financial Crimes Enforcement Network (FinCEN). In April 2003, FinCEN delegated its authority to IRS to enforce the FBAR requirements. These requirements include conducting examinations of FBAR compliance and taking such enforcement actions as assessing penalties, as appropriate. A person’s civil penalty for each FBAR violation can be up to $500 for a negligent FBAR violation and up to $10,000 for a non-willful violation. In addition, a person with a willful FBAR violation may be subject to a civil monetary penalty equal to the greater of $100,000 or 50 percent of the amount in the account at the time of the violation, and also be subject to possible criminal sanctions. These penalties are per person, per account, and per year. According to the Internal Revenue Manual (IRM), FBAR penalties assessed by IRS are collected and tracked separately from tax assessments.

Prior to February 2018, IRS Did Not Consider Whistleblower Information That May Have Led to FBAR Enforcement Actions in Award Determinations

Whistleblowers Likely Identified Millions in FBAR Noncompliance for Which They Were Not Awarded

IRS assessed approximately $10.7 million in FBAR penalties to taxpayers who were identified in our sample of whistleblower claims. We reviewed 92 whistleblower claims closed between January 1, 2012, and July 24, 2017, where the identified taxpayer was also subject to an IRS FBAR examination. IRS assessed FBAR penalties in 28 of these 92 cases. In none of these instances was the FBAR penalty included in the collected proceeds used to calculate whistleblower awards.
Our analysis of these 28 claims suggests that if IRS had included FBAR penalties in the awards, the whistleblowers involved could have received an additional $1.6 million to $3.2 million, assuming an award of between 15 and 30 percent.

Examples of Whistleblower Claims

A whistleblower claim may provide IRS information on the undisclosed offshore account of a single individual (such as a business partner, former spouse, or family member), while other whistleblowers, such as bank insiders, may provide IRS a list of individuals with undisclosed offshore accounts.

The exclusion of FBAR penalties from whistleblower awards is consistent with IRS’s August 2014 regulation outlining the whistleblower award process. The final regulation describes the process for determining whistleblower awards and includes a definition of collected proceeds. Specifically, the regulation defines collected proceeds as “limited to amounts collected under the provisions of Title 26, United States Code.” This definition excluded FBAR penalties assessed under Title 31 and criminal fines assessed under Title 18. This regulation’s definition of collected proceeds, however, has been superseded by the replacement of “collected proceeds” with “proceeds” and a definition of “proceeds” in the Bipartisan Budget Act of 2018, effective February 9, 2018. The new law defines proceeds as including “penalties, interest, additions to tax, and additional amounts provided under the internal revenue laws and any proceeds arising from laws for which the Internal Revenue Service is authorized to administer, enforce, or investigate, including criminal fines and civil forfeitures, and violations of reporting requirements.” While no whistleblowers were paid for any FBAR penalties collected as a result of the information they provided to the Whistleblower Office, our analysis found that IRS took FBAR enforcement actions against at least 10 taxpayers based on whistleblowers’ information.
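The $1.6 million to $3.2 million range above follows directly from the statutory 15-30 percent award band, and the willful penalty ceiling described in the Background section is a simple maximum. Both can be checked with a few lines of arithmetic (the dollar figures are the report's; the function name is ours):

```python
def willful_fbar_ceiling(account_balance):
    """Civil penalty ceiling for a willful FBAR violation under Title 31:
    the greater of $100,000 or 50 percent of the account balance at the
    time of the violation, assessed per person, per account, per year.
    (Function name is illustrative; thresholds are as described above.)"""
    return max(100_000, 0.5 * account_balance)

print(willful_fbar_ceiling(150_000))    # small account: the $100,000 floor applies
print(willful_fbar_ceiling(2_000_000))  # large account: 50 percent of the balance

# Award range for the 28 claims reviewed: 7623(b) awards run 15 to 30
# percent of proceeds, applied to the ~$10.7 million in FBAR penalties.
fbar_penalties = 10_700_000
low_award = fbar_penalties * 0.15
high_award = fbar_penalties * 0.30
print(f"${low_award:,.0f} to ${high_award:,.0f}")
```

Applying the 15 and 30 percent bounds to the $10.7 million penalty total reproduces the roughly $1.6 million to $3.2 million range cited above.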
Table 1 shows the FBAR enforcement outcomes for the 92 claims we reviewed. Of these 92 whistleblower claims where the identified taxpayer was subject to an FBAR enforcement effort, 39 involved taxpayers accepted into IRS’s Offshore Voluntary Disclosure Programs (OVDP). OVDP enables taxpayers with tax noncompliance from undisclosed offshore accounts to avoid prosecution and resolve their past noncompliance by paying limited civil penalties. As one of a number of required actions for OVDP, IRS assesses taxpayers accepted into the program a miscellaneous Title 26 offshore penalty in lieu of all other penalties for undisclosed foreign accounts, including FBAR penalties. According to IRS officials, because the OVDP penalty is a Title 26 penalty, these collections were included in collected proceeds for the purposes of whistleblower award calculations even before the new definition of proceeds took effect on February 9, 2018. The case files we reviewed included some examples of whistleblowers receiving an award based in part on the miscellaneous Title 26 OVDP penalty in addition to tax, interest, and other penalties. If the taxpayer had not participated in OVDP, the whistleblower would not have received an award on the part of the collected proceeds that came from the FBAR penalty.

FBAR Warning Letters

At the conclusion of a Report of Foreign Bank and Financial Accounts (FBAR) examination, an examiner can either assess a penalty or use a warning letter (Letter 3800, Warning Letter Respecting Foreign Bank and Financial Accounts Report Apparent Violations) to notify taxpayers that they are not in compliance with FBAR reporting requirements. The examiner can use their discretion to issue a warning letter if they determine that the taxpayer would improve their FBAR reporting compliance in the future. A taxpayer’s failure to file an FBAR after receiving a warning letter supports a determination of a willful FBAR violation.
The new definition of proceeds establishes a policy of including FBAR penalties in whistleblower awards regardless of whether the identified taxpayer enters OVDP or is assessed an FBAR penalty as a result of an FBAR exam. It also creates consistency with the treatment of penalties assessed under the Foreign Account Tax Compliance Act (FATCA). FATCA, enacted in 2010 under Title 26, assesses penalties for failure to report foreign financial accounts and assets. Because FATCA is under Title 26, any penalties assessed stemming from a whistleblower’s information were already eligible for inclusion in whistleblower awards. Of the total revenue collected from the 28 whistleblower claims we reviewed with an FBAR penalty assessed, more than 97 percent came from 10 cases with willful FBAR penalties. Willful FBAR penalties, which are up to 50 percent of the value of the account, represent a small portion (less than 0.1 percent) of all whistleblower claims closed in our time frame, and less than half of the 28 FBAR penalty cases we reviewed. However, we calculated that had these willful penalties been included in awards, the whistleblower awards would have increased by up to $3,145,754. In contrast, the 18 cases that had a non-willful or negligent FBAR penalty would have led to an increase in whistleblower awards of up to $78,912, based on our calculations. Table 2 shows the number of cases and total amount of FBAR penalties collected by the type of FBAR penalty. Whistleblowers may play an important role in bringing willfully noncompliant taxpayers to the attention of IRS. These taxpayers may be purposefully hiding their assets from IRS detection. To highlight the difference in the magnitude of FBAR penalties between willful and non-willful or negligent taxpayers, figure 3 shows the range of potential whistleblower awards had FBAR penalties been included in award determinations.
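The willful versus non-willful contrast is driven entirely by the very different penalty bases to which the same 15-30 percent band applies. Backing the bases out of the reported maximum (30 percent) award increases is simple arithmetic, and serves as a consistency check against the approximately $10.7 million penalty total:

```python
# Maximum (30 percent) award increases reported for the claims reviewed.
max_willful_award = 3_145_754
max_nonwillful_award = 78_912

# Implied penalty bases at the 30 percent cap.
willful_base = max_willful_award / 0.30
nonwillful_base = max_nonwillful_award / 0.30

print(f"willful penalty base:     ${willful_base:,.0f}")
print(f"non-willful penalty base: ${nonwillful_base:,.0f}")
print(f"combined: ${willful_base + nonwillful_base:,.0f}")  # close to the $10.7 million total
```

The implied willful base dwarfs the non-willful base by roughly a factor of 40, which is the pattern figure 3 illustrates.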
Some Whistleblower Claims Closed between January 2012 and July 2017 Included FBAR Allegations

There is no way to estimate how many whistleblowers would have come forward had IRS included FBAR penalties in whistleblower awards. However, we found that a small number of whistleblower claims included FBAR information nonetheless. To determine how often whistleblowers submitted claims with allegations of FBAR noncompliance, we identified 401 of the 10,306 IRS whistleblower claims closed between January 1, 2012, and July 24, 2017, as likely to contain allegations of FBAR noncompliance by an identified taxpayer. We identified three groups of claims as being most likely to contain allegations of FBAR noncompliance: 92 claims where the identified taxpayer was subject to an FBAR enforcement action (the population discussed above); 299 claims that included key terms in E-TRAK indicating offshore assets; and 10 claims that were closed with "no Title 26 collected proceeds," which could indicate FBAR noncompliance since FBAR penalties are Title 31 penalties. Since FBAR penalties were excluded from whistleblower proceeds, IRS did not track FBAR allegation data in E-TRAK. Therefore, our numbers might underrepresent the total population of claims likely to include allegations of FBAR noncompliance. We reviewed all 92 of the claims that included taxpayers who were also present in IRS's FBAR Database (matched claims) and found that 85 of them included allegations of FBAR noncompliance on IRS Form 211, the form used to submit a claim to the Whistleblower Office. We reviewed a random sample of 30 claims from the 299 claims we identified as being likely to include FBAR information based on key terms in the E-TRAK database (key terms claims); 11 of them included allegations of FBAR noncompliance. We also reviewed all 10 of the claims that were closed with "no Title 26 collected proceeds" and found one allegation of FBAR noncompliance.
This was not unusual because IRS uses the “no Title 26 collected proceeds” code for closures other than those with FBAR penalties, such as claims with Title 18 criminal fines. Table 3 shows our three populations and how often we found claims with allegations of FBAR noncompliance in each. Based on our stratified sample of selected whistleblower claims, we estimate that at least 1.4 percent (or at least 146 claims) of all large-dollar (7623(b)) whistleblower claims closed between January 1, 2012, and July 24, 2017, involved allegations of FBAR noncompliance. Because the Whistleblower Office did not require data in E-TRAK to indicate the nature of the violation the whistleblower is reporting, the actual number of claims that include allegations of FBAR noncompliance may be higher. While our estimate represents a small proportion of all whistleblower claims, this may be because of the prior policy of excluding FBAR penalties from awards. However, the analysis suggests that despite being ineligible for award payment, some whistleblowers provided information on FBAR noncompliance to IRS that may have helped improve FBAR’s effectiveness as a tool for anti-money laundering and tax enforcement. With the statutory change in award basis, IRS may see more whistleblowers come forward with better information about FBAR noncompliance, according to whistleblower attorneys we interviewed. 
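The general technique behind combining review results across groups of different sizes can be sketched as a stratified point estimate: weight each sample's hit rate by the size of the population it was drawn from. The sketch below uses the group counts described above; note that the report's 1.4 percent figure is a lower confidence bound over all large-dollar claims, which accounts for sampling error and therefore sits below a raw point estimate like this one.

```python
# Population size, sample size, and sampled claims with FBAR allegations
# for each of the three review groups described above.
strata = {
    "matched claims": (92, 92, 85),      # census: every claim in the group reviewed
    "key-terms claims": (299, 30, 11),   # random sample of 30 of 299
    "no Title 26 proceeds": (10, 10, 1), # census: all 10 reviewed
}

# Stratified point estimate: scale each sampled hit rate up to its population.
estimate = sum(pop * (hits / n) for pop, n, hits in strata.values())
print(round(estimate))  # estimated claims with FBAR allegations across the three groups
```

The census strata contribute their exact counts (85 and 1), while the sampled stratum contributes 299 × (11/30), so the point estimate is driven largely by the key-terms group.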
IRS Historically Used FBAR Information from Whistleblower Claims for Enforcement Efforts, but the Statutory Change in Award Basis Increases the Importance of Reporting Full Information

The Whistleblower Office Forwarded FBAR Information to Other IRS Divisions for Exam Purposes, Even Before FBAR Information Was Required to Be Included in Award Determinations

Even though FBAR penalties were not considered for whistleblower awards until the February 9, 2018 legislative change, the Whistleblower Office forwarded allegations it received of FBAR noncompliance to IRS's operating divisions for further examination. Whistleblower Office officials told us that if a whistleblower provides information concerning offshore accounts held by a taxpayer, including specific allegations of FBAR noncompliance, IRS evaluates it as it does any other information. The presence of information on possible FBAR noncompliance does not change the process for evaluating the claim. Whistleblower Office instructions for the initial review of a claim specify that, if the claim merits further consideration, it will be referred to the appropriate operating division for review. According to officials from the Small Business/Self-Employed and Large Business & International operating divisions, during their review process information dealing with offshore accounts and possible FBAR violations is treated just as all other information provided by a whistleblower. Once a claim is referred to an operating division, it is generally reviewed by a subject matter expert who then determines whether the claim has sufficient audit potential to warrant adding it to the division's inventory of possible returns for audit. If the subject matter expert concludes that the claim does not have sufficient audit potential, or the division later decides not to proceed with an examination, the claim is returned to the Whistleblower Office.
If the subject matter expert forwards a whistleblower claim for possible audit and an examination takes place, the examiners will establish an audit file for the tax examination. If evidence of FBAR noncompliance is found, a separate audit file is to be created. Most often, both files are maintained and updated by the same examiners. According to IRS officials and procedures laid out in the IRM, the outcome of the examination is based on the quality of the evidence and is not influenced by the presence of a whistleblower or the source of the information. Information on FBAR noncompliance developed by examiners may or may not be provided to the Whistleblower Office. At the conclusion of the examination process, the examiner provides the Whistleblower Office with a Form 11369, Confidential Evaluation Report on Claim for Award. On this form, examiners are required to answer a series of detailed questions about the whistleblower’s contribution to the investigation, such as whether the whistleblower identified specific issues or provided analysis that saved IRS time and resources. According to the IRM, the purpose of the Form 11369 is to inform the Whistleblower Office of the whistleblower’s contribution, if any, to an examination, investigation, or other action. According to the instructions on the Form 11369 as well as the IRM, the Whistleblower Office bases its award determinations in large part on the form and information provided to supplement it. There is no specific space set aside on the Form 11369 for information dealing specifically with FBAR noncompliance. In addition, there are no instructions on or accompanying the form to require examiners to provide documentation relating to FBAR noncompliance. Prior to the legislative change in February 2018 to include FBAR penalties in awards, the Whistleblower Office retained in its files any FBAR-related information provided by the operating division but did not use it for the award determination process. 
According to Whistleblower Office officials, any information about FBAR noncompliance in its claim files was there incidentally and not collected or retained for any specific tracking purposes. These officials told us, and we found in our review, that some claim files had information about FBAR violations or penalties because the operating division examiner chose to include it in the Form 11369 narrative or in supplemental information, even though the examiner was not required to do so. Because providing FBAR information with the Form 11369 was discretionary prior to the legislative change in February 2018, Whistleblower Office officials told us that any FBAR information existing in the files may not be complete. While having complete information about FBAR exams on the Form 11369 was not needed when IRS did not consider FBAR noncompliance as part of award determinations, now that the statute includes such proceeds in awards, the Whistleblower Office will need information on FBAR noncompliance on Form 11369 to properly determine whistleblower awards in accordance with the new legal requirements. As of June 28, 2018, the Whistleblower Office had not updated Form 11369 or its accompanying instructions. Whistleblower Office officials told us they were reviewing and commenting on draft guidance from the Office of Chief Counsel on how to implement the new provision but had not yet updated the Form 11369 or its instructions. IRS officials did not provide a timeline for when IRS expects to update the form. Because this form asks questions specific to Title 26 tax noncompliance, examiners may not have clear guidance indicating that non-Title 26 issues should be included in these answers. According to the IRM, the Form 11369 should assist the Whistleblower Office in making an award determination by explaining how the whistleblower and their information assisted IRS in taking action.
By not using an updated form that reflects the technical language distinguishing between tax issues and non-Title 26 issues that IRS also enforces, the Whistleblower Office may not be able to ensure the information it collects for determining whistleblower awards that include non-Title 26 violations is complete and accurate.

IRS Has Taken Some Steps to Communicate Change in Whistleblower Award Basis

When enacted on February 9, 2018, the new law immediately required information concerning FBAR violations to be included in the award determination process. Subsequently, the Whistleblower Office and IRS started to make changes to policies and procedures to ensure award determination decisions are made fairly and with full information. The day the new statutory definition became law, IRS placed a hold on whistleblower award determinations while the Whistleblower Office developed new procedures. On February 15, 2018, IRS lifted the hold, instructing Whistleblower Office analysts to check with their managers prior to making award determinations on any claims that may include non-Title 26 proceeds. However, the Whistleblower Office did not issue any additional specific guidance to Whistleblower Office staff on how to review claims for any non-Title 26 issues until April 19, 2018. According to IRS officials, the Whistleblower Office closed 2,096 whistleblower claims between the date the law changed and April 19, 2018, when IRS issued the internal guidance. In the April 19, 2018 policy alert, later reissued as a memo on May 8, 2018, Whistleblower Office staff were instructed to look over the Form 211 for indications of FBAR or criminal activity when reviewing a Form 11369 or making award determinations. The policy alert also instructs staff to contact the FBAR Penalty Coordinator and review Special Agent's Reports and Judgement Documents for non-Title 26 proceeds and to document the results of these reviews in E-TRAK.
Issuing complete and final guidance will take time; however, the Whistleblower Office did not issue any interim guidance to IRS units outside the Whistleblower Office for more than 2 months after the enactment of the statute redefining proceeds. On April 12, 2018, the Director of the Whistleblower Office issued a memo to the commissioners of the operating divisions and chief of the Criminal Investigation division. This memo stated that those working on whistleblower claims need to provide the Whistleblower Office with details of how whistleblower information was used in any actions taken, regardless of whether they were Title 26 issues. The Whistleblower Office emailed a communication similar to the memo to other IRS employees working on whistleblower claims on April 18, 2018. The initial memo did not provide specific instructions on how to provide such information, such as directing staff to use Form 11369, but the email said additional guidance and training would be forthcoming. According to Whistleblower Office officials, the internal communication about the change in whistleblower award basis was delayed because the Whistleblower Office was waiting on draft guidance from the IRS Office of Chief Counsel. The Whistleblower Office received this draft guidance on April 19, 2018. In late April and early May, the Whistleblower Office posted information about these changes in internal IRS media, including IRS-wide web pages and pages for individual IRS operating divisions. In these later communications, the Whistleblower Office specified that information should be included with the Form 11369. However, as noted above, the Form 11369 itself and its accompanying instructions had not been updated to reflect these new requirements. The current regulations on whistleblower claims, issued in August 2014, exclude non-Title 26 proceeds from the basis for determining whistleblower awards.
According to IRS officials, as of June 20, 2018, IRS had not yet started to take action on making the regulatory change. IRS, however, is in the process of updating the IRM, which serves as the primary guidance for IRS employees. Section 25.2.2 of the IRM, which provides procedures and instructions for the whistleblower award programs, defines collected proceeds for the purpose of awards as tax, penalties, interest, and additions to tax limited to amounts collected only under the provisions of Title 26. According to IRS officials, while IRM updates take time to complete, generally the IRM can be updated more quickly than a regulation. The officials could not provide a timeline for when these changes would be complete. IRS can communicate to the public about statutory changes to the whistleblower program through its various external communication channels, such as its website and social media accounts. Such communications are important because whistleblowers have a limited 30-day period to appeal certain award determinations. On May 9, 2018, IRS posted an announcement about the statutory change on the Whistleblower Office page of its website. The announcement noted the enactment of the provision redefining proceeds for the purpose of whistleblower awards and provided a link to the May 8, 2018 Whistleblower Office memorandum. This information was posted 3 months after the statutory change went into effect and a month after we notified IRS that it had not yet announced the change through a press release, its website, or its Twitter account.

IRS Uses Its FBAR Database for Internal and External Reporting but Lacks Sufficient Controls

IRS Uses Data from Its FBAR Database to Manage Workflow and for Internal and External Reporting

IRS collects and maintains FBAR penalty data in a stand-alone database. According to IRS officials, they use these data to carry out IRS's delegated duties to assess and collect such penalties.
For example, the data are used for sending demand notice letters to taxpayers and tracking cases referred to the Department of Justice. According to these officials, IRS also uses information on FBAR penalty assessments and payments for a variety of related purposes including reporting FBAR data to the Financial Crimes Enforcement Network (FinCEN) and for use in annual reports to Congress. IRS also uses the database for internal management. Specifically, IRS officials stated that they use reports on inventory, penalties, and appeals for decision making. Given the February 2018 legislative change to include FBAR penalties in the definition of proceeds, the Whistleblower Office will also use FBAR penalty data for calculating some whistleblower award determinations. While FinCEN retains the rule-making authority for FBAR and is the repository of FBAR filings, IRS assesses and collects FBAR penalties from taxpayers who violate the FBAR reporting requirements. IRS also maintains the FBAR Database. While individuals file their FBAR forms through FinCEN’s online Bank Secrecy Act E-filing portal, IRS enforces these filing requirements. Following procedures laid out in the IRM, IRS examiners can access FBAR filing data from FinCEN’s database during the course of a tax examination. Information on the taxpayers’ FBAR filings is available to examiners through IRS’s Integrated Data Retrieval System, including data from filed tax and information returns. Data on FBAR enforcement actions, including penalties, are only housed in the FBAR Database. The FBAR Database is a stand-alone database maintained by the FBAR team within the Small Business/Self-Employed operating division. The FBAR Database does not interface or connect with any other IRS data sources or systems. Therefore, there is currently no mechanism for any data to automatically feed into or from the FBAR Database to cross-check with taxpayer information in other databases. 
When examiners open an FBAR exam, the IRM directs them to report exam and exam-outcome information to the FBAR team. Examiners fax, mail, or e-mail FBAR examination and penalty assessment information to the FBAR team, which then transcribes the data into the FBAR Database manually. Within IRS, only the FBAR team has access to the database. Because the stand-alone FBAR Database is the only data source within IRS that tracks FBAR penalty assessments and payments, the FBAR team is responsible for completing all data entry as well as generating and circulating reports on FBAR enforcement actions to others within IRS.

IRS Has Insufficient Controls for the Reliability of FBAR Penalty Data

We assessed the reliability of the FBAR Database for the purposes of using limited data from this database for our own analysis. We determined that the data fields we used were sufficiently reliable for our purposes. Specifically, we matched taxpayer identification numbers in the FBAR Database to those in E-TRAK and reported on enforcement outcomes, including a limited number of penalty payments, as discussed previously. These data were the only available data within IRS on FBAR penalties and enforcement actions. Even though we found the data that we used to be sufficiently reliable for our purpose of identifying penalty information and selecting a sample of claims to review further, we identified some data control deficiencies related to data input and validation. We found certain elements of the database to have limited reliability. Because FBAR penalty information will be used for whistleblower award determinations, it is important for these data to be reliable. A key principle of federal internal control is the use of quality information. Agencies should have controls in their information systems to ensure the validity, completeness, and accuracy of data. Further, these controls should be documented.
In addition, the Federal Information Security Modernization Act of 2014 (FISMA) provides for the development and maintenance of the minimum controls required to protect federal information and information systems. Among other things, FISMA requires the National Institute of Standards and Technology (NIST) to develop standards and guidelines that include minimum information security requirements on how agencies should design, protect, and manage their respective data systems. NIST's guidance outlines appropriate data safeguards for agency data systems using a risk-based approach. NIST guidance also states an agency's information system should have controls to check the validity of inputs. This includes checking the valid syntax of inputs to ensure they match the specified definitions for format and content. NIST guidance also recommends controls to help ensure the information system behaves predictably, even if invalid data are entered. Although FBAR team employees transcribe data manually into the database from emails or faxed or mailed paper forms, there are no procedures for data testing or validation. For example, there is no secondary check by another individual to ensure data were entered correctly and completely. The FBAR Database procedures also lack sufficient validity checks to ensure that the data entered are accurate. There are some basic data entry checks in the database, such as limiting input to alphanumeric entries and a warning if a date is more than a year from the current date. However, these checks serve only as a reminder for the employees entering the data to verify their accuracy; these checks do not prevent erroneous data from being entered and retained. Without additional controls for accuracy and validity, IRS risks relying upon inaccurate information for some of its reporting and decision making. According to IRS officials, not all fields in the FBAR Database are mandatory.
In addition, some fields are new as of January 2017 and, therefore, only contain data after this time. IRS officials also told us that they are aware there are some data missing in the database, such as incomplete records for some taxpayers, but they could not quantify how often this occurs. They also told us that such missing data can contribute to inaccurate reports of FBAR total assessments. For example, if a date field is left blank, certain reports that pull data based on these date fields will not pull the records with this missing field, thereby underreporting FBAR outcomes. We found 44 records with input errors in this date field. The officials stated that they make every effort to input complete data into the database, but sometimes complete information is unavailable from the exam team. Because the FBAR data lack some reliability controls, IRS may rely on insufficient or incomplete data for reporting and decision making, including amounts of whistleblower awards. IRS officials did not have any documentation showing why or how the database was developed in November 2003. Further, IRS officials told us the only documentation on how the database is used is the FBAR Database desk guide. The desk guide provides instructions for data input; however, this guide does not include any information to describe or define the elements in the database. Standard data element definitions are intended to ensure that all users of the system define the same data in the same way and have a common understanding of their meaning. Such documentation is important for providing clear instructions to users to know what information should be input in each variable field to ensure that the type of data in each variable field is consistent. Without it, IRS and other users of the data may not have reasonable assurance that data in the database are input as intended. 
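The kinds of controls NIST guidance recommends can be sketched as explicit checks at data entry that reject, rather than merely warn about, invalid input. The field names, formats, and ranges below are illustrative assumptions for the sketch, not the FBAR Database's actual schema.

```python
import re
from datetime import date, timedelta

def validate_record(record):
    """Return a list of validation errors; an empty list means the record passes.

    Illustrative input-validation checks (mandatory fields, syntax, range) of the
    kind NIST guidance recommends; field names and rules are hypothetical.
    """
    errors = []
    # Mandatory-field check: reports that filter on these fields silently drop
    # records when a field is left blank, underreporting FBAR outcomes.
    for field in ("tin", "penalty_amount", "assessment_date"):
        if not record.get(field):
            errors.append(f"missing required field: {field}")
    # Syntax check: a taxpayer identification number is nine digits.
    if record.get("tin") and not re.fullmatch(r"\d{9}", record["tin"]):
        errors.append("tin must be nine digits")
    # Range check: reject (not just warn about) implausible dates.
    d = record.get("assessment_date")
    if d and not (date(2003, 1, 1) <= d <= date.today() + timedelta(days=365)):
        errors.append("assessment_date out of range")
    return errors

record = {"tin": "12345678", "penalty_amount": 10000, "assessment_date": date(2017, 6, 1)}
print(validate_record(record))  # the eight-digit TIN fails the syntax check
```

The design point is that each check returns an error rather than a dismissible warning, so erroneous data cannot be entered and retained; a secondary review by another individual would layer on top of checks like these.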
IRS recognized the need to address the FBAR Database and established an FBAR Improvement Project Team to review the FBAR Database and records system and make recommendations for improvements. The team was established in 2016 after reviews of database-generated reports indicated missing data. The FBAR Improvement Project Team has made recommendations to improve the overall function and reliability of the dataset, including updating FBAR policies and procedures and validating data for the report to Congress. The team is also exploring automating case building by pulling taxpayer data from other IRS data sources and creating a report automation tool. As of April 2018, these recommendations had not been implemented. IRS officials were reviewing the recommendations, and specific plans had not been vetted by the leadership in the relevant operating divisions. IRS officials noted that because of the small size and limited use of the database, it may be a low priority for scarce information technology resources. Until IRS develops and documents improved controls for the validity, completeness, and accuracy of data in the FBAR Database, it risks using incomplete and insufficient data for decision making.

Award Exclusions May Have Negatively Affected Whistleblowers' Willingness to Bring Information to IRS

Selected Whistleblower Attorneys in Our Review Reported They Limited or Refused to Take on Clients Who Alleged FBAR Noncompliance When Penalties Were Excluded from Awards

Whistleblower attorneys we spoke with referred to the former exclusion of FBAR and other non-Title 26 collections from whistleblower awards as a significant concern for them and their clients. Their concerns are important to the success of the whistleblower program because if whistleblowers are discouraged from coming forward, IRS risks losing opportunities to identify tax fraud and abuse and ultimately reduce the tax gap. This loss of help in identifying noncompliance could be significant for IRS.
According to IRS, between 2007 and 2017, whistleblower information helped IRS collect $3.6 billion in tax revenue that may have otherwise gone uncollected. According to the whistleblower attorneys we spoke with, as well as information we gathered in a search of relevant literature, the estimated value of undisclosed offshore accounts may be in the tens of billions of dollars, but could be as great as hundreds of billions of dollars. Prior to the legislative change in the definition of collected proceeds, we interviewed 11 whistleblower attorneys from nine law firms about their experiences representing tax whistleblowers who submitted allegations of FBAR noncompliance to IRS. Several of these firms also had experience helping whistleblowers appeal IRS award determinations. Of these nine firms, eight firms’ attorneys told us they had refused or limited the number of whistleblowers alleging FBAR noncompliance they were willing to take on as clients when such collections were excluded from award determinations. For example, one attorney told us that his firm would take on whistleblower clients alleging FBAR violations only if there was strong evidence of tax noncompliance. An attorney with another firm reported that the firm was willing to take on such clients but advised these clients that the inclusion of FBAR penalties in any award may have to be litigated in court at the award determination phase. Further, attorneys with three of the nine firms reported fewer whistleblowers either approaching them for representation or following through on filing a claim once informed of the exclusion of non-Title 26 collections from awards. Attorneys with eight of the nine firms also reported that the exclusion of criminal fines from collected proceeds was a potential reason for whistleblowers not coming forward. We spoke with attorneys at eight of the nine firms again after the passage of the statutory change in the definition of proceeds. 
Most said that this was a positive step for the IRS whistleblower program and expected that more whistleblowers would come forward with information on criminal and FBAR violations. Attorneys with seven of the eight firms stated they would be willing or already had started taking on clients reporting FBAR and criminal violations. However, they cited other concerns with the program that could continue to limit their willingness to represent tax whistleblowers and discourage whistleblowers. These concerns included limits on anonymity for whistleblowers appealing Whistleblower Office decisions to the Tax Court; restrictions on filing claims anonymously; delays in award payments during the lengthy appeals process; and limited communication with the Whistleblower Office during the claim review process. According to these attorneys, many whistleblowers who are offered an award that excludes FBAR penalty and criminal fine collections choose to forgo appealing the decision because it would delay their collection of any part of the award until the appeals process was complete, which can take years. Further, the whistleblower may risk losing their anonymity in an appeal. They added that some whistleblowers risk their lives and livelihoods to come forward and that anonymity is critical to their willingness to provide information to IRS. The attorneys generally stated that these issues can discourage whistleblowers, which then can limit the whistleblower program's effectiveness.

Our Analysis Found No Evidence That Presence of Whistleblower Alters the Mix of FBAR Penalty and Tax IRS Assesses

Some of the attorneys we interviewed indicated that whistleblowers may have been further discouraged from bringing information on offshore noncompliance to IRS if they believed that IRS was purposefully trying to limit whistleblower awards by assessing higher FBAR penalties and lower taxes when a whistleblower was involved.
The IRM provides IRS examiners with some level of discretion about when to assess tax and FBAR penalties, subject to the facts and circumstances of each individual case. Attorneys at seven of the nine firms we interviewed expressed concern that IRS examiners may have used this discretion to assess higher FBAR penalties and lower taxes as a way to reduce a whistleblower's potential award. However, these attorneys did not provide specific evidence of this occurring. Because of taxpayer information privacy laws, IRS limits the amount and type of information it can share with whistleblowers and their attorneys about their claims once submitted to the Whistleblower Office. To investigate this claim, we analyzed IRS data on taxpayers that were assessed FBAR penalties from tax years 2010 to 2015. We compared the proportion of FBAR penalties to the overall tax and FBAR penalties assessed to a taxpayer for exams where a whistleblower was and was not involved. Our analysis did not find evidence of a statistically significant difference in this proportion between taxpayers identified by a whistleblower and taxpayers with no whistleblower involved. The IRM lays out the steps examiners should take when determining whether FBAR penalties are warranted and how they should be assessed. These steps are independent of IRM guidance on tax examinations and assessments. IRS officials that we interviewed, including those with oversight of examiners in Small Business/Self-Employed and Large Business & International, indicated that the Title 26 tax exams and Title 31 FBAR exams are conducted independently of each other and neither influences the outcome of the other. Further, they stated that the presence of a whistleblower has no bearing on the decision of whether to assess a tax or penalty or the amount of such assessments, as previously discussed.
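A comparison of this kind can be sketched as a two-sample test on the per-exam ratio of FBAR penalty to total assessment. The data below are invented for illustration; our actual analysis used IRS data and is not reproduced here.

```python
from statistics import mean, variance
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical per-exam ratios of FBAR penalty to total (tax + FBAR penalty) assessed.
whistleblower_exams = [0.42, 0.55, 0.38, 0.47, 0.51]
other_exams = [0.40, 0.52, 0.44, 0.36, 0.49]

t = welch_t(whistleblower_exams, other_exams)
print(round(t, 2))
```

With samples this similar, the t-statistic falls well short of conventional significance thresholds, which is the shape of result consistent with finding no evidence that the penalty-to-total mix differs when a whistleblower is involved.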
Conclusions

For the IRS whistleblower program to be successful, whistleblowers need to have confidence in the program's processes and outcomes, including paying awards when a whistleblower's information is used. Despite IRS's prior policy of not including non-Title 26 collections, we found some whistleblowers brought such information to IRS, and IRS assessed penalties on noncompliant taxpayers. However, according to whistleblower attorneys we spoke with, this policy of award exclusions may have discouraged other whistleblowers with significant information on FBAR reporting and tax noncompliance from coming forward. With the new statutory definition of proceeds, enacted on February 9, 2018, which includes FBAR and other non-Title 26 collections, whistleblowers may now be more willing to submit claims. However, IRS has not yet fully changed some of the whistleblower program's policies and procedures to reflect that FBAR penalties, as well as criminal fines and civil forfeitures, are now included in whistleblower awards. Because the change was effective for claims that had not had a final determination made as of February 9, 2018, it would have helped IRS act on these determinations if the Whistleblower Office had taken immediate steps to ensure it had full information from other offices and divisions within IRS about claims reaching the award determination phase. While IRS has now taken steps to communicate the need for information about non-Title 26 actions to be included with the Form 11369, updating the form itself and its instructions will help to better ensure that complete and accurate information about such actions is reflected on the form to be provided to the Whistleblower Office for inclusion in award determinations. The FBAR Database is the only comprehensive source of information within IRS about the FBAR penalties assessed and paid.
If this database does not have the controls necessary to provide reasonable assurance that the data are reliable, accurate, and complete, there is a risk that the Whistleblower Office may make award determinations based on incorrect data. Recommendations for Executive Action We are making the following two recommendations to IRS: The Commissioner of Internal Revenue should ensure that the Director of the Whistleblower Office modifies the Form 11369 and its accompanying instructions to clarify how to document how whistleblower information was used in any IRS actions taken, regardless of whether the laws administered, examined, or enforced are outside of Title 26, such as FBAR penalties. (Recommendation 1) The Commissioner of Internal Revenue should ensure that the Deputy Commissioner for Services and Enforcement develops and documents improved controls for the validity, completeness, and accuracy of data on FBAR exams and enforcement actions. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of the sensitive version of this report to IRS for review and comment. IRS agreed with our recommendations and provided technical comments, which we incorporated as appropriate. However, IRS deemed some of the information in its original agency comment letter pertaining to the FBAR Database to be sensitive information that must be protected from public disclosure. Therefore, we have omitted the sensitive information from the comment letter, which is reproduced in part in appendix II. These omissions did not have a material effect on the substance of IRS’s comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Commissioner of Internal Revenue. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to: (1) describe the extent to which the Internal Revenue Service’s (IRS) Whistleblower Office included Report of Foreign Bank and Financial Accounts (FBAR) penalties in whistleblower awards prior to the statutory change; (2) examine how IRS used whistleblower information on FBAR noncompliance and how IRS responded to the statutory change in definition of proceeds; (3) describe the purposes for which IRS collects and uses data from the FBAR Database and assess the controls for ensuring data reliability; and (4) summarize what is known about the potential effect exclusions from collected proceeds, including FBAR penalties, may have had on whistleblowers bringing claims to IRS. This report is a public version of a sensitive report that we issued in August 2018. IRS deemed some of the information in our August report to be sensitive information that must be protected from public disclosure. Therefore, this report omits sensitive information about the information security safeguards of IRS’s FBAR Database as well as an associated recommendation. Although the information provided in this report is more limited, the report addresses the same objectives as the sensitive report and uses the same methodology. To address the first objective, we conducted a case file review of a generalizable stratified sample of closed 7623(b) whistleblower claims to identify how often and to what extent whistleblower claims included information about offshore accounts and FBAR violations. 
For this case file review, we started with the population of 10,306 7623(b) claims that had been closed by IRS between January 1, 2012 and July 24, 2017 (the time of our analysis). We identified three subpopulations of whistleblower claims from which we selected the claims we reviewed: 1. All 92 claims involving taxpayers who were identified in a whistleblower claim and who also appeared in IRS’s FBAR Database as having been subject to an FBAR examination. We designated this subpopulation as “Matched Claims.” 2. A random sample of 30 claims from a population of 299 claims that a text search within E-TRAK had identified as likely involving noncompliance with offshore account requirements, including FBAR, and that were not included in other samples. We designated this subpopulation as “Key Terms.” 3. All 10 denied claims closed in E-TRAK, the IRS Whistleblower Office’s claim tracking system, with the closing code “Denied - No Title 26 Collected Proceeds.” We designated this subpopulation as “No Title 26 Collected Proceeds.” Table 4 shows descriptive information about each of these subpopulations. The purpose of our file review was to determine how often whistleblower claims in each of our different subpopulations involved offshore accounts and allegations of FBAR violations. We reviewed all claims in our first and third subpopulations; because of the larger number of claims in the second subpopulation, we selected a random sample for review. For the 132 whistleblower claims in our review, two reviewers coded the content of each file into different categories, including: whether the Form 211, Application for Award for Original Information, included allegations of FBAR noncompliance; whether the whistleblower received a whistleblower award; and what collections were included in collected proceeds for those paid whistleblowers. To the extent there were disagreements among the reviewers’ coding for a file, a third reviewer resolved the differences. 
We agreed on a final coding for all of the data elements collected, recorded them in a summary document, and used these for our analysis. Because whistleblower files were not required to contain information on FBAR penalty assessments or other enforcement actions (although some of the files we reviewed did include this information), we supplemented our file review with data on FBAR enforcement actions, such as penalties and warning letters, from the FBAR Database. We assessed the reliability of the FBAR Database and E-TRAK database for the purposes of using limited data from these databases for our own analysis. We reviewed agency documents, electronically tested data for missing data and outliers, and interviewed IRS officials about these databases. These two databases are the only sources of data within IRS for whistleblower claims information and FBAR enforcement actions and outcomes. We compared data in both databases to identify individuals that were both named by a whistleblower and subject to an FBAR enforcement action. We used data from the FBAR Database for the purpose of identifying and summarizing FBAR enforcement actions taken by IRS, and we used data from the E-TRAK database to identify whistleblower claims that were likely to include allegations of FBAR noncompliance. IRS officials told us that the FBAR Database is the most reliable data source at IRS for individuals who were subject to such FBAR enforcement actions as penalty assessments. We discuss the limitations of these databases in this report, but we concluded that the elements we used in our analyses were sufficiently reliable for the purposes of identifying a sample of whistleblower claims likely to include allegations of FBAR noncompliance and FBAR enforcement outcomes. We also interviewed IRS officials concerning the processing of claims and the operation and maintenance of the E-TRAK and FBAR databases. 
For the second objective, we reviewed relevant portions of the Internal Revenue Manual and other IRS internal guidance and documentation and interviewed officials from IRS’s Whistleblower Office and operating divisions that handle whistleblower claims about what IRS does when it receives information from whistleblowers that includes allegations of FBAR noncompliance. We also reviewed the recently enacted statutory provisions concerning the definition of collected proceeds on which whistleblower awards are based. In addition, we spoke with IRS Whistleblower Office officials concerning any changes IRS plans to make in its policies and procedures as a result of the statutory change. For our third objective, we evaluated IRS’s FBAR Database to identify any control deficiencies, using as criteria Standards for Internal Control in the Federal Government, the Federal Information Security Modernization Act of 2014, and National Institute of Standards and Technology Special Publication 800-53. We electronically tested the FBAR Database for missing data, outliers, and obvious errors. We also reviewed IRS documentation on the database. In addition, we interviewed IRS officials responsible for maintaining and using the database to determine how IRS uses the data, what controls are in place, and any known limitations of the database. We also met with IRS officials and discussed the ongoing development of plans for improvement of the database. For our fourth objective, we interviewed a nonprobability sample of attorneys who have represented multiple whistleblowers who have submitted claims to the IRS Whistleblower Office under section 7623(b). The views expressed in these interviews represented only those of the attorneys who participated and are not generalizable to all whistleblower attorneys or law firms. 
These attorneys have a financial interest in IRS’s treatment of whistleblower claims; however, interviewing these attorneys allowed us to gather broad viewpoints on how whistleblower award exclusions may affect their professional decisions and the decision of their clients and prospective clients. We began with whistleblower attorneys whom we previously spoke with for our 2011 and 2015 reports on the IRS Whistleblower Office and requested from those attorneys names of other attorneys currently active in the IRS whistleblower community who have represented clients who submitted allegations that included FBAR noncompliance. We individually interviewed 11 attorneys from nine firms, asking the same questions of each to obtain their perspectives on the effect the exclusion of FBAR penalties and criminal fines has on the nature and volume of whistleblower complaints and on the cases they bring forward. We also attended a regularly scheduled meeting of attorneys representing whistleblowers, including some we had spoken with and several others. Following the enactment of statutory provisions defining collected proceeds for the purpose of whistleblower awards to include FBAR penalties and other non-Title 26 collections, we contacted the 11 attorneys we had previously interviewed for their views on the effect of the new legislation, and we received written responses from 8 of them. For balance, we also analyzed data on FBAR penalty and tax assessments for a sample of taxpayers who were assessed an FBAR penalty in calendar years 2010 through 2015. For all taxpayers in our sample, we identified those where a whistleblower was involved in providing IRS information about the taxpayer and those where there was no whistleblower presence. 
Using a nonparametric Wilcoxon-Mann-Whitney test, we analyzed whether the proportion of FBAR penalty assessments relative to total tax and FBAR penalty assessments differed to a statistically significant degree depending on whether a whistleblower was involved. This analysis did not control for other factors that could affect the results, such as the taxpayer being willfully noncompliant with FBAR reporting requirements, the total tax assessment of the taxpayer, or the total income of the taxpayer. In addition, we interviewed IRS Whistleblower Office officials and operating division officials to discuss the relative complexity of claims involving and not involving FBAR and how the exam teams use whistleblower information related to FBAR noncompliance. The performance audit upon which this report is based was conducted from March 2017 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with IRS from August 2018 to September 2018 to prepare this public version of the original sensitive report for public release. This public version was also prepared in accordance with these standards. Appendix II: Comments from the Internal Revenue Service Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tara Carter (Assistant Director), Danielle N. Novak (Analyst-in-Charge), James Ashley, Steven J. Berke, David Blanding, Amy Bowser, Andrew Emmons, Steven Flint, and Kayla Robinson made key contributions to this report.
Why GAO Did This Study Tax whistleblowers who report on the underpayment of taxes by others have helped IRS collect $3.6 billion since 2007, according to IRS. IRS pays qualifying whistleblowers between 15 and 30 percent of the proceeds it collects as a result of their information. However, until February 9, 2018, IRS did not pay whistleblowers for information that led to the collection of FBAR penalties. GAO was asked to review how often and to what extent whistleblower claims involve cases where FBAR penalties were also assessed. Among other objectives, this report (1) describes the extent to which FBAR penalties were included in whistleblower awards prior to the statutory change in definition of proceeds; (2) examines how IRS used whistleblower information on FBAR noncompliance, and how IRS responded to the statutory change in definition of proceeds; and (3) describes the purposes for which IRS collects and uses FBAR penalty data, and assesses controls for ensuring data reliability. GAO reviewed the files of 132 claims closed between January 1, 2012, and July 24, 2017, that likely included FBAR allegations; analyzed IRS data; reviewed relevant laws and regulations, and IRS policies, procedures and publications; and interviewed IRS officials. What GAO Found Prior to February 9, 2018, when Congress enacted a statutory change requiring the Internal Revenue Service (IRS) to include penalties for Report of Foreign Bank and Financial Accounts (FBAR) violations in calculating whistleblower awards, IRS interpreted the whistleblower law to exclude these penalties from awards. However, GAO found that some whistleblowers provided information about FBAR noncompliance to IRS. In a sample of 132 whistleblower claims closed between January 2012 and July 2017, GAO found that IRS assessed FBAR penalties in 28 cases. It is unknown whether the whistleblower's information led IRS to take action in all of these cases. These penalties totaled approximately $10.7 million. 
Had they been included in whistleblower awards, total awards could have increased up to $3.2 million. Over 97 percent of the FBAR penalties collected from these 28 claims came from 10 cases with willful FBAR noncompliance, for which higher penalties apply. IRS forwards whistleblower allegations of FBAR noncompliance to its operating divisions for further examination. However, IRS Form 11369, a key form used for making award determinations, does not require examiners to include information about the usefulness of a whistleblower's information on FBAR and other non-tax issues. After Congress enacted the statutory change, IRS suspended award determinations for 1 week, but resumed the program before updating the form or its instructions, or issuing internal guidance on new information required on the form. As of June 28, 2018, IRS had not begun updating the Form 11369 or its instructions. The lack of clear instructions on the form for examiners to include information on FBAR and other non-tax enforcement collections may result in relevant information being excluded from whistleblower award decisions. IRS maintains FBAR penalty data in a standalone database. It uses these data for internal and external reporting and to make management decisions. Because of the change in statute, IRS will need these data for determining whistleblower awards. GAO found that IRS does not have sufficient quality controls to ensure the reliability of FBAR penalty data. For example, IRS staff enter data into the database manually but there are no secondary checks to make sure the data entered are accurate. Without additional controls for data reliability, IRS risks making decisions, including award determinations, with incomplete or inaccurate data. This is a public version of a sensitive report issued in August 2018. Information on the FBAR Database that IRS deemed to be sensitive has been omitted. 
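The "up to $3.2 million" figure above follows from the statutory award range applied to the FBAR penalties in these cases; a minimal arithmetic check (figures approximate, taken from the report):

```python
# Qualifying whistleblowers receive 15 to 30 percent of collected
# proceeds. Applying that range to the ~$10.7 million in FBAR
# penalties from the 28 cases gives the "up to $3.2 million" figure.
fbar_penalties = 10_700_000  # approximate total penalties assessed
low = 0.15 * fbar_penalties   # ~$1.6 million at the 15 percent floor
high = 0.30 * fbar_penalties  # ~$3.2 million at the 30 percent ceiling
print(f"${low:,.0f} to ${high:,.0f}")
```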
What GAO Recommends GAO recommends IRS update IRS Form 11369 and improve controls for the reliability of FBAR penalty data. IRS agreed with all of GAO's recommendations.
Background DHS’s National Protection and Programs Directorate leads the country’s effort to protect and enhance the resilience of the nation’s physical and cyber infrastructure. The directorate includes the Office of Infrastructure Protection, which leads the coordinated national effort to reduce risk to U.S. critical infrastructure posed by acts of terrorism. Within the Office of Infrastructure Protection, ISCD leads the nation’s effort to secure high-risk chemical facilities and prevent the use of certain chemicals in a terrorist act on the homeland; ISCD also is responsible for implementing and managing the CFATS program. The CFATS program is intended to ensure the security of the nation’s chemical infrastructure by identifying high-risk chemical facilities, assessing the risk posed by them, and requiring the implementation of measures to protect them. Section 550 of the DHS Appropriations Act, 2007, required DHS to issue regulations establishing risk-based performance standards for chemical facilities that, as determined by DHS, present high levels of risk, to include vulnerability assessments and the development and implementation of site security plans for such facilities. DHS published the CFATS interim final rule in April 2007 and Appendix A to the rule, published in November 2007, lists 322 chemicals of interest and the screening threshold quantities for each. According to DHS, subject to certain statutory exclusions, all facilities that manufacture, store, ship, or otherwise use chemicals of interest above certain threshold quantities and concentrations are subject to CFATS reporting requirements. However, only those facilities subsequently determined to present a high level of security risk are subject to the more substantive requirements of the CFATS regulation as described below. The CFATS Regulation and Process The CFATS regulation outlines a specific process for how ISCD is to administer the CFATS program. 
A chemical facility that possesses any of 322 chemicals of interest in quantities that meet or exceed a threshold quantity and concentration is required to complete what is called a Top-Screen survey using ISCD’s Chemical Security Assessment Tool (CSAT) system. CSAT is a web-based application through which owners and operators of chemical facilities provide self-reported information about the facility. The Top-Screen is an on-line survey whereby the facility is to provide DHS various data, including the name and location of the facility and the chemicals, quantities, and storage conditions at the site. ISCD uses a risk-based approach to evaluate chemical facilities of interest that are required to report under CFATS and determine whether these facilities are high-risk and therefore subject to further requirements under the regulation. More specifically, ISCD’s risk assessment methodology calculates risk scores—based on facility-supplied information in the Top-Screen survey, among other sources, and taking into account vulnerability, potential consequences, and threat of a terrorist attack—and uses these scores to determine which facilities are high-risk. Those facilities deemed high-risk are then placed into one of four risk-based tiers (Tier 1 through Tier 4). Tier 1 represents the highest risk. A facility not designated as high-risk is not subject to additional requirements under the CFATS regulation. If ISCD determines that a facility is high-risk (Tier 1–4), the facility must then complete and submit to ISCD a Security Vulnerability Assessment and one of two types of security plans—a Site Security Plan or an Alternative Security Program—which describes the existing and planned security measures to be implemented in order to be in compliance with the applicable risk-based performance standards. Facilities determined to be Tier 3 or 4 also have an option to submit an expedited security plan under the CFATS Expedited Approval Program. 
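The scoring-and-tiering step described above can be sketched in outline. Everything in this example is hypothetical: the scoring functions, scales, and tier cutoffs are invented for illustration; ISCD's actual CFATS methodology combines threat, vulnerability, and consequence in its own, nonpublic way.

```python
# Purely illustrative sketch of risk-based tiering. The functions and
# thresholds below are hypothetical, not ISCD's actual methodology.

def composite_risk(threat, vulnerability, consequence):
    # Hypothetical combination of the three risk elements, each scaled
    # 0-1; illustrates only that all three elements feed the score.
    return 100 * threat * vulnerability * consequence

def assign_tier(risk_score, thresholds=(90, 70, 50, 30)):
    """Map a composite risk score to Tier 1 (highest risk) through
    Tier 4, or None if the facility is not high-risk."""
    for tier, cutoff in enumerate(thresholds, start=1):
        if risk_score >= cutoff:
            return tier
    return None  # not high-risk; no further CFATS requirements

score = composite_risk(threat=0.9, vulnerability=0.95, consequence=0.9)
tier = assign_tier(score)
# Higher scores map to lower-numbered (higher-risk) tiers.
print(f"score={score:.1f}, tier={tier}")
```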
To meet risk-based performance standards, covered facilities may choose the security programs or processes they deem appropriate so long as ISCD determines that the facilities achieve the requisite level of performance on each of the applicable areas in their existing and agreed-upon planned measures. Prior to approving a facility’s security plan, ISCD inspectors conduct an authorization inspection at the facility to verify and validate that the content listed in their plan is accurate and complete; that existing and planned equipment, processes, and procedures are appropriate and sufficient to meet the established requirements of the risk-based performance standards; and to assist the facility in resolving any potential gaps identified. After the facility’s security plan is approved, the facility enters into the CFATS compliance cycle, which includes regular and recurring compliance inspections. ISCD inspectors conduct compliance inspections to ensure the existing and planned security measures identified in a facility’s approved security plan continue to be implemented fully; the equipment, processes, and procedures described in the security plan are appropriate and sufficient to meet the established performance standards; and the required corrective actions have been implemented and are sustainable. This compliance inspection includes a verification of other data provided to ISCD, including the Top-Screen. If, through a compliance inspection, ISCD determines a facility has not fully implemented security measures as outlined in its approved security plan, ISCD is to provide the facility with written notification that clearly identifies the deficiencies in the plan and will work with the facility toward achieving full compliance or, if warranted, take enforcement action. Figure 1 illustrates the CFATS regulatory process. 
ISCD Has Strengthened Its Processes for Identifying High-Risk Chemical Facilities ISCD Implemented Processes to Verify Self- Reported Information from Chemical Facilities In response to our prior recommendations, ISCD has taken action to strengthen its processes for verifying the accuracy of data it uses to identify high-risk chemical facilities. In July 2015, we found that ISCD used self-reported and unverified data to determine the risk categorization for facilities that held toxic chemicals that could threaten surrounding communities if released. At the time, ISCD required that facilities self-report the Distance of Concern—an area in which exposure to a toxic chemical cloud could cause serious injury or fatalities from short-term exposure—as part of its Top-Screen methodology. In our report, we estimated that more than 2,700 facilities with a toxic release threat misreported the Distance of Concern and recommended that ISCD (1) develop a plan to implement a new Top-Screen to address errors in the Distance of Concern submitted by facilities, and (2) identify potentially miscategorized facilities that could cause the greatest harm and verify that the Distance of Concern these facilities reported is accurate. ISCD has addressed both of these recommendations. In response to the first recommendation, ISCD implemented an updated Top-Screen survey in October 2016 and now collects data from facilities and conducts more accurate modeling to determine the actual area of impact (formerly called the Distance of Concern), rather than relying on the facilities’ calculation. In response to the second recommendation, ISCD officials reported in November 2016 that they reassessed all facility Top-Screens that reported threshold quantities of chemicals posing a toxic release threat, and identified 158 facilities with the potential to cause the greatest harm. 
In April 2018, ISCD officials reported that all of these facilities have since been reassessed using updated Top-Screen information and, where appropriate, assigned a risk tier. In addition, in October 2016, ISCD implemented a quality assurance review process whereby ISCD officials manually check and verify the accuracy of facility self-reported Top-Screen information used in identifying potential high-risk facilities. The objective of ISCD’s review process is to evaluate the information provided by a chemical facility in order to recommend approval or rejection of a submitted Top-Screen for accuracy prior to issuing a letter notifying the facility of its risk tier designation. According to ISCD, all Top-Screens undergo a quality assurance review with two exceptions: (1) a facility that registers through CSAT for the first time and submits a Top-Screen identifying zero chemicals of interest on site and which does not identify an exclusion; or (2) a facility that possessed a chemical of interest in the past but subsequently submits a follow-up Top-Screen for redetermination identifying that it no longer possesses the chemical of interest, once ISCD validates the removal of the chemical. When a Top-Screen submission is rejected, ISCD sends a letter notifying the facility of the rejection and requesting that a revised Top-Screen be submitted. In addition, according to ISCD, they contact facilities prior to a Top-Screen rejection to ensure the facility understands the required updates and to discuss the potential reporting error. As of February 2018, a total of 1,956 Top-Screen submissions (across 1,799 unique facilities) were rejected as part of this quality assurance review process since implementing the updated Top-Screen survey in October 2016, according to ISCD data. 
According to ISCD, the majority of these Top-Screens were rejected due to common reporting errors, such as misreporting the flammability hazard rating for a chemical of interest subject to a release security issue or not reporting transportation packaging when a chemical of interest is identified as being subject to a theft or diversion security issue. ISCD Has Nearly Completed Applying Its Revised Risk Assessment Methodology for Designating High-Risk Chemical Facilities ISCD Revised Its Risk Assessment Methodology to More Accurately Identify and Assign Tiers to High-Risk Chemical Facilities Since we last evaluated it in 2013, ISCD took action to enhance the CFATS program’s risk assessment methodology—used to determine whether covered chemical facilities are high-risk and, if so, assign them a risk-based tier—by incorporating changes to address prior GAO recommendations, as well as the findings of an ISCD-commissioned peer review conducted in 2013, among other efforts. In April 2013, we found that DHS’s risk assessment approach did not consider all of the elements of threat, vulnerability, and consequence associated with a terrorist attack involving certain chemicals. Our work showed that DHS’s CFATS risk assessment methodology was based primarily on consequences from human casualties, but did not consider economic consequences, as called for by the NIPP and the CFATS regulation. We also found that DHS’s approach was not consistent with the NIPP because it treated every facility as equally vulnerable to a terrorist attack regardless of location or on-site security. In addition, DHS was not using threat data for 90 percent of the tiered facilities—those tiered for the risk of theft or diversion—and using 5-year-old threat data for the remaining 10 percent of those facilities that were tiered for the risks of release or sabotage. 
We recommended that ISCD (1) review and improve its risk assessment approach to fully address each of the elements of threat, vulnerability, and consequence, and (2) conduct an independent peer review after enhancements to the risk assessment approach were complete. Partly in response to our findings and recommendations, from 2013 through 2016, ISCD conducted a multiyear effort to review and improve the CFATS program’s risk assessment approach and tiering methodology with the primary goal of improving the identification and appropriate tiering of high-risk chemical facilities. Among these efforts was an ISCD-commissioned peer review of the CFATS tiering methodology conducted in 2013 by the Homeland Security Studies and Analysis Institute (HSSAI). HSSAI’s final report summarized the findings of the peer review and included a list of 44 recommendations for ISCD to implement in its efforts to improve and revise the CFATS risk assessment and tiering methodology. ISCD undertook a risk assessment improvement project to implement most of the recommendations described in the 2013 HSSAI final report; these efforts included, for example, convening advisory board meetings with experts drawn from across industry, academia, and government to review and make additional recommendations on the proposed improvements to the CFATS risk assessment methodology and associated tools and processes. The result of these efforts is an updated, “second generation” risk assessment approach and tiering methodology that addresses both of our prior recommendations and almost all of the recommendations described in the 2013 HSSAI final report. Specifically, with regard to our recommendation that DHS enhance its risk assessment approach to incorporate all elements of risk, ISCD worked with Sandia National Laboratories to develop and evaluate a model to estimate the economic consequences of a chemical attack. 
In addition, among other enhancements, the updated risk assessment methodology incorporates revisions to the threat, vulnerability, and consequence scoring methods to better cover the full range of chemical security issues regulated by the CFATS program. Additionally, with regard to our recommendation that DHS conduct a peer review after enhancing its risk assessment approach, DHS conducted peer reviews and technical reviews with government organizations and facility owners and operators, and worked with Sandia National Laboratories to verify and validate the CFATS program’s revised risk assessment methodology which was completed in January 2017. In addition, as of May 2018, ISCD has considered, implemented, or is in the process of implementing updates that address 39 of the 44 recommendations in the HSSAI peer review of the original CFATS risk assessment methodology. According to ISCD, DHS must undertake a rulemaking to update the CFATS regulation and to obtain public comment on any proposed changes to implement the remaining recommendations. These relate to possible changes in how or to what extent the CFATS program regulates the treatment of certain chemicals of interest, chemical weapons and their precursors, and other fuels or fuel mixtures. Implementation of the Revised Risk Assessment Methodology Is Nearly Complete Beginning in October 2016, ISCD notified chemical facilities that were not new to the CFATS program—that is, all facilities that had previously submitted a Top-Screen and had reported chemicals of interest above the threshold quantity and concentration on their most recent Top-Screen—to submit a revised Top-Screen in CSAT 2.0 so that they may be reassessed using ISCD’s revised risk assessment methodology. As of February 2018, a total of 29,195 chemical facilities were assessed using ISCD’s revised risk assessment methodology, with 3,500 (or 12 percent) of these facilities designated as high-risk (i.e., assigned to tiers 1 through 4). 
The total of 29,195 chemical facilities includes 26,828 facilities that were previously assessed using the original risk assessment methodology and an additional 2,367 facilities new to the CFATS program, as shown in figure 2. Of the 3,500 tiered facilities, 265 were new to the CFATS program; 889 were not new to the program, but were previously not tiered and were reassessed as high-risk and assigned a tier; and 1,345 were previously tiered but were reassigned to a different tier. Also, 430 facilities that were previously tiered were no longer tiered. As of May 2018, ISCD had pending risk assessments for an additional 241 chemical facilities that were not new to the CFATS program but were not previously tiered. ISCD officials did not provide an estimated target completion date for these pending risk assessments, noting that completing the assessments is highly dependent on the facilities providing the necessary Top-Screen information. According to ISCD, there are four main drivers of the changes in facility tiering that resulted from implementing the second-generation risk assessment methodology:

- facilities placed in a lower tier due to implementation of revised consequence scoring methods that more accurately account for the impact of quantities of the chemicals subject to theft/diversion security issues;
- facilities placed in a higher or lower tier for chemicals of interest due to improvements to the distribution of population in consequence modeling for chemicals subject to release-toxic and release-flammable security issues;
- increases in the number of facilities tiered for select chemical weapon precursors due to the implementation of revised consequence scoring methods that more accurately account for the impact of certain chemicals of interest; and
- changes in tiering due to newly reported increases, decreases, and modifications of chemical holdings.
ISCD Has Made Progress Conducting Compliance Inspections but Does Not Measure Reduction in Facility Vulnerability

ISCD Has Increased the Number of Completed Compliance Inspections and Issued Two Corrective Actions for Noncompliance with Security Plans

Since 2013, ISCD has reduced its backlog of unapproved site security plans and increased the number of compliance inspections conducted. As discussed earlier, in order to approve a facility's site security plan, ISCD inspectors conduct an authorization inspection at the facility to verify and validate that the content listed in the plan is accurate and complete; that existing and planned equipment, processes, and procedures are appropriate and sufficient to meet the established requirements of the risk-based performance standards; and to assist the facility in resolving any potential gaps identified. After the facility's security plan is approved, the facility enters into the CFATS compliance cycle and is subject to a compliance inspection. In 2013, we calculated that it could take from 7 to 9 years to review and approve the approximately 3,120 site security plans submitted by facilities that had been designated as high-risk but that ISCD had not yet begun to review. In 2015, we found that ISCD had made improvements to its processes for reviewing and approving site security plans and substantially reduced the time needed to approve remaining site plans to between 9 and 12 months. Our analysis of ISCD data since our 2015 report showed that ISCD has made substantial progress conducting and completing compliance inspections. Specifically, our analysis showed that ISCD has increased the number of compliance inspections completed per year since ISCD began conducting compliance inspections in 2013. For the 2,466 high-risk facilities with an approved site security plan as of May 2018, ISCD had conducted 3,553 compliance inspections.
Table 1 shows the number of compliance inspections conducted from fiscal year 2014 through May 2018. ISCD officials project they will conduct fewer compliance inspections in fiscal year 2018 than in fiscal year 2017 for two reasons. First, ISCD officials stated the program made progress resolving the backlog of facilities that required compliance inspections in fiscal years 2016 and 2017, when it conducted over 2,600 compliance inspections. Second, ISCD officials stated that the program's revised risk assessment approach and continued outreach efforts have resulted in an increase in the number of identified facilities with chemicals of interest. As a result, ISCD officials stated they project an increased number of authorization inspections and fewer compliance inspections in fiscal years 2018 and 2019 as new facilities enter the program. ISCD increased the number of compliance inspections conducted from fiscal years 2014 to 2017, and less than 1 percent of compliance inspections during this period resulted in a determination that a facility was not in compliance. During a compliance inspection, if an inspector finds that a facility is noncompliant with its security plan, the CFATS regulation authorizes ISCD to take enforcement action, such as issuing an order for corrective action to the facility. Of the 3,553 compliance inspections ISCD conducted between fiscal year 2014 and May 2018, ISCD issued two corrective actions—both to Tier 4 facilities—because these facilities were not in compliance with their approved site security plans. Specifically, during the compliance inspection of one facility, which was determined to be high-risk based on both the release and theft/diversion security issues, ISCD found that the facility's site security plan did not identify several existing or planned measures to secure the facility's chemicals of interest.
For example, the facility’s site security plan did not identify measures to monitor restricted areas or potentially critical targets within the facility against a theft or release attack. In addition, while the facility’s site security plan identified a chain link fence and an alarm on a gate to a secure cage that houses the chemicals of interest, ISCD inspectors found no evidence of either. During the compliance inspection of the second facility, which was determined to be high-risk based on the theft and diversion security issue, ISCD inspectors were unable to verify if the facility’s intrusion detection system was properly functioning and that an individual not employed by the facility may have had access to the facility’s chemicals of interest without a proper background check. Both of these facilities took actions to implement the measures identified in their site security plan and were later found to be in compliance with their site security plans. ISCD officials attribute the low number of corrective actions the program has issued to the program’s collaborative approach of working with facilities to ensure compliance. For example, of the two facilities ISCD found to be in noncompliance, ISCD conducted a compliance assistance visit with both facilities to provide assistance. In addition to compliance assistance visits, ISCD officials stated that the program has other collaborative tools, such as the CFATS Help Desk, to help ensure facility compliance. ISCD Continues to Implement Changes to Compliance Inspections and Improve Efficiency ISCD continues to implement changes that are intended to enhance compliance inspections. For example, ISCD officials stated the program continues to conduct preinspection phone calls with facilities to help them prepare for compliance inspections. In addition, ISCD officials stated they developed and provided supplemental guidance in fiscal year 2017 on steps ISCD inspectors need to take during a compliance inspection. 
ISCD’s supplemental guidance includes, among other things, best practices and lessons learned for conducting inspections and reporting items identified by the inspections. ISCD officials stated they plan to incorporate this supplemental guidance into their compliance inspection standard operating procedures in the third quarter of fiscal year 2018 and to update their compliance inspection handbook in the fourth quarter of 2018. In addition to updating its guidance for inspectors, ISCD has taken steps to improve the efficiency of compliance inspections. For example, ISCD continues its outreach efforts to chemical facilities on the inspection process. As part of these efforts, ISCD published guidance for facilities on steps to take to prepare for the compliance inspection, including information on the appropriate personnel and documentation that should be made available during the inspection. Finally, ISCD increased the number of compliance assistance visits with facilities to better prepare them for inspections. Representatives from 9 of the 11 industry associations we spoke with told us that ISCD’s communication with facilities had improved the efficiency of compliance inspections and increased the ability of facilities to comply with the risk-based performance standards. We accompanied inspectors on two separate compliance inspections to observe how the inspections were carried out and how inspectors used the risk-based performance standards to determine compliance. For example, during the compliance inspection of a facility identified as high- risk based on the theft and diversion security issue, we observed facility personnel and ISCD inspectors discussing the preinspection phone call ISCD had conducted to assist the facility in preparation for their compliance inspection. This discussion included confirmation that the facility communicated with the local fire and police departments and had requested their presence at the inspection. 
In addition, we observed the inspectors analyzing the facility's emergency response plan to determine whether the facility's plans were consistent with the applicable risk-based performance standards. We also observed the inspectors subsequently interviewing local fire and police department officials who were present during the inspection to validate statements made by the facility and to confirm that both entities received the facility's emergency plan. We accompanied the inspectors and facility personnel on a tour of the facility, where inspectors observed existing measures the facility used to protect the chemicals of interest, including the facility's fencing barrier. We also observed inspectors testing security measures, including the facility's access controls put in place to prevent unauthorized personnel from gaining access to the chemicals of interest. At the other compliance inspection we observed, facility personnel and ISCD inspectors confirmed a preinspection phone call to prepare the facility for the inspection. This phone call included a discussion of the appropriate training records and contract documentation that inspectors needed to confirm compliance with the applicable risk-based performance standard. During the inspection, we observed that the facility made this documentation and the appropriate personnel available to answer ISCD inspector questions on the security training the facility held during the prior year. We also observed inspectors verifying that existing measures, such as the facility's fence barrier, were still present and had not been breached. In addition, we observed the inspectors testing key cards to the building that housed the chemicals of interest to ensure the cards prevented unauthorized access.
Finally, we observed inspectors requesting a demonstration of how the facility's chemicals of interest are delivered to the facility and what controls were in place to monitor third-party contractors during delivery of chemicals of interest. We also discussed the compliance inspection process with representatives from trade associations that represent facilities covered by CFATS and considered high-risk. Representatives from 7 of the 11 trade associations that we spoke with stated that ISCD's implemented changes have improved the compliance inspection process since the program's inception. Specifically, representatives from 3 trade associations stated that ISCD inspectors' efforts to increase communication with facilities, including preinspection phone calls and compliance assistance visits, have increased the ability of facilities to ensure they are compliant with their approved site security plans. However, representatives from 3 of the 11 trade associations we spoke with also noted some issues with the compliance inspection process. Specifically, officials from these 3 associations stated that ISCD inspectors inconsistently apply the risk-based performance standards relative to the measures the facilities implemented. Some of this inconsistency may be due, in part, to the flexibility inherent in the risk-based performance standards, which give facilities the discretion or latitude to tailor security based on conditions and circumstances. For example, the amount and type of chemicals of interest may vary by facility, so some facilities may require additional security measures be put in place to ensure protection of these chemicals. In addition, facilities vary by geographic location, which may affect the measures the facility needs to implement to protect the chemicals of interest from potential theft or diversion.
DHS officials stated that they believe any perceived inconsistency is due to the flexibility in application of the risk-based performance standards and the variety of facility conditions that contribute to the appropriateness of different security measures. Officials explained that, for example, inspectors would likely recommend that a large campus-type facility not invest in a perimeter fence line but instead utilize asset-based barriers to satisfy the performance standards. Officials noted that facilities can choose to employ security measures that best fit their specific situation and can request that inspectors provide multiple options for their consideration.

DHS's Methodology for Measuring Changes in Facility Site Security Does Not Reflect Reduction in Vulnerability

ISCD developed its performance measure methodology for the CFATS program in order to evaluate security changes made by high-risk chemical facilities, but the methodology does not measure the program's impact on reducing a facility's vulnerability to an attack. In 2015, we found that while ISCD's performance measure for the CFATS program was intended to reflect security measures implemented by facilities and the overall impact of the CFATS regulation on facility security, the metric did not solely capture security measures that are implemented by facilities and verified by ISCD. We recommended that DHS develop a performance measure that includes only planned security measures that have been implemented and verified. In response to our finding and recommendation, ISCD's performance measure requires that ISCD officials verify that planned measures have been implemented in accordance with the approved site security plan (or alternative security program) by compliance inspection or other means before inclusion in the performance measure calculation. ISCD has since decided to develop a new methodology and performance measure for the CFATS program.
In 2016, ISCD began development of an approach called the guidepost-based site security plan scoring methodology. ISCD officials stated they plan to use the methodology to evaluate the security measures a facility implemented from its initial state—when the facility submits its initial site security plan—to its approved site security plan. Officials stated that once implemented, the methodology's resulting performance measure will be maintained internally and, if approved, may be used to satisfy the program's reporting requirements consistent with the Government Performance and Results Act (GPRA) and included in DHS's Annual Performance Report. The methodology organizes a facility's security measures based on five guideposts. Using the five guideposts as a framework, the security measures a facility reports in its site security plan are evaluated by ISCD under the applicable guidepost to determine the level of security performance. For example, the plan contains a question on whether a facility has a perimeter fence barrier and, if so, what type, such as a chain link fence, metal fence, or vinyl fence. ISCD uses the facility's responses to assign a numerical value that indicates the level of security performance for the type of fence a facility uses as a perimeter barrier. The scores of the five guideposts are then aggregated, and the resulting score represents the site security plan score for the facility. Officials stated that a facility's site security plan score is developed when the facility submits its initial site security plan and again when ISCD approves its site security plan and the facility has completed the CFATS inspection process. ISCD officials stated the purpose of the methodology is to measure the increase in security attributed to the CFATS program and stated that the methodology is not intended to measure risk reduction.
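The scoring described above amounts to aggregating per-guidepost values into one plan score, computed once at initial submission and again at approval, with the difference representing the security gain. The following is a minimal sketch of that aggregation only; the guidepost names and point values are hypothetical, since the report does not publish ISCD's actual scoring rules.

```python
# Minimal sketch of a guidepost-style site security plan score.
# Guidepost names and point values are hypothetical -- the report
# does not publish ISCD's actual scoring rules.

def plan_score(guidepost_scores):
    """Aggregate per-guidepost scores into a single site security plan score."""
    return sum(guidepost_scores.values())

# Scores assigned when the facility submits its initial site security plan...
initial = {"detection": 2, "delay": 3, "response": 2, "cyber": 1, "policies": 2}
# ...and again once ISCD approves the plan after inspection.
approved = {"detection": 4, "delay": 4, "response": 3, "cyber": 3, "policies": 3}

baseline = plan_score(initial)    # 10
final = plan_score(approved)      # 17
print(f"security gain: {final - baseline}")  # security gain: 7
```

Note that a difference computed this way captures security measures added, not the facility's remaining vulnerability to attack.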
As a result, the methodology and resulting performance metric do not reflect the program's impact on reducing a facility's vulnerability to an attack. While ISCD officials stated the program is exploring how to use the site security plan scores of a facility, this methodological approach may provide ISCD an opportunity to begin assessing how vulnerability is reduced and, by extension, risk is lowered, not only for individual facilities but for the program as a whole. The NIPP calls for evaluating the effectiveness of risk management efforts by collecting performance data to assess progress in achieving identified outputs and outcomes. The purpose of the CFATS program is to ensure facilities have security measures in place to reduce the risks associated with certain hazardous chemicals and to prevent these chemicals from being exploited in a terrorist attack. A measure that reflects risk reduction could capture how the CFATS inspection process reduces one element of risk—vulnerability to a terrorist attack—at high-risk facilities. ISCD officials stated that challenges exist with incorporating vulnerability into the measure's methodology, such as how to accurately measure a facility's vulnerability to an attack before the facility started the CFATS inspection process. We recognize the challenges ISCD might face in incorporating vulnerability into its scoring methodology. In our prior work, we acknowledged that assessing the benefits of a program—such as reducing a high-risk facility's vulnerability to an attack—is inherently challenging because it is often difficult to isolate the impact of an individual program on behavior that may be affected by multiple other factors. However, ISCD could take steps to evaluate vulnerability reduction resulting from the CFATS compliance inspection process.
For example, because facilities conduct their own vulnerability assessments when developing their site security plan for submission to ISCD, ISCD could establish a vulnerability baseline score when it evaluates a facility's security measures during its initial review of the facility's plan. ISCD could then use this baseline score as the starting point for assessing any reduction in vulnerability that ISCD can document has occurred as a result of security measures implemented by the facility during the compliance inspection process. As the CFATS program continues to mature and ISCD begins its efforts to assign scores to facility site security plans, incorporating assessments of reductions in vulnerability at individual facilities and across the spectrum of CFATS facilities as a whole would enable ISCD to better measure the impact of the CFATS compliance inspection process on reducing risk and increasing security nationwide.

First Responders and Emergency Planners May Not Have Information Needed to Respond to Incidents at High-Risk Chemical Facilities

We found over 200 chemicals covered by CFATS that may not be included in the chemical inventory information that officials told us they rely on to prepare for and respond to incidents at chemical facilities. ISCD shares some CFATS information with state and local officials, including access to CFATS facility-specific data via a secure portal; however, this portal is not widely used at the local level by first responders and emergency planners.

First Responders and Emergency Planners May Not Have Sufficient Information to Prepare for and Respond to Incidents at High-Risk Chemical Facilities

First responders and emergency planners may not have the necessary information to prepare for and respond to incidents at high-risk chemical facilities regulated by the CFATS program.
As mentioned earlier, on April 17, 2013, about 30 tons of ammonium nitrate fertilizer—containing a CFATS chemical of interest—detonated during a fire at a fertilizer storage and distribution facility in West, Texas, killing 15 people, including 12 first responders, and injuring more than 260 others. This event, among others, prompted the President to issue Executive Order 13650 to improve chemical facility safety and security in coordination with owners and operators. The Executive Order established a Chemical Facility Safety and Security Working Group and included directives for the working group to, among other things, improve operational coordination with state, local, and tribal partners. The working group created a federal plan of action to improve the safety and security of chemical facilities. One key element of this plan focused on the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA), which was intended to encourage and support emergency planning efforts at the state and local levels. In accordance with EPCRA, state and local entities, such as Local Emergency Planning Committees (LEPCs)—consisting of representatives including local officials and planners, facility owners and operators, first responders, and health and hospital personnel, among others—were created. These LEPCs were designed to (1) prepare for and mitigate the effects of a chemical incident and (2) ensure that information on chemical risks in the community is provided to first responders and the public. The working group acknowledged there was a need to share data with representatives of these state and local entities to enable them to identify gaps and inconsistencies in their existing information that could reveal previously unknown risks in their communities.
For facilities subject to EPCRA requirements, this data includes, among other things, information about chemicals stored or used at the facility, which facilities must report on an emergency and hazardous chemical inventory form submitted to these state and local entities. The working group's federal plan also included a DHS commitment to share certain CFATS data elements with first responders, state agencies, and LEPCs to help communities identify and prioritize risks and develop a contingency plan to address those risks, while acknowledging that access to certain sensitive portions of CFATS data will remain restricted to officials with a "need-to-know" so as to appropriately balance security risks. In our interviews with 15 LEPCs—whose jurisdictions include 373 high-risk chemical facilities regulated by the CFATS program—we found that officials rely on information reported on EPCRA chemical inventory forms to prepare for and respond to incidents at CFATS facilities. These officials may not have sufficient information to respond to emergencies at CFATS facilities because EPCRA reporting requirements may not cover some of the chemicals covered under the CFATS program. Specifically, we analyzed the chemicals covered by both CFATS and EPCRA's reporting requirements and found there are over 200 CFATS chemicals of interest that, depending upon state reporting guidelines, may not be covered by EPCRA reporting requirements. Several of these chemicals may require specific response techniques to minimize the risk of injury or death to first responders and the surrounding community. For example, in the event of fire, aluminum powder, a chemical not subject to EPCRA reporting requirements but regulated under CFATS, produces flammable gases when in contact with water and requires responders to instead use a dry chemical or sand to extinguish the fire.
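The comparison described above is, in essence, a set difference between the two regulatory chemical lists, followed by a count of facilities holding at least one chemical in the gap. The following is a minimal sketch using hypothetical facility inventories and only a handful of chemical names; the actual CFATS Appendix A and EPCRA lists run to hundreds of entries.

```python
# Hypothetical, abbreviated chemical lists and facility inventories --
# the real CFATS and EPCRA lists contain hundreds of chemicals.
cfats_chemicals = {"aluminum powder", "chlorine", "ammonium nitrate", "propane"}
epcra_chemicals = {"chlorine", "ammonium nitrate", "propane"}

# Chemicals regulated under CFATS but not subject to EPCRA reporting.
gap = cfats_chemicals - epcra_chemicals          # {'aluminum powder'}

facilities = {
    "facility_a": {"chlorine"},
    "facility_b": {"aluminum powder", "propane"},
    "facility_c": {"ammonium nitrate"},
}

# Facilities possessing at least one chemical not covered by EPCRA reporting.
affected = [name for name, chems in facilities.items() if chems & gap]
share = len(affected) / len(facilities)
print(affected, f"{share:.0%}")   # ['facility_b'] 33%
```

In this illustration, responders relying only on EPCRA inventory forms would see no record of facility_b's aluminum powder, which is the kind of gap the analysis highlights.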
Based on our analysis of tiered CFATS facilities, we estimate that about 32 percent of these high-risk facilities possess at least one chemical that may not be covered by EPCRA reporting requirements. In addition, we found these LEPCs may lack information on the CFATS facilities in their jurisdictions. Specifically, officials representing 11 of the 15 LEPCs we interviewed said they were not aware of which facilities in their jurisdiction were regulated by the CFATS program. Of these 11 LEPCs, officials from 8 stated it would be very helpful or critical to know this information, and officials from 2 stated it would be somewhat helpful. According to these officials, this information would assist LEPCs, some of which have hundreds of facilities in their jurisdiction, to prioritize the most significant facilities for additional planning or scheduling of drills and exercises. Additionally, officials representing 5 LEPCs stated they were not aware of the differences between CFATS chemicals of interest and those chemicals subject to EPCRA reporting requirements. These LEPC officials stated that, among other things, it is critical to have a comprehensive understanding of all chemicals at a facility and that this information is very important for emergency responders to be aware of when responding to an incident.

ISCD Could Take Additional Action to Share Information about High-Risk Facilities with First Responders and Emergency Planners

Consistent with the CFATS Act of 2014, ISCD is to play a role in ensuring that first responders and emergency planners are properly prepared for and provided with the situational awareness needed to respond to security incidents at high-risk chemical facilities.
While the CFATS Act of 2014 does not specifically require that information be shared directly with first responders, ISCD has taken steps to share CFATS information with state and local officials to help ensure that first responders are prepared to respond to such security incidents. These steps include, among other things, ensuring that facilities are developing and exercising an emergency plan to respond to security incidents internally and with assistance of local law enforcement and first responders. Planning and training are important to ensure that facility personnel, onsite security, law enforcement, and first responders are ready to respond to external and internal security incidents. Additionally, these planning activities and relationships with first responders can assist in reducing the impact of these incidents. According to ISCD officials, to verify compliance with this requirement, ISCD inspectors validate facility outreach to first responders, such as local law enforcement and fire departments, through review of facility documentation, including emails with first responders, records of drills, and logs of meetings and tours, or through direct contact with the local first responders by the inspection team. In addition, the Executive Order 13650 working group sought to, among other things, strengthen community planning and preparedness and ensure that first responders and emergency planners are aware of the risks associated with hazardous chemicals in their communities. Included was a goal to increase information-sharing with communities that are near chemical facilities. In a May 2014 report, this working group identified certain information, including the name and quantity of chemicals at a facility, as the most helpful to first responders and emergency planners. 
This information is intended to enable emergency planners to conduct an analysis to identify gaps and inconsistencies in their existing information that could reveal previously unknown risks in their communities. ISCD has taken action to ensure first responders and emergency planners have access to CFATS data. For example, in response to Executive Order 13650, ISCD shares CFATS data through the Infrastructure Protection (IP) Gateway. This online portal contains critical infrastructure data and analytic tools, including data on covered CFATS facilities, for use by federal officials; state, local, tribal, and territorial officials; and emergency response personnel. CFATS data available in the IP Gateway includes, among other things, facility name, location, risk tier, and chemicals on-site and is accessible to authorized federal and other state, local, tribal, and territorial officials and responders with an established need-to-know. The IP Gateway provides these officials and responders access to CFATS facility-specific information that may be unreported on EPCRA chemical inventory forms. This CFATS facility-specific information can help ensure these groups are properly prepared to respond to incidents at high-risk chemical facilities in their jurisdictions. While the IP Gateway is a mechanism for sharing names and quantities of chemicals at CFATS high-risk facilities with first responders and emergency planners, we found it is not widely used by officials at the local level. ISCD told us that in May 2018 it published three revised fact sheets and included information on the IP Gateway in presentation materials that officials said were intended to increase promotion and use of the IP Gateway. However, according to DHS, there are 14 accounts categorized at the local level whose access to the IP Gateway layer includes the names and quantities of chemicals at CFATS facilities.
A local account indicates the individual with access is a county- or city-level employee or contractor. Additionally, while not generalizable to all LEPCs, officials representing 7 of the 15 LEPCs we interviewed were not aware of the IP Gateway, and officials representing 13 of the 15 LEPCs stated that they do not have access to CFATS information within the IP Gateway. Of the 13 officials who reported they did not have access, 11 said that it would be helpful or critical to have access for several reasons. Specifically, officials representing these LEPCs stated that this information would assist them to better prepare for and respond to incidents and help emergency planners prioritize the most critical sites among the thousands of facilities that they oversee. According to DHS officials, their outreach plan, developed in March 2015, specifically addresses regular engagement with LEPCs, among other groups. However, these officials acknowledged that information may not be reaching some state and local officials due to a number of factors, including the large number of LEPCs and first responders across the country, and changes in the level of LEPC activity and personnel over time. While we recognize these challenges, providing first responders and emergency planners access to CFATS facility-specific information, including the name and quantity of chemicals at a facility, can help ensure these groups are properly prepared to respond to incidents at high-risk chemical facilities in their jurisdictions. The NIPP states that agencies should share actionable and relevant information across the critical infrastructure community—including first responders and emergency planners—to build awareness and enable risk-informed decision making, as these stakeholders are crucial consumers of risk information.
Additionally, the 2015 Emergency Services Sector-Specific Plan, an Annex to the 2013 NIPP, further calls for engaging with local emergency planning organizations, such as LEPCs, to enhance information-sharing and analytical capabilities for incident planning, management, and mitigation between stakeholders. The IP Gateway is one way through which ISCD can share CFATS facility-specific information, including the name and quantity of chemicals at high-risk facilities, with first responders and emergency planners. As discussed earlier, although ISCD is not required to share CFATS facility-specific information directly with first responders, this information is critical to prepare for and respond to incidents at high-risk chemical facilities and to protect responders and their communities from injury or death. By exploring ways to improve information-sharing of CFATS facility-specific data, such as promoting wider use of the IP Gateway among first responders and emergency planners, DHS will have greater assurances that the emergency response community has access to timely information about high-risk chemical facilities.

Conclusions

DHS, through ISCD, has made improvements to the CFATS program. ISCD has taken action to strengthen its processes for verifying the accuracy of data it uses to identify high-risk chemical facilities, revised its risk assessment methodology to more accurately identify and assign high-risk chemical facilities to tiers, and has nearly completed its efforts to apply this new methodology to facilities covered by CFATS. Furthermore, ISCD has conducted an increased number of compliance inspections and continues to make changes to improve the efficiency of the inspection process.
While ISCD has developed a new methodology and performance measure for the CFATS program in order to evaluate security changes made by high-risk chemical facilities, we found that the methodology and metric do not reflect the program’s impact on reducing a facility’s vulnerability to an attack. ISCD may have an opportunity to explore how reductions in vulnerability at individual facilities resulting from the CFATS compliance inspection process could be used to develop an overall measure of the performance of the CFATS program in reducing risk and increasing security nationwide. Such a measure would be consistent with the NIPP, which calls for evaluating the effectiveness of risk management efforts by collecting performance data to assess progress in achieving identified outputs and outcomes. Moving forward, ISCD could also take additional actions to ensure information about high-risk chemical facilities is shared with first responders and emergency planners. During our review, we found that local emergency responders may not have the information they need to adequately respond to incidents at CFATS facilities, a situation that could expose them and their communities to potentially life-threatening situations. While the IP Gateway is a mechanism for sharing names and quantities of chemicals at high-risk facilities with first responders and emergency planners, we found it is not widely used by officials at the local level. The NIPP states that agencies should share actionable and relevant information across the critical infrastructure community—including first responders and emergency planners—to build awareness and enable risk-informed decision making, as these stakeholders are crucial consumers of risk information. 
By improving information-sharing with first responders and emergency planners, such as promoting access to and wider use of the IP Gateway, DHS will have greater assurances that the emergency response community has access to timely information about high-risk chemical facilities that could help protect them from serious injury or death. Recommendations for Executive Action We are making the following two recommendations to DHS: The Director of ISCD should incorporate vulnerability into the CFATS site security scoring methodology to help measure the reduction in the vulnerability of high-risk facilities to a terrorist attack, and use that data in assessing the CFATS program’s performance in lowering risk and enhancing national security. (Recommendation 1) The Assistant Secretary for Infrastructure Protection, in coordination with the Director of ISCD, should take actions to encourage access to and wider use of the IP Gateway and explore other opportunities to improve information-sharing with first responders and emergency planners. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reproduced in full in appendix I, and technical comments, which we incorporated as appropriate. In its comments, DHS concurred with both recommendations and outlined efforts underway or planned to address them. Regarding the first recommendation that ISCD should incorporate vulnerability into the CFATS site security scoring methodology to help measure the reduction in the vulnerability of high-risk facilities and use that data to further assess the CFATS program’s performance in lowering risk and enhancing national security, DHS concurred but noted that developing a system that could numerically evaluate vulnerabilities will be challenging. 
DHS stated that implementing the recommendation would likely require, among other things, revising the regulatory language describing CFATS vulnerability assessments and updating tools used to gather them, potentially creating a significant burden on both industry and government. DHS added that its new proposed performance metric, described earlier in this report, demonstrates the enhancement to national security resulting from the CFATS program and, by extension, the program’s impact on vulnerability and overall risk. As stated earlier, we recognize challenges ISCD might face in incorporating vulnerability into its scoring methodology. In our prior work, we acknowledged that assessing the benefits of a program—such as reducing a high-risk facility’s vulnerability to an attack—is inherently challenging because it is often difficult to isolate the impact of an individual program on behavior that may be affected by multiple other factors. However, in order to fully implement this recommendation, ISCD needs to consider steps it can take to evaluate vulnerability reduction resulting from the CFATS compliance inspection process without revising the regulation or creating a significant burden on both industry and government. We noted, for example, that ISCD could establish a vulnerability baseline score when it evaluates a facility’s security measures during its initial review of the facility’s site security plan. ISCD could then use this baseline score as the starting point for assessing any documented reduction in vulnerability that occurs as a result of security measures implemented by the facility during the compliance inspection process. 
As the CFATS program continues to mature and ISCD begins its efforts to assign scores to facility site security plans, incorporating assessments of reductions in vulnerability at individual facilities and across the spectrum of CFATS facilities as a whole would enable ISCD to better measure the impact of the CFATS compliance inspection process on reducing risk and increasing security nationwide. Regarding the second recommendation that the Office of Infrastructure Protection and ISCD take actions to encourage access to and wider use of the IP Gateway and explore other opportunities to improve information-sharing with first responders and emergency planners, DHS stated that it has various outreach activities underway, among other information-sharing efforts, to either directly share or ensure that high-risk chemical facilities are sharing CFATS information with first responders and emergency planners. DHS added that, to continue these efforts and to encourage better utilization of the IP Gateway, it will ensure contact is made with LEPCs representing the top 25 percent of CFATS high-risk chemical facilities no later than the end of the second quarter of fiscal year 2019. While the outreach and information-sharing efforts DHS described are a step in the right direction, in order to fully implement this recommendation it is critical that the intent of any actions taken is to ensure that all first responders and emergency planners with a need-to-know are provided with timely access to CFATS facility-specific information in their jurisdictions. This information should include the name and quantity of chemicals at a facility so as to help these groups be properly prepared to respond to incidents at high-risk chemical facilities and to minimize the risk of injury or death to first responders and the surrounding community. 
Furthermore, it is important that these actions are focused on ensuring that this CFATS facility-specific information is shared with first responders and emergency planners representing the entirety of CFATS facilities determined to be high-risk, not just those that represent the top 25 percent of CFATS high-risk facilities. We are sending copies of this report to the Secretary of Homeland Security, the Under Secretary for the National Protection Programs Directorate, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, John Mortin (Assistant Director), Hugh Paquette (Analyst in Charge), Chuck Bausell, Kristen Farole, Michele Fejfar, Brandon Jones, Tom Lombardi, Mike Moran, Rebecca Parkhurst, and Claire Peachey made significant contributions to this report.
Why GAO Did This Study Facilities that produce, use, or store hazardous chemicals could be targeted or used by terrorists to inflict mass casualties, damage, and fear. DHS established the CFATS program to assess the risk posed by these facilities and inspect them to ensure compliance with DHS standards. DHS places high-risk facilities in risk-based tiers and is to conduct inspections after it approves their security plans. Under the CFATS Act of 2014, authorization for the CFATS program expires in January 2019. GAO assessed the extent to which DHS has (1) enhanced the process for identifying high-risk facilities and assigning them to tiers, (2) conducted facility inspections and measured facility security, and (3) ensured that information is shared with emergency responders to prepare them for incidents at high-risk facilities. GAO reviewed DHS reports and data on compliance inspections and interviewed DHS officials. GAO also obtained non-generalizable information from 11 trade associations representing chemical facilities regarding DHS outreach and from 15 emergency planning committees about their awareness of CFATS and the chemicals it covers. What GAO Found Since 2013, the Department of Homeland Security (DHS) has strengthened its processes for identifying high-risk chemical facilities and assigning them to tiers under its Chemical Facility Anti-Terrorism Standards (CFATS) program. Among other things, DHS implemented a quality assurance review process to verify the accuracy of facility self-reported information used to identify high-risk facilities. DHS also revised its risk assessment methodology—used to assess whether chemical facilities are high-risk and, if so, assign them to a risk-based tier—by incorporating changes to address prior GAO recommendations and most of the findings of a DHS-commissioned peer review. 
For example, the updated methodology incorporates revisions to the threat, vulnerability, and consequence scoring methods to better cover the full range of security issues regulated by CFATS. As of February 2018, a total of 29,195 facilities—including all 26,828 facilities previously assessed and 2,367 facilities new to the program—were assessed using DHS's revised methodology. DHS designated 3,500 of these facilities as high-risk and subject to further requirements. DHS has also made substantial progress conducting and completing compliance inspections and has begun to take action to measure facility security but does not evaluate vulnerability reduction resulting from the CFATS compliance inspection process. In 2013, GAO found that the backlog of chemical facility security plans awaiting review affected DHS's ability to conduct compliance inspections, which are performed after security plans are approved. Since then DHS has made progress and increased the number of completed compliance inspections. As of May 2018, DHS had conducted 3,553 compliance inspections. DHS has also begun to update its performance measure for the CFATS program to evaluate security measures implemented both when a facility submits its initial security plan and again when DHS approves its final security plan. However, GAO found that DHS's new performance measure methodology does not measure reduction in vulnerability at a facility resulting from the implementation and verification of planned security measures during the compliance inspection process. Doing so would provide DHS an opportunity to begin assessing how vulnerability is reduced—and by extension, risk lowered—not only for individual high-risk facilities but for the CFATS program as a whole. DHS shares some CFATS information, but first responders and emergency planners may not have all of the information they need to minimize the risk of injury or death when responding to incidents at high-risk facilities. 
Facilities are currently required to report some chemical inventory information, but GAO found that over 200 CFATS chemicals may not be covered by these requirements. To improve access to information, DHS developed a secure interface called the Infrastructure Protection (IP) Gateway that provides access to CFATS facility-specific information that may be missing from required reporting. However, GAO found that the IP Gateway is not widely used at the local level. In addition, officials from 13 of the 15 Local Emergency Planning Committees—consisting of first responders and covering 373 CFATS high-risk facilities—told GAO they did not have access to CFATS data in the IP Gateway. By encouraging wider use of the IP Gateway, DHS would have greater assurance that first responders have information about high-risk facilities and the specific chemicals they possess. What GAO Recommends GAO recommends that DHS take actions to (1) measure reduction in vulnerability of high-risk facilities and use that data to assess program performance; and (2) encourage access to and wider use of the IP Gateway among first responders and emergency planners. DHS concurred with both recommendations and outlined efforts underway or planned.
Background This section provides an overview of the produce rule and describes how FDA is partnering with states to implement the rule. Overview of the Produce Rule and Compliance Dates Produce is an important part of a healthy diet but is susceptible to contamination from numerous sources, including agricultural water, animal manure, equipment, and farm workers. The produce rule established standards to help ensure the safe growing and handling of produce. For example, the rule requires that businesses take steps to ensure that agricultural water that comes into contact with produce is safe and of adequate sanitary quality for its intended use. As part of this, the rule established microbial water criteria to determine the presence of generic E. coli, which is the most commonly used indicator of fecal contamination, and referenced a testing method published by the Environmental Protection Agency to test for the presence of generic E. coli. The rule also established standards specific to sprouts, which are especially vulnerable to contamination because of the warm, moist, and nutrient-rich conditions needed to grow them. In addition to the general requirements of the produce rule, the rule also includes requirements for businesses specifically related to preventing contamination of sprouts, which have been associated with foodborne illness outbreaks. The rule applies to businesses that grow, harvest, pack, or hold produce, including produce that will be imported or offered for import, with some exemptions based on the produce commodity and the size of a business. For example, the rule does not apply to produce that is rarely consumed raw, such as asparagus or black beans, and produce that is to be consumed on the farm. In addition, the rule does not apply to businesses that have an average annual monetary value of $25,000 or less of produce sold during the previous 3-year period. FDA’s implementation of the produce rule will occur over several years. 
According to the rule, compliance dates are phased in from 2017 through 2022 based on business size and other factors. Compliance dates for certain agricultural water standards and for sprouts differ from the compliance dates for other provisions in the rule. For example, compliance for large businesses under certain agricultural water standards with covered activities not involving sprouts is due in January 2020; compliance for small businesses under certain agricultural water standards with covered activities not involving sprouts is due in January 2021; and compliance for very small businesses under certain agricultural water standards with covered activities not involving sprouts is due in January 2022. In 2019, FDA intends to start inspecting produce businesses, other than those growing sprouts. At that time, FDA is to assess compliance with the produce rule, with the exception of the agricultural water standards, for all produce other than sprouts. See fig. 1 for more information on implementation timelines. FDA-State Partnership in Helping to Ensure Compliance with the Rule FSMA authorized and encouraged FDA to coordinate with states in helping to ensure compliance with the produce rule. According to FDA officials, developing a working relationship with states to implement the rule is of critical importance because states may have an understanding of farming practices as a result of their historically close relationship with farms. To facilitate coordination with states, FDA established the State Produce Implementation Cooperative Agreement Program. The program is to provide funds to support a variety of state activities, including educating and providing technical assistance to produce businesses, to the 43 participating states. Through the program, FDA obligated approximately $22 million in 2016 to 42 states and approximately $31 million in 2017 to 43 states to help these states implement the rule. 
In addition, in September 2014, FDA entered into a 5-year cooperative agreement with the National Association of State Departments of Agriculture—an organization representing state agriculture departments in all 50 states and 4 U.S. territories. Under this cooperative agreement, the association is working with FDA to support implementation of the produce rule by, among other things, providing technical assistance to states to help them implement their produce safety programs. FDA renewed the cooperative agreement in 2016 with an expanded scope to include states’ assistance with helping businesses understand what is expected of them ahead of compliance dates. FDA Has Continued to Take Steps to Evaluate and Respond to Business Concerns and Is Reviewing the Produce Rule Water Standards Since we last reported on the produce rule, FDA has continued to use its information clearinghouse, the TAN, to take steps to evaluate and respond to questions and concerns from businesses and other stakeholders regarding the produce rule. FDA has also taken other steps, including funding training for industry, conducting visits to farms, and publishing guidance, to evaluate and respond to concerns. In addition, FDA is reviewing the produce rule agricultural water standards and in September 2017 published a proposed rule to extend compliance dates associated with those standards. FDA Continues to Evaluate and Respond to Business Concerns through Its Information Clearinghouse FDA has continued to use the TAN to evaluate and respond to questions and concerns from businesses and other stakeholders regarding all of the FSMA rules, including the produce rule. Since our last report, we found that FDA received 2,665 additional questions submitted to the TAN from September 4, 2016, through June 30, 2017. Of those 2,665 additional questions, 230 questions (about 9 percent) pertained to the produce rule. 
Of those 230 questions, 154 questions (about 67 percent) came from individuals who self-identified as belonging to “business/industry.” (See fig. 2.) We reviewed the full text of questions about the produce rule that were submitted to the TAN by those who identified themselves as belonging to business/industry. We reviewed all such questions submitted since September 10, 2015, when the TAN first began operating, through March 31, 2017, the date of the most recently available information when we conducted our audit work (321 total questions). Questions spanned a variety of topics related to the rule, with the most commonly asked questions pertaining to the rule’s agricultural water standards. For example, some businesses submitted questions to clarify whether a specific water testing method they intended to use was acceptable. Other commonly asked questions related to the types of produce covered by the rule and whether a particular business was subject to the produce rule or a related FSMA rule known as the preventive controls for human food rule, which mandates new food safety requirements for food facilities, such as food processing businesses. For example, one business owner who grows almonds and also processes them submitted a question about whether the business is subject to the produce rule or the preventive controls rule. In addition, we found that most submissions (281 questions, or 88 percent) contained requests for additional information or clarification from FDA about implementing the produce rule. Examples of questions about the produce rule that were submitted by businesses are shown in figure 3. According to FDA data, as of June 2017, the agency had responded to about 84 percent (312) of the 372 questions specifically about the produce rule submitted by businesses to the TAN since it began operating. The agency’s median response time to these questions was 48 business days. 
As of June 2017, FDA had responded to 81 percent (4,307) of all 5,291 questions submitted to the TAN, with a median response time of 16 business days. Officials we interviewed said that FDA’s longer median response time for produce rule questions submitted by businesses was because the agency needed additional time to address several unique produce rule questions that were not considered during the rulemaking process. To understand produce businesses’ concerns in detail, FDA officials said they track questions submitted to the TAN. For example, these officials said they track the number of questions requesting more information about implementing the standards in the produce rule. These officials said that FDA is using these data to inform the development of resources to help businesses comply with the rule. For example, the officials told us that they are developing a set of commonly asked TAN questions about the produce rule that businesses can examine on FDA’s website prior to submitting their questions to the TAN. FDA has already published similar commonly asked TAN questions for some of the other FSMA rules. Representatives we interviewed from two industry associations said that such a list of questions would be helpful as businesses work to comply with the produce rule. FDA Has Taken Other Steps to Evaluate and Respond to Business Concerns, Including Funding Training for Industry and Conducting Visits to Farms Since we last reported on the produce rule, FDA has taken steps in addition to the TAN to evaluate and respond to business concerns regarding the produce rule. Training: FDA has funded partnerships to deliver training to help produce businesses meet the new requirements under the produce rule. The Produce Safety Alliance (PSA)—a collaboration involving Cornell University, FDA, and the U.S. 
Department of Agriculture—has developed a standardized national training curriculum about the produce rule and has conducted training sessions for more than 6,100 industry participants in the United States and foreign countries. In addition to serving an educational role, PSA training sessions help FDA evaluate and respond to business concerns. For example, FDA officials told us the agency uses questions submitted to the TAN to inform PSA course content, thereby helping to ensure that the training sessions address the most commonly asked questions. In addition, FDA officials and PSA representatives we interviewed said that PSA trainers are able to respond to questions from industry participants during the training sessions. These representatives said that they forward questions that PSA trainers are not able to answer during training sessions to FDA using the TAN and through regular meetings with FDA officials. One PSA trainer we interviewed said that face-to-face interactions with businesses at training sessions are the major way her organization hears about business questions and concerns. The Sprout Safety Alliance (SSA) is a collaboration between the Illinois Institute of Technology and FDA to enhance the sprout industry’s understanding of the produce rule. SSA has developed a training curriculum to help businesses comply with produce rule standards related to sprout production. SSA has conducted training courses for over 100 industry participants in the United States and Canada. According to an SSA representative, SSA has addressed questions and concerns from sprout industry participants during trainings. This representative also said SSA communicates with FDA about questions SSA trainers are unable to answer. Table 1 provides information about trainings provided by PSA and SSA. Educational Farm Visits: FDA officials participated in educational farm visits in 2016 and 2017 across the United States. 
According to FDA officials we interviewed, these visits were intended to broaden FDA’s knowledge of industry practices on these farms and were not for compliance or inspection purposes. FDA officials said they learned about a variety of industry concerns during these visits, including industry’s concerns with the water standards under the produce rule. FDA conducted these visits in a number of states, including Alaska, Arizona, California, Colorado, Georgia, Maine, Maryland, Nevada, New Mexico, Oregon, Texas, Vermont, Washington, Wisconsin, and the U.S. Virgin Islands, according to agency officials. Outreach to Produce Industry Associations: According to FDA officials, the agency performs outreach to various produce industry associations to educate businesses about the produce rule, answer questions, and learn about produce business concerns. For example, FDA officials said that, since we last reported on the produce rule, they have attended industry conferences and held outreach meetings with produce industry associations and they learned about specific concerns, such as businesses’ need for additional training on the produce rule and for information on how to identify materials that are suitable to properly sanitize surfaces with which produce comes into contact. On-farm Readiness Reviews: According to agency officials, these are voluntary reviews during which state inspectors and educators, accompanied by FDA officials, review businesses’ progress toward meeting the produce rule standards to promote compliance with the rule. States and FDA piloted the program in 2016 and, according to agency officials, they plan to roll out the full program in late 2017 or early 2018. In addition to helping businesses comply with the rule, FDA officials said these reviews have helped the agency learn about businesses’ questions and concerns. 
For example, officials said they learned during these reviews that some businesses needed additional information regarding water testing methods under the rule, including information on the number of water samples to be collected and the locations of testing laboratories. Produce Safety Network: Recognizing regional differences in growing practices, FDA established the Produce Safety Network in 2017 to address the unique needs of produce businesses in various parts of the country, according to agency officials. This network was established, in part, to respond to business questions and concerns, according to FDA officials. The network is made up of FDA produce safety experts and specialized investigators based in different parts of the country who help evaluate and respond to questions from businesses, state regulators, and other stakeholders in their regions, according to agency officials. For example, according to FDA officials, these produce safety experts learned about business questions regarding FDA’s list of produce the agency considers rarely consumed raw and not subject to the produce rule. In response to these concerns, the network developed a fact sheet outlining FDA’s rationale for developing the list. Guidance: According to FDA officials, the agency has been working on guidance to assist businesses in complying with the produce rule. FDA officials said guidance allows FDA to respond to questions and concerns related to the rule. For example, in January 2017, FDA published draft guidance on sprout-specific requirements under the rule. FDA officials told us they conducted outreach to sprout businesses before releasing this guidance to let businesses know why the guidance was issued and that it was available for public comment. In developing the guidance, FDA also took into account public comments made during the rulemaking process, according to FDA officials. 
An SSA representative we interviewed confirmed this, saying that the draft guidance was responsive to comments made by sprout businesses during rulemaking that asked FDA to include specific examples of how businesses were to comply with requirements. This representative said the draft guidance contained relevant examples. In addition, in early September 2017, FDA published guidance to help small businesses comply with the produce rule. The guidance provides small businesses with information about who must comply with the rule, training required, and which businesses are eligible for qualified exemptions from the rule, among other things. See appendix I for a list of published and forthcoming FDA produce rule guidance. FDA Is Reviewing the Produce Rule Water Standards in Response to Business Concerns and Is Proposing to Extend Compliance Dates FDA announced in March 2017 that it would conduct a review of the agricultural water standards under the produce rule and, in September 2017, the agency published a proposed rule in the Federal Register that would extend the compliance dates for the water standards by an additional 2 years from the original compliance dates, depending on business size, for produce other than sprouts (see fig. 4). According to FDA, its review of the water standards is an effort to simplify the standards and make them easier for businesses to comply with. FDA also said that it would use the extended compliance period to work with produce businesses as it considers the best approach to respond to their concerns about the standards. The extended compliance period will also allow FDA to provide additional outreach and training. FDA officials we interviewed said that their decision to review the water standards and extend compliance dates was in response to industry concerns. They also said that they learned about these concerns through some of the steps they have taken, which we identify in this report. 
For example, FDA officials said they heard numerous questions and concerns from businesses about the water standards during educational farm visits. Also, as we note above, questions about the water standards were the most common produce rule-related questions submitted to the TAN. According to representatives we interviewed from two industry associations, some businesses did not fully understand the water standards because, among other things, they said the standards do not provide a clear definition of “agricultural water,” leaving some businesses uncertain about what water sources and water uses are subject to the rule. In addition, according to documentation from an industry meeting with FDA, some businesses have expressed concerns about costs associated with the new water testing requirements. Some businesses have also expressed concerns that the water testing method described in the standards has not traditionally been used by industry and that finding laboratories that use this method will be difficult. The standards allow for the use of alternative testing methods, but some businesses have expressed concerns that FDA has not specified these alternative testing methods, thereby leaving businesses uncertain about what methods will be acceptable to FDA. Along with its announcement of a review of the water standards, in September 2017, FDA announced a list of eight water testing methods it determined to be equivalent to the method described in the standards. According to FDA officials, the list was established in response to business concerns, and the agency will add to this list as additional equivalent methods are identified. FDA officials we interviewed did not provide specific details or a timeline for the agency’s review of the water standards. These officials said the agency is considering adding clarifying information on the standards in forthcoming guidance and, if necessary, making changes to the standards themselves by revising the produce rule. 
In addition, officials said they plan on hosting a water summit in early 2018 with stakeholders and technical experts.

FDA Has Collected Some Survey Results to Assess the Effectiveness of the TAN and Has Continued to Develop Metrics to Assess Outcomes of Its Other Mechanisms

FDA has begun collecting survey results to assess the effectiveness of its information clearinghouse, the TAN, and has continued to develop metrics that will assess outcomes related to the agency's overall efforts to evaluate and respond to business concerns. In October 2016, FDA implemented the first part of its survey assessing the TAN. This first part of the survey, which FDA sent to businesses and other stakeholders that submitted questions to the TAN, solicited feedback about the TAN web page provided for submitting questions. This survey included questions about how stakeholders learned about the TAN web page, the clarity of the page, and how FDA could improve the page. Officials told us they have begun making changes to the TAN web page based on the survey results. For example, FDA increased the character limit for questions submitted and provided additional information about FSMA on the web page. FDA is also developing the second part of its TAN survey, which will solicit feedback from stakeholders on the timeliness and quality of answers provided by FDA through the TAN. FDA officials told us that the agency will begin sending out this survey with its responses to TAN questions in spring 2018. In addition to its assessment of the effectiveness of the TAN, FDA officials told us that the agency is continuing to develop metrics intended to assess a number of desired outcomes resulting from implementation of the rule, including outcomes related to FDA's efforts to evaluate and respond to business concerns. These outcomes are specified in a draft strategic framework the agency has developed to monitor implementation of the produce rule.
The framework includes outcomes such as businesses’ compliance with the produce rule, expanded use of incentives for compliance, and increased dissemination of good practices and other on-farm findings. According to FDA officials, outcomes in the framework that relate to FDA’s efforts to evaluate and respond to business concerns include: increased effectiveness of technical assistance provided to businesses by FDA and its partners, improved working relationships with businesses, and increased capacity of FDA partners to educate businesses. Performance metrics are to be targeted to measure these outcomes, officials said. These officials also stressed that the draft strategic framework is subject to change. Because FDA officials we interviewed said they are in the early stages of assessing the TAN and the agency’s other efforts to evaluate and respond to business concerns, we asked produce industry representatives for their perspectives on FDA’s efforts, including representatives from two produce industry associations, a farming organization, and four organizations working with FDA to implement the produce rule. Regarding the TAN, representatives we interviewed from two of these groups said that they had received timely responses from FDA to some questions they had submitted to the TAN, and most groups we interviewed said that at least some of the TAN responses they received provided useful information. However, representatives we interviewed also had two major concerns: Representatives from three groups said that responses were often slow to arrive; representatives from one of these three groups commented that response times remained largely unchanged since we last reported on the produce rule in November 2016. Representatives from another group commented that FDA’s response times to TAN questions seemed to be related to the complexity of a question. 
For example, questions that required straightforward answers often received faster responses, while questions requiring more complex answers often got slower responses and, in some cases, FDA responded that the question would be answered in forthcoming guidance. Representatives from four groups we interviewed also said that some responses lacked sufficient clarity or specificity to adequately address questions and that industry needed more specific, tailored responses from FDA. For example, some FDA responses restated information from the published produce rule without providing additional detail, and other responses contained “canned” language that did not directly address the question. FDA officials acknowledged that it has been challenging for the agency to provide timely and complete responses to TAN questions, especially early on in the TAN’s operation, but that the agency has to work through complex policy questions related to the rule in order to respond. These officials said they are working to respond more quickly to TAN questions and are revising the FDA review process for TAN responses. Officials also stated that they anticipate posting commonly asked produce rule questions and responses on the TAN web page to provide immediate assistance to businesses for some questions. This is similar to what the agency has done for other FSMA rules, officials said. Regarding FDA’s other efforts to evaluate and respond to business concerns, representatives from one group we interviewed told us that FDA continues to be open to hearing questions and concerns from the produce industry. Nevertheless, representatives from four groups told us that businesses need more information from FDA to comply with the produce rule and are awaiting FDA’s forthcoming guidance pertaining to the rule. Representatives from one of these groups also commented that guidance is needed to explain the produce rule in plain language so that businesses can more easily understand the rule. 
In addition, representatives from two of these groups said that the produce rule training available to businesses is helpful but limited in the absence of guidance. For example, some questions cannot be answered completely during trainings without additional information from guidance. FDA officials told us they are aware of businesses' concerns about the need for additional guidance. These officials said they are working to publish guidance on various topics related to the produce rule, as we have described elsewhere in this report. For example, officials said they planned to issue draft compliance and implementation guidance near the first compliance date of January 2018 for businesses producing commodities other than sprouts (see app. I).

FDA Officials Reported Facing Challenges Identifying Businesses Subject to the Produce Rule and Providing Consistent and Region-Specific Information in Their Responses

Through interviews with FDA officials, we identified two key challenges that the agency faces in evaluating and responding to business concerns about the produce rule: (1) identifying businesses subject to the produce rule; and (2) providing consistent, region-specific information to businesses in response to their questions and concerns. FDA officials told us the agency's State Produce Implementation Cooperative Agreement Program plays a key role in addressing these challenges, as does the Produce Safety Network.

Identifying businesses subject to the produce rule: While the produce rule specifies the types of commodities subject to the rule, FDA does not have an inventory of farms producing those commodities and therefore does not know which businesses are subject to the rule. As we have previously reported, FDA's existing business inventory data are drawn from information provided by businesses required to register with FDA. Farms, however, are not required to register.
According to FDA officials, the lack of a registration requirement for farms limits the data the agency has to inform its implementation of the produce rule. For example, FDA officials we interviewed said that not having data regarding farms can make it difficult for FDA to connect businesses with the educational and technical assistance resources to help them comply with the rule. FDA officials told us the agency’s State Produce Implementation Cooperative Agreement Program should help address this challenge. The program, which provides resources to each participating state to support a variety of state activities related to implementing and enforcing the produce rule, includes funding for states to develop and maintain an inventory of businesses subject to the rule. According to the program’s funding announcement, inventory data will be used to determine education and outreach needs related to the produce rule as well as to plan compliance and enforcement activities. FDA officials told us that states participating in the program have started to build their inventories of farms. According to these officials, participating states plan to have their inventories completed before they begin inspections of produce businesses. For states not participating in the cooperative agreement program, FDA officials said the agency is developing farm inventories. Providing consistent and region-specific responses to business questions and concerns: FDA officials told us that it can be a challenge to ensure that FDA and its state partners provide consistent responses to businesses’ questions that are also tailored to account for regional differences in growing conditions. For example, officials said that if a business in one part of the country receives information from one of FDA’s state partners, it can be a challenge to ensure that businesses in other parts of the country also receive the same information, whether from states or from FDA. 
At the same time, however, information provided to businesses may need to be tailored to account for regional differences in growing conditions. FDA officials told us that, to address this challenge, FDA's Produce Safety Network staff are stationed around the United States and work closely with states participating in FDA's Cooperative Agreement Program. According to these officials, this relationship provides a mechanism for states and FDA to share information about the produce rule and helps ensure that information provided by states is consistent with FDA's interpretation of the rule. In addition, these officials stated that having network staff in different growing regions allows those staff members to develop expertise in the growing conditions and practices in their regions, which in turn enhances their ability to provide outreach and technical assistance that is specifically tailored to the unique needs of those regions. For example, according to FDA officials, if a state in the Cooperative Agreement Program receives a question about the rule from a business, Produce Safety Network staff work with the state and FDA subject matter experts to craft a response that the state can provide to the business and that is tailored to the growing practices and conditions in the region. This approach helps ensure that FDA and its state partners speak with one voice about the produce rule and that the information provided is sensitive to regional differences in the produce industry, officials said.

Agency Comments

We provided a draft of this product to HHS. HHS provided us with technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

Appendix I: FDA Outreach and Guidance Related to the Produce Rule

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Anne K. Johnson (Assistant Director), Ramsey Asaly, Tim Bober, Kevin Bray, Alexandra Edwards, Ellen Fried, Cindy Gilbert, Hayden Huang, Dan Royer, Kiki Theodoropoulos, and Rajneesh Verma made key contributions to this report.

Related GAO Products

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.

Food Safety: A National Strategy Is Needed to Address Fragmentation in Federal Oversight. GAO-17-74. Washington, D.C.: January 13, 2017.

Food Safety: FDA's Efforts to Evaluate and Respond to Business Concerns Regarding the Produce Rule. GAO-17-98R. Washington, D.C.: November 28, 2016.

Food Safety: FDA Coordinating with Stakeholders on New Rules but Challenges Remain and Greater Tribal Consultation Needed. GAO-16-425. Washington, D.C.: May 19, 2016.

Department of Health and Human Services, Food and Drug Administration: Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption. GAO-16-299R. Washington, D.C.: December 16, 2015.

Federal Food Safety Oversight: Additional Actions Needed to Improve Planning and Collaboration. GAO-15-180. Washington, D.C.: December 18, 2014.
Why GAO Did This Study

Although the United States has one of the safest food supplies in the world, foodborne illness is a common public health problem; some of this illness can be linked to produce. For example, in 2017, a Salmonella outbreak linked to imported papayas sickened more than 200 people in 23 states and killed one person. FDA's produce rule, one of a number of rules required by the FDA Food Safety Modernization Act, established the first enforceable national food safety standards for produce. The Agricultural Act of 2014 required that the produce rule include "a plan to systematically…develop an ongoing process to evaluate and respond to business concerns" about the rule and a provision for GAO to report on FDA's efforts 1 year after the promulgation of the final rule and again the following year. In November 2016, GAO issued the first report. In this follow-up report, GAO examined (1) steps FDA has taken since GAO's 2016 review to evaluate and respond to business concerns regarding the produce rule, (2) steps FDA has taken to assess the effectiveness of its efforts to evaluate and respond to business concerns regarding the rule, and (3) challenges FDA officials reported facing in evaluating and responding to business concerns regarding the rule. GAO examined TAN questions submitted by businesses; interviewed FDA officials and representatives from groups, such as the Produce Safety Alliance, working with FDA to implement the rule; and interviewed representatives from produce industry associations and a farming organization. GAO is not making any recommendations.

What GAO Found

Since GAO's November 2016 report on the Food and Drug Administration's (FDA) 2015 produce rule, the agency has continued to use its Technical Assistance Network (TAN) to evaluate and respond to questions and concerns about the rule.
GAO found that since the issuance of its 2016 report, which contained data as of September 3, 2016, 2,665 more questions were submitted to the TAN, 230 of which pertained to the produce rule, and of those 230 questions, 154 were submitted by businesses (see fig.).

Figure notes:
a. The TAN also receives questions about other rules pertaining to the FDA Food Safety Modernization Act, such as rules on imported food and the sanitary transportation of food.
b. Others include members of academia, consumers, and federal or state regulators.

Most produce rule-related TAN questions concerned agricultural water standards, such as methods for testing water. In addition to the TAN, FDA has taken other steps to evaluate and respond to business concerns, including funding training for industry and visiting farms. FDA is also reviewing the rule's water standards and published a proposed rule in September 2017 to extend the compliance dates associated with those standards in response to concerns. FDA has begun collecting survey results on the web page used for submitting TAN questions and continues to develop a survey to assess the timeliness and quality of TAN responses. FDA also continued to develop metrics intended to assess its overall efforts to evaluate and respond to business concerns, officials reported. Produce industry representatives told GAO that FDA is open to hearing questions and concerns, but businesses need more information to comply with the rule and are awaiting FDA's forthcoming guidance on parts of the rule. FDA officials reported facing two challenges in evaluating and responding to business concerns: identifying businesses subject to the rule and providing consistent, region-specific information in response to concerns. Officials said that the agency's cooperative agreement with 43 states plays a key role in addressing these challenges, as does the Produce Safety Network, a network of region-based FDA food safety experts.
gao_GAO-18-365
Background

The Freedom of Information Act establishes a legal right of access to government information on the basis of the principles of openness and accountability in government. Before FOIA's enactment in 1966, an individual seeking access to federal records faced the burden of establishing a "need to know" before being granted the right to examine a federal record. FOIA established a "right to know" standard, under which an organization or person could receive access to information held by a federal agency without demonstrating a need or reason. The "right to know" standard shifted the burden of proof from the individual to a government agency and required the agency to provide proper justification when denying a request for access to a record. Any person, defined broadly to include attorneys filing on behalf of an individual, corporations, or organizations, can file a FOIA request. For example, an attorney can request labor-related workers' compensation files on behalf of his or her client, and a commercial requester, such as a data broker who files a request on behalf of another person, may request a copy of a government contract. In response, an agency is required to provide the relevant record(s) in any readily producible form or format specified by the requester, unless the record falls within a permitted exemption that provides limitations on the disclosure of information. Appendix II includes a table describing the nine specific exemptions that can be applied to withhold information that, for example, is classified, confidential commercial, privileged, privacy, or falls into one or several law enforcement categories.
FOIA Amendments and Guidance Call for Improvements in How Agencies Process Requests

Various amendments have been enacted and guidance issued to help improve agencies' processing of FOIA requests, including:

The Electronic Freedom of Information Act Amendments of 1996 (e-FOIA amendments) strengthened the requirement that federal agencies respond to a request in a timely manner and reduce their backlogged requests. The amendments, among other things, made a number of procedural changes, including allowing a requester to limit the scope of a request so that it could be processed more quickly and requiring agencies to determine within 20 working days whether a request would be fulfilled. This was an increase from the previously established time frame of 10 business days. The amendments also authorized agencies to multi-track requests— that is, to process simple and complex requests concurrently on separate tracks to facilitate responding to a relatively simple request more quickly. In addition, the amendment encouraged online, public access to government information by requiring agencies to make specific types of records available in electronic form.

Executive Order 13392, issued by the President in 2005, directed each agency to designate a senior official as its chief FOIA officer. This official was to be responsible for ensuring agency-wide compliance with the act by monitoring implementation throughout the agency and recommending changes in policies, practices, staffing, and funding, as needed. The chief FOIA officer was directed to review and report on the agency's performance in implementing FOIA to agency heads and to Justice in such times and formats established by the Attorney General. (These are referred to as chief FOIA officer reports.)

The OPEN Government Act, which was enacted in 2007, made the 2005 executive order's requirement for agencies to have a chief FOIA officer a statutory requirement.
It also required agencies to include additional statistics in their annual FOIA reports, such as more details on processing times and the agency's 10 oldest pending requests, appeals, and consultations.

The FOIA Improvement Act of 2016 addressed procedural issues, including requiring that agencies: (1) make records available in an electronic format if they have been requested three or more times; (2) notify requesters that they have a maximum of 90 days to file an administrative appeal, and (3) provide dispute resolution services at various times throughout the FOIA process. This act also created more duties for chief FOIA officers, including requiring them to offer training to agency staff regarding FOIA responsibilities. The act also revised and added new obligations for OGIS, and created the Chief FOIA Officers Council to assist in compliance and efficiency. Further, the act required OMB, in consultation with Justice, to create a consolidated online FOIA request portal that allows the public to submit a request to any agency through a single website.

FOIA Authorizes Agencies to Use Other Federal Statutes to Withhold Information Prohibited from Disclosure

In responding to requests, FOIA authorizes agencies to utilize one of nine exemptions to withhold portions of records, or the entire record. Agencies may use an exemption when it has been determined that disclosure of the requested information would harm an interest related to certain protected areas. These nine exemptions (described in appendix II) can be applied by agencies to withhold various types of information, such as information concerning foreign relations, trade secrets, and matters of personal privacy.
One such exemption, the statutory (b)(3) exemption, specifically authorizes withholding information under FOIA on the basis of a law which:

requires that matters be withheld from the public in such a manner as to leave no discretion on the issue; or

establishes particular criteria for withholding or refers to particular types of matters to be withheld; and

if enacted after October 28, 2009, specifically refers to section 552(b)(3) of title 5, United States Code.

To account for agencies' use of the statutory (b)(3) exemptions, FOIA requires each agency to submit, in its annual report to Justice, a complete listing of all statutes that the agency relied on to withhold information under exemption (b)(3). The act also requires that the agency describe for each statute identified in its report (1) the number of occasions on which each statute was relied upon; (2) a description of whether a court has upheld the decision of the agency to withhold information under each such statute; and (3) a concise description of any information withheld. Further, to provide an overall summary of the statutory (b)(3) exemptions used by agencies in a fiscal year, Justice produces consolidated annual reports that list the statutes used by agencies in conjunction with (b)(3).

FOIA Request Process

As previously noted, agencies are generally required by the e-FOIA amendments of 1996 to respond to a FOIA request within 20 working days. Once received, the request is to be processed through multiple phases, which include assigning a tracking number, searching for responsive records, and releasing the records in response to the requester. Also, FOIA allows a requester to challenge an agency's final decision on a request through an administrative appeal or a lawsuit. Agencies generally have 20 working days to respond to an administrative appeal. Figure 1 provides a simplified overview of the FOIA request and appeals process.
In a typical agency, as indicated, during the intake phase, a request is logged into the agency's FOIA tracking system, and a tracking number is assigned. The request is then reviewed by FOIA staff to determine its scope and level of complexity. The agency then sends a letter or email to the requester acknowledging receipt of the request, with a unique tracking number that the requester can use to check the status of the request. Next, FOIA staff (noncustodian) begin the search to retrieve the responsive records. They conduct a search if the agency's records are centralized or route the request to the appropriate program office(s), or do both, as warranted. This step may include requesting that the custodian (owner) of the record search and review paper and electronic records from multiple locations and program offices. Agency staff then process the responsive records, which includes determining whether a portion or all of any record should be withheld based on FOIA's exemptions. If a portion or all of any record is the responsibility of another agency, FOIA staff may consult with the other agency or may send ("refer") the document(s) to that other agency for processing. After processing and redaction, a request is reviewed for errors and to ensure quality. The documents are then released to the requester, either electronically or by regular mail.

FOIA Oversight and Implementation

Responsibility for the oversight of FOIA implementation is spread across several federal offices and other entities. These include Justice's OIP, NARA's OGIS, and the Chief FOIA Officers Council. These oversight offices and the council have taken steps to assist agencies to address the FOIA provisions. Justice's OIP is responsible for encouraging agencies' compliance with FOIA and overseeing their implementation of the act.
In this regard, the office, among other things, provides guidance, compiles information on FOIA compliance, provides FOIA training, and prepares annual summary reports on agencies’ FOIA processing and litigation activities. The office also offers FOIA counseling services to government staff and the public. Issuing guidance. OIP has developed guidance, available on its website, to assist federal agencies by instructing them in how to ensure timely determinations on requests, expedite the processing of requests, and reduce backlogs. The guidance also informs agencies on what should be contained in their annual FOIA reports to Justice’s Attorney General. The office also has documented ways for federal agencies to address backlog requests. In March 2009 the Attorney General issued guidance and related policies to encourage agencies to reduce their backlogs of FOIA requests. In addition, in December 2009, OMB issued a memorandum on the OPEN Government Act, which called for a reduction in backlogs and the publishing of plans to reduce backlogs. Further, in August 2014 and December 2015, OIP held best practices workshops and issued guidance to agencies on reducing FOIA backlogs and improving timeliness of agencies’ responses to FOIA requests. The OIP guidance instructed agencies to obtain leadership support, routinely review FOIA processing metrics, and set up staff training on FOIA. Overseeing agencies’ compliance. OIP collects information on compliance with the act by reviewing agencies’ annual FOIA reports and chief FOIA officer reports. These reports describe the number of FOIA requests received and processed in a fiscal year, as well as the total costs associated with processing and litigating requests. Providing training. OIP provides a full suite of FOIA training for agency FOIA professionals. This training gives instruction on all aspects of FOIA and is designed for all levels of professionals. 
For example, the office offers an annual training class that provides a basic overview of the act, as well as hands-on courses about the procedural requirements involved in processing a request from start to finish. In addition, it offers a seminar outlining successful litigation strategies for attorneys who handle FOIA cases. OIP also provides agencies customized training upon request. Preparing annual reports. Every year, OIP prepares three major reports for the public, the President, and/or Congress. The first report, Summary of Annual FOIA Reports, is a summary of the information contained in the annual FOIA reports that are prepared by each of the federal agencies subject to the FOIA. The report also provides a statistical breakdown of the government's overall FOIA administration. The second report, Summary of Agency Chief FOIA Officer Reports, is a summary of the annual chief FOIA officer reports and an assessment of agencies' progress in administering FOIA. This report summarizes government-wide efforts to improve FOIA in five key areas of FOIA administration, and it individually scores each agency on several milestones tied to these efforts. The third report, the Justice FOIA Litigation and Compliance Report, which is directed to Congress and the President, describes Justice's efforts to oversee and encourage government-wide compliance with FOIA, and includes a list of, and information about, FOIA matters in litigation. NARA's OGIS was established by the OPEN Government Act of 2007 as the federal FOIA ombudsman tasked with resolving federal FOIA disputes through mediation as a nonexclusive alternative to litigation. OGIS's responsibilities include reviewing agencies' policies, procedures, and compliance with the statute; identifying methods to improve compliance; and educating its stakeholders about the FOIA process.
The 2016 FOIA amendments required agencies to update response letters to FOIA requesters to include information concerning the roles of OGIS and agencies' FOIA public liaisons. As such, OGIS and Justice worked together to develop a response letter template that includes the required language for agency letters. In addition, OGIS, charged with reviewing agencies' compliance with FOIA, launched a FOIA compliance program in 2014. OGIS also developed a FOIA compliance self-assessment program, which is intended to help OGIS look for potential compliance issues across federal agencies. The Chief FOIA Officers Council is co-chaired by the Director of OIP and the Director of OGIS. Council members include senior representatives from OMB, OIP, and OGIS, together with the chief FOIA officers of each agency, among others. The council's FOIA-related responsibilities include: developing recommendations for increasing FOIA compliance and disseminating information about agency experiences, ideas, best practices, and innovative approaches; identifying, developing, and coordinating initiatives to increase transparency and compliance; and promoting the development and use of common performance measures for agency compliance.

Selected Agencies Collect and Maintain Records That Can Be Subject to FOIA Requests

The 18 agencies selected for our review are charged with a variety of operations that affect many aspects of federal service to the public. Thus, by the nature of their missions and operations, the agencies have responsibility for vast and varied amounts of information that can be subject to a FOIA request. For example, the Department of Homeland Security's (DHS) mission is to protect the American people and the United States homeland. As such, the department maintains information covering, among other things, immigration, border crossings, and law enforcement.
As another example, the Department of the Interior's (DOI) mission includes protecting and managing the nation's natural resources and, thus, providing scientific information about those resources. Table 2 provides details on each of the 18 selected agencies' missions and the types of information they maintain. The 18 selected agencies reported that they received and processed more than 2 million FOIA requests from fiscal years 2012 through 2016. Over this 5-year period, the number of reported requests received fluctuated among the agencies. In this regard, some agencies saw a continual rise in the number of requests, while other agencies experienced an increase or decrease from year to year. For example, from fiscal years 2012 through 2014, DHS saw an increase in the number of requests received (from 190,589 to 291,242), but in fiscal year 2015, saw the number of requests received decrease to 281,138. Subsequently, in fiscal year 2016, the department experienced an increase to 325,780 requests received. In addition, from fiscal years 2012 through 2015, the reported numbers of requests processed by the selected agencies showed a relatively steady increase. However, in fiscal year 2016, the reported number of requests processed by these agencies declined. Further, figure 2 provides a comparison of the total number of requests received and processed in this 5-year period.

Selected Agencies Implemented the Majority of FOIA Requirements Reviewed

Among other things, the FOIA Improvement Act of 2016 and the OPEN Government Act of 2007 call for agencies to (1) update response letters, (2) implement tracking systems, (3) provide FOIA training, (4) provide records online, (5) designate chief FOIA officers, and (6) update and publish timely and comprehensive regulations. The 18 agencies that we included in our review had implemented the majority of the 6 selected FOIA requirements.
Specifically:

18 agencies updated response letters;
16 agencies implemented tracking systems that were compliant with requirements for people with disabilities;
18 agencies provided FOIA training for agency staff;
15 agencies provided records online;
13 agencies designated chief FOIA officers; and
5 agencies published their updated FOIA regulations by the required due date, and 8 agencies did so after the due date.

Figure 3 summarizes the extent to which the 18 agencies implemented the selected FOIA requirements. Beyond these selected agencies, Justice's OIP and OMB also had taken steps to develop a government-wide FOIA request portal that is intended to allow the public to submit a request to any agency from a single website.

Selected Agencies Had Updated Their FOIA Response Letters

The 2016 amendments to FOIA required agencies to include specific information in their responses when making their determinations on requests. If part of a request is denied, for example, agencies must inform requesters that they may: seek assistance from the FOIA public liaison of the agency or OGIS; file an appeal to an adverse determination within a period of time that is not less than 90 days after the date of such adverse determination; and seek dispute resolution services from the FOIA public liaison of the agency or OGIS. Among the 18 selected agencies, all had updated their FOIA response letters to include this required information.

All Selected Agencies Had Implemented FOIA Tracking Systems and Most Were Compliant with Requirements for People with Disabilities

Various FOIA amendments and guidance call for agencies to use automated systems to improve the processing and management of requests.
In particular, the OPEN Government Act of 2007 amended FOIA to require that federal agencies establish a system to provide individualized tracking numbers for requests that will take longer than 10 days to process and establish telephone or Internet service to allow requesters to track the status of their requests. Further, the President's January 2009 Freedom of Information Act memorandum instructed agencies to use modern technology to inform citizens about what is known and done by their government. In addition, FOIA processing systems, like all automated information technology systems, are to comply with the requirements of Section 508 of the Rehabilitation Act of 1973, as amended. This act requires federal agencies to make their electronic information accessible to people with disabilities. Each of the 18 selected agencies had implemented a system that provides capabilities for tracking requests received and processed, including an individualized number for tracking the status of a request. Specifically:

Ten agencies used commercial automated systems (DHS, EEOC, FDIC, FTC, Justice, NARA, NASA, NTSB, Pension Benefit Guaranty Corporation, and USAID).
Three agencies developed their own agency systems (State, DOI, and TVA).
Five agencies used Microsoft Excel or Word to track requests (Administrative Conference of the United States, American Battle Monuments Commission, Broadcasting Board of Governors, OMB, and U.S. African Development Foundation).

Further, all of the agencies had established telephone or Internet services to assist requesters in tracking the status of requests, and they used modern technology (e.g., mobile applications) to inform citizens about FOIA. For example, the commercial systems allow requesters to submit a request and track the status of that request online. In addition, DHS developed a mobile application that allows FOIA requesters to submit requests and check the status of existing requests.
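The tracking capability the 2007 amendments require, an individualized tracking number for any request expected to take longer than 10 days plus a way to look up status, can be sketched as follows. This is a minimal, hypothetical illustration, not any agency's actual system; the class and method names are invented for this example.

```python
import itertools
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the statutory tracking capability: requests
# expected to take longer than 10 days receive an individualized
# tracking number, which requesters can then use to check status.

@dataclass
class FoiaRequest:
    subject: str
    est_days: int                 # estimated processing time in days
    status: str = "received"
    tracking_number: Optional[str] = None

class FoiaTracker:
    def __init__(self) -> None:
        self._requests: dict = {}
        self._counter = itertools.count(1)

    def receive(self, request: FoiaRequest) -> Optional[str]:
        # Only requests expected to take more than 10 days require an
        # individualized tracking number under the 2007 amendments.
        if request.est_days > 10:
            request.tracking_number = f"FOIA-{next(self._counter):06d}"
            self._requests[request.tracking_number] = request
        return request.tracking_number

    def status(self, tracking_number: str) -> str:
        # The telephone or Internet status service described in the act
        # reduces to a lookup keyed on the tracking number.
        return self._requests[tracking_number].status
```

A requester holding a number such as `FOIA-000001` could then query its status online or by telephone, matching the act's status-tracking requirement.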
However, while 16 agencies' FOIA tracking systems were compliant with requirements of Section 508 of the Rehabilitation Act (as amended), two agencies—TVA and DOI—had systems that were not compliant. According to TVA officials, the agency does not have a 508 compliance certification. DOI officials stated that its FOIA system will undergo 508 compliance testing but did not provide a date for completion of the testing. Having systems that are compliant with Section 508 of the Rehabilitation Act (as amended) is essential to ensure that these agencies' electronic information is accessible to all individuals, including those with disabilities.

Agencies' Chief FOIA Officers Have Offered FOIA Training

The 2016 FOIA amendments require agencies' chief FOIA officers to offer training to agency staff regarding their responsibilities under FOIA. In addition, Justice's OIP has advised every agency to make such training available to all of their FOIA staff at least once each year. The office has also encouraged agencies to take advantage of FOIA training opportunities available throughout the government. The 18 selected agencies' chief FOIA officers offered FOIA training opportunities to staff in fiscal years 2016 and 2017. For example:

Twelve agencies provided training that gave an introduction and overview of FOIA (the American Battle Monuments Commission, Broadcasting Board of Governors, EEOC, Justice, FDIC, FTC, NARA, Pension Benefit Guaranty Corporation, State, TVA, U.S. African Development Foundation, and USAID).
Four agencies offered training for their agencies' online FOIA tracking and processing systems (DOI, EEOC, NTSB, and Pension Benefit Guaranty Corporation).
Five agencies provided training on responding to, handling, and processing FOIA requests (DHS, DOI, EEOC, Justice, and State).
Seven agencies offered training on understanding and applying the exemptions under FOIA (the Broadcasting Board of Governors, EEOC, FDIC, FTC, Justice, State, and U.S. African Development Foundation).
Four agencies offered training on the processing of costs and fees (EEOC, Justice, NASA, and TVA).

The Majority of Selected Agencies Posted Required Records Online

Memorandums from both the President and the Attorney General in 2009 highlighted the importance of online disclosure of information and further directed agencies to make information available without a specific FOIA request. Further, FOIA required online access to government information and required agencies to make information available to the public in electronic form in four categories: agency final opinions and orders; statements of policy and interpretations not published in the Federal Register; administrative staff manuals and staff instructions that affect a member of the public; and frequently requested records. While all 18 agencies that we reviewed posted records online, only 15 of them had posted all categories of information, as required by FOIA. Specifically, 7 agencies—the American Battle Monuments Commission, the Pension Benefit Guaranty Corporation, EEOC, FDIC, FTC, Justice, and State—had, as required, made records in all four categories publicly available online. In addition, 5 agencies that were only required to publish online records in 3 categories—the Administrative Conference of the United States, Broadcasting Board of Governors, DHS, OMB, and USAID—had done so. Further, 3 agencies that were only required to publish online records in two of the categories—U.S. African Development Foundation, NARA, and TVA—had done so. The remaining 3 agencies—DOI, NASA, and NTSB—had posted records online for three of four required categories. Regarding why the three agencies did not post all of their four required categories of online records, DOI officials stated that the agency does not make publicly available all FOIA records that have been requested three or more times, as it does not have the time to post all such records that have been requested. NASA officials explained that, while the agency issues final opinions, it does not post them online.
NTSB officials said they try to post information that is frequently requested, but they do not post the information on a consistent basis. Making the four required categories of information available in electronic form is an important step in allowing the public to easily access government documents. Until these agencies make all required categories of information available in electronic form, they cannot ensure that they are providing the required openness in government.

Most Agencies Designated a Senior Official as a Chief FOIA Officer

In 2005, the President issued an executive order that established the role of a chief FOIA officer. In 2007, amendments to FOIA required each agency to designate a chief FOIA officer who shall be a senior official at the assistant secretary or equivalent level. Of the 18 selected agencies, 13 agencies have chief FOIA officers who are senior officials at the assistant secretary or equivalent level. The assistant secretary level is comparable to senior executive level positions at levels III, IV, and V. Specifically:

State designated its Assistant Secretary, Bureau of Administration;
DOI and NTSB designated their Chief Information Officers;
the Administrative Conference of the United States, Broadcasting Board of Governors, FDIC, NARA, and U.S. African Development Foundation designated their general counsels;
Justice, NASA, TVA, and USAID designated their Associate Attorney General, Associate Administrator for Communications, Vice President for Communications, and Assistant Administrator for the Bureau of Management, respectively; and
DHS designated its Chief Privacy Officer.

However, 5 agencies—American Battle Monuments Commission, EEOC, Pension Benefit Guaranty Corporation, FTC, and OMB—do not have chief FOIA officers who are senior officials at the assistant secretary or equivalent level.
According to officials from 4 of these agencies, the agencies all have chief FOIA officers, and the officials believed they had designated the appropriate officials. Officials at FTC acknowledged that the chief FOIA officer position is not designated at a level equivalent to an assistant secretary, but said it is a senior position within the agency. However, while there are chief FOIA officers at these agencies, until those officers are designated at the assistant secretary or equivalent level, the agencies will lack assurance that their chief FOIA officers have the necessary authority to make decisions about agency practices, personnel, and funding.

Most Selected Agencies Updated Regulations as Required to Inform the Public of Their FOIA Operations

FOIA requires federal agencies to publish regulations in the Federal Register that inform the public of their FOIA operations. Specifically, in 2016, FOIA was amended to require agencies to update their regulations regarding their FOIA operations. To assist agencies in meeting this requirement, OIP created a FOIA regulation template. Among other things, OIP's guidance encouraged agencies to:

describe their dispute resolution process;
describe their administrative appeals process for response letters of denied requests;
notify requesters that they have a minimum of 90 days to file an appeal;
include a description of what happens when there are unusual circumstances, as well as restrictions on agencies' abilities to charge certain fees when FOIA's time limits are not met; and
update the regulations in a timely manner (i.e., within 180 days after the enactment of the 2016 FOIA amendments).

Five agencies in our review—DHS, DOI, FDIC, FTC, and USAID—addressed all five requirements in updating their regulations. In addition, seven agencies addressed four of the five requirements: the Administrative Conference of the United States, EEOC, Justice, NARA, NTSB, Pension Benefit Guaranty Corporation, and TVA did not update their regulations in a timely manner.
Further, four agencies addressed three or fewer requirements (U.S. African Development Foundation, State, NASA, and Broadcasting Board of Governors), and two agencies (American Battle Monuments Commission and OMB) did not address any of the requirements. Figure 4 indicates the extent to which the 18 agencies had addressed the five selected requirements. Agencies that did not address all five requirements provided several explanations as to why their regulations were not updated as required:

American Battle Monuments Commission officials stated that while they updated their draft regulation in August 2017, it is currently unpublished due to internal reviews with the commission's General Counsel in preparation for submission to the Federal Register. No new posting date has been established. The American Battle Monuments Commission last updated its regulation on February 26, 2003.

State officials noted that their regulation was updated 2 months prior to the new regulation requirements but did not provide a specific reason for not reissuing their regulation. They explained that they have a working group reviewing their regulation for updates, with no timeline identified. State last updated its regulation on April 6, 2016.

NASA officials did not provide a reason for not updating their regulation as required. Officials did, however, state that their draft regulation is with NASA's Office of General Counsel for review. NASA last updated its regulations on August 11, 2017.

Broadcasting Board of Governors officials did not provide a reason for not updating their regulation as required. Officials did, however, note that the agency is in the process of updating its regulation and anticipates it will complete this update by the end of 2018. The Broadcasting Board of Governors last updated its regulation on February 2, 2002.

OMB officials did not provide a reason for not updating the agency's regulation as required.
Officials did, however, state that due to a change in leadership they do not have a time frame for updating their regulation. OMB last updated its regulation on May 27, 1998.

The chief FOIA officer at the U.S. African Development Foundation stated that, while the agency had updated and submitted its regulation to be published in December 2016, the regulation was unpublished due to an error that occurred with the acknowledgement needed to publish the regulation in the Federal Register. The regulation was subsequently published on February 3, 2017. The official further noted that when the agency responds to FOIA requests, it has not charged a fee for unusual circumstances, and, therefore, agency officials did not believe they had to disclose information regarding fees in their regulation.

Until these six agencies publish updated regulations that address the necessary requirements, as called for in FOIA and OIP guidance, they likely will be unable to provide the public with required regulatory and procedural information to ensure transparency and accountability in the government.

Justice and OMB Have Taken Steps to Develop an Online FOIA Request Portal

The 2016 FOIA amendments required OMB to work with Justice to build a consolidated online FOIA request portal. This portal is intended to allow the public to submit a request to any agency from a single website and include other tools to improve the public's access to the benefits of FOIA. Further, the act required OMB to establish standards for interoperability between the consolidated portal and agency FOIA systems. The 2016 FOIA amendments did not provide a time frame to develop the portal and standards. With OMB's support, Justice has developed an online portal. In this regard, Justice's OIP officials stated that the National FOIA Portal provides the functionality required by FOIA, including the ability to make a request to any agency and the technical framework for interoperability.
According to OIP officials, in partnership with OMB, OIP was able to identify a dedicated funding source to operate and maintain the portal to ensure its success in the long term, with major agencies sharing in the costs to operate, maintain, and fund any future enhancements designed to improve FOIA processes. The first iteration of the National FOIA Portal launched on Justice's FOIA.gov website on March 8, 2018.

Agencies Have Methods to Reduce Backlogged Requests, but Their Efforts Have Shown Mixed Results

The 18 selected agencies in our review had FOIA request backlogs of varying sizes, ranging from no backlogged requests at some agencies to 45,000 or more requests at others. Generally, the agencies with the largest backlogs had received the most requests. In an effort to aid agencies in reducing their backlogs, Justice's OIP identified key practices that agencies can use. However, while the agencies reported using these practices and other methods, few of them managed to reduce their backlogs during the period from fiscal year 2012 through 2016. In particular, of the four agencies with the largest backlogs, only one—NARA—reduced its backlog. Agencies attributed their inability to decrease backlogs to the increased number and complexity of requests, among other factors. However, agencies also lack comprehensive plans to implement practices on an ongoing basis.

Agencies Have FOIA Request Backlogs of Varying Sizes, and Most Increased from Fiscal Year 2012 through 2016

The selected agencies in our review varied considerably in the size of their FOIA request backlogs. Specifically, from fiscal year 2012 through 2016, of the 18 selected agencies:

10 agencies reported a backlog of 60 or fewer requests, and of these 10 agencies, 6 reported having no backlog in at least 1 year;
4 agencies had backlog numbers between 61 and 1,000 per year; and
4 agencies had backlogs of over 1,000 requests per year.
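The three size categories used above can be expressed as a simple bucketing rule. The sketch below is illustrative only (it is not part of the report's methodology; the function name is invented), but it makes the thresholds explicit:

```python
# Illustrative sketch: the backlog-size categories used in the text,
# expressed as a bucketing rule over an agency's yearly backlog count.
def backlog_bucket(backlog: int) -> str:
    """Classify a yearly backlog by the report's three thresholds."""
    if backlog <= 60:
        return "60 or fewer"
    elif backlog <= 1000:
        return "61 to 1,000"
    else:
        return "over 1,000"
```

For instance, DOI's fiscal year 2016 backlog of 677 falls in the middle bucket, while DHS's fiscal year 2016 backlog of 46,788 falls in the largest.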
The four agencies with backlogs of more than 1,000 requests for each year we examined were Justice, NARA, State, and DHS. Table 3 shows the number of requests and the number of backlogged requests for the 18 selected agencies during the 5-year period. Over the 5-year period, 14 of the 18 selected agencies experienced an increase in their backlogs in at least 1 year. By contrast, 2 agencies (Administrative Conference of the United States and the U.S. African Development Foundation) reported no backlogs, and 3 agencies (American Battle Monuments Commission, NASA, and NARA) reported reducing their backlogs. Further, of the 4 agencies with the largest backlogs (DHS, State, Justice, and NARA), only NARA reported a backlog lower in fiscal year 2016 than in fiscal year 2012. Figure 5 shows the trends for the 4 agencies with the largest backlogs, compared with the rest of the 18 agencies. In most cases, agencies with small or no backlogs (60 or fewer) also received relatively few requests. For example, the Administrative Conference of the United States and the U.S. African Development Foundation reported no backlogged requests during any year but also received fewer than 30 FOIA requests a year. The American Battle Monuments Commission also received fewer than 30 requests a year and reported only 1 backlogged request per year in 2 of the 5 years examined. However, the Pension Benefit Guaranty Corporation (PBGC) and FDIC received thousands of requests over the 5-year period, but maintained zero backlogs in a majority of the years examined. PBGC received a total of 19,120 requests during the 5-year period and reported a backlog of only 8 requests during 1 year, fiscal year 2013. FDIC received a total of 3,405 requests during the 5-year period and reported a backlog of 13 requests in fiscal year 2015 and 4 in fiscal year 2016. The four agencies with backlogs of 1,000 or more (Justice, NARA, State, and DHS) received significantly more requests each year.
For example, NARA received between about 12,000 and 50,000 requests each year, while DHS received from about 190,000 to 325,000 requests. In addition, the number of requests NARA received in fiscal year 2016 was more than double the number received in fiscal year 2012. DHS received the most requests of any agency—a total of 1,320,283 FOIA requests over the 5-year period.

Agencies Identified a Variety of Methods to Reduce Backlogs, but Few Saw Reductions

The Attorney General's March 2009 memorandum called on agency chief FOIA officers to review all aspects of their agencies' FOIA administration and report to Justice on steps that have been taken to improve FOIA operations and disclosure. Subsequent Justice guidance required agencies to include in their chief FOIA officer reports information on their FOIA request backlogs, including whether the agency experienced a backlog of requests; whether that backlog decreased from the previous year; and, if not, reasons the backlog did not decrease. In addition, agencies that had more than 1,000 backlogged requests in a given year were required to describe their plans to reduce their backlogs. Beginning in calendar year 2015, these agencies were to describe how they implemented their plans from the previous year and whether that resulted in a backlog reduction. In addition, Justice's OIP identified best practices for reducing FOIA backlogs. The office held a best practices workshop on reducing backlogs and improving timeliness. The office then issued guidance in August 2014 that highlighted key practices to improve the quality of a FOIA program. OIP identified the following methods in its best practices guidance:

Utilize resources effectively. Agencies should allocate their resources effectively by using multi-track processing, making use of available technology, and shifting priorities and staff assignments to address needs and effectively manage workloads.

Routinely review metrics.
Agencies should regularly review their FOIA data and processes to identify challenges or barriers. Additionally, agencies should identify trends to effectively allocate resources, set goals for staff, and ensure needs are addressed.

Emphasize staff training. Agencies should ensure FOIA staff are properly trained so they can process requests more effectively and with more autonomy. Training and engagement of staff can also solidify the importance of the FOIA office's mission.

Obtain leadership support. Agencies should ensure that senior management is involved in and supports the FOIA function in order to increase awareness and accountability, as well as make it easier to obtain necessary resources or personnel.

Agencies identified a variety of methods that they used to address their backlogs. These included both the practices identified by Justice and additional methods. Ten agencies maintained relatively small backlogs of 60 or fewer requests and were thus not required to develop plans for reducing backlogs. However, 2 of these 10 agencies, which both received significant numbers of requests, described various methods used to maintain a small backlog:

PBGC officials credit their success to training, not only for FOIA staff but for all incoming personnel, and to rewarding staff for going above and beyond in facilitating FOIA processing. The Pension Benefit Guaranty Corporation has incorporated all the best practices identified by OIP, including senior leadership involvement that supports FOIA initiatives and program goals, routine review of metrics to optimize workflows, effective utilization of resources, and staff training.

According to FDIC officials, their overall low backlog numbers are attributed to a trained and experienced FOIA staff, senior management involvement, and coordination among FDIC divisions. However, FDIC officials stated that the increase in its backlog in fiscal year 2015 was due to the increased complexity of requests.
The 4 agencies with backlogs greater than 60 but fewer than 1,000 (EEOC, DOI, NTSB, and USAID) reported using various methods to reduce their backlogs. However, all 4 showed an increase over the 5-year period.

EEOC officials stated that they had adopted practices recommended by OIP, such as multi-track processing, reviewing workloads to ensure sufficient staff, and using temporary assignments to address needs. However, EEOC has seen a large increase in its backlog numbers, going from 131 in fiscal year 2012 to 792 in fiscal year 2016. EEOC attributed the rise in backlogs to an increase in requests received, loss of staff, and the complex and voluminous nature of requests.

DOI, according to agency officials, has also tried to incorporate reduction methods and best practices, including proactively releasing information that may be of interest to the public, thus avoiding the need for a FOIA request; enhanced training for its new online FOIA tracking and processing system; improved interoffice collaboration; production of monthly reports on backlogs and of weekly charts on incoming requests, to heighten awareness among leadership; and monitoring trends. Yet DOI has seen an increase in its backlog, from 449 in fiscal year 2012 to 677 in fiscal year 2016, an increase of 51 percent. DOI attributed the increase to the loss of FOIA personnel, an increase in the complexity of requests, an increase in FOIA-related litigation, an increase in incoming requests, and the fact that staff have additional duties.

Officials at NTSB stated that the board utilized contractors and temporary staff assignments to augment staffing and address backlogs. Despite these efforts, NTSB saw a large increase in its backlog, from 62 in fiscal year 2012 to 602 in fiscal year 2016.
Officials stated that the reason for the increase was the increased complexity of requests, including requests for "any and all" documentation related to a specific subject, often involving hundreds to thousands of pages per request.

According to USAID officials, the agency conducts and reviews inventories of its backlog and requests to remove duplicates and closed cases; groups and classifies requests by necessary actions and responsive offices; and initiates immediate action. In addition, USAID seeks to identify tools and solutions to streamline records for review and processing. However, its backlog numbers have continually increased, from 201 in fiscal year 2012 to 318 in fiscal year 2016. USAID attributes that increase to an increase in the number of requests, the loss of FOIA staff, an increased complexity and volume of requests, competing priorities, and world events that may drive surges in requests.

Of the four agencies with the largest backlogs, all reported taking steps that, in some cases, included best practices identified by OIP; however, only NARA successfully reduced its backlog by the end of the 5-year period.

Justice officials noted that the department made efforts to reduce its backlog by incorporating best practices. Specifically, OIP worked with components within Justice through the Component Improvement Initiative to identify causes contributing to a backlog and assist components in finding efficiencies and overcoming challenges. The chief FOIA officer continued to provide top-level support to reduction efforts by convening the department's FOIA Council to manage overall FOIA administration. In addition, many of the components created their own reduction plans, which included hiring staff, utilizing technology, and providing more training, requester outreach, and multitrack processing.
However, despite these efforts, Justice's backlog steadily increased during the 5-year period, from 5,196 in fiscal year 2012 to 10,644 in fiscal year 2016, an overall increase of 105 percent. Justice attributes the increase in backlogs to several challenges, including an increase in incoming requests and an increase in the complexity of those requests. Other challenges that Justice noted were staff shortages and turnover, reorganization of personnel roles, time to train incoming staff, and the ability to fill positions previously held by highly qualified professionals.

NARA officials stated that one key step NARA took was to make corrections in its Performance Measurement and Reporting System. They noted that this system previously comingled backlogged requests with the number of pending FOIA requests, skewing the backlog numbers higher. The improvements included better accounting for pending and backlogged cases, distinguishing between simple and complex requests, and no longer counting as "open" those cases that were closed within 20 days but whose closure did not occur until the beginning of the following fiscal year. In addition, officials also stated that the FOIA program offices have been successful at working with requesters to narrow the scope of requests. NARA also stated that it was conducting an analysis of FOIA across the agency to identify any barriers in the process. Officials also identified other methods, including using multi-track processing, shifting priorities to address needs, improved communication with agencies, proactive disclosures, and the use of mediation services.

NARA has shown significant progress in reducing its backlog. In fiscal year 2012 it had a backlog of 7,610 requests, which spiked to 9,361 in fiscal year 2014. However, by fiscal year 2016, the number of backlogged requests had dropped to 2,932, even though the number of requests received more than doubled for that fiscal year.
However, NARA did note challenges to reducing its backlog numbers, namely, the increase in the number of requests received.

State developed and implemented a plan to reduce its backlog in fiscal year 2016. The plan incorporated two best practices by focusing on identifying the extent of the backlog problem and developing ways to address the backlog with available resources. According to State officials, the effort was dedicated to improving how FOIA data were organized and reported. Expedited and litigation cases were top priorities, whereas in other cases a "first in, first out" method was employed. Even with these efforts, however, State experienced a 117 percent increase in its backlog over the 5-year period. State's backlog more than doubled from 10,045 in fiscal year 2014 to 22,664 in fiscal year 2016. Among the challenges to managing its backlog, State reported an increase in incoming requests, a high number of litigation cases, and competing priorities. Specifically, the number of incoming requests for State increased by 51 percent during the 5-year period. State has also reported that it has allocated 80 percent of its FOIA resources to meet court-ordered productions associated with litigation cases, resulting in fewer staff to work on processing routine requests. This included, among other efforts, a significant allocation of resources in fiscal year 2015 to meet court-imposed deadlines to process emails associated with the former Secretary of State, resulting in a surge of backlogs.

In 2017, State began an initiative to actively address its backlogs. The Secretary of State issued an agency-wide memorandum describing the department's renewed efforts and committing more resources and workforce to backlog reduction. The memo states that new processes are to be implemented for both the short and long term, and the FOIA office has plans to work with the various bureaus to outline the tasks, resources, and workforce necessary to ensure success and compliance.
With renewed leadership support, State has reported significant progress in its backlog reduction efforts.

DHS, in its chief FOIA officer reports, reported that it implemented several plans to reduce backlogs. The DHS Privacy Office, which is responsible for oversight of the department's FOIA program, worked with components to help eliminate the backlog. The Privacy Office sent monthly emails to component FOIA officers on FOIA backlog statistics, convened management meetings, conducted oversight, and reviewed workloads. Leadership met weekly to discuss the oldest pending requests, appeals, and consultations, and determined needed steps to process those requests. In addition, several other DHS components implemented actions to reduce backlogs. Customs and Border Protection hired and trained additional staff, encouraged requesters to file requests online, established productivity goals, updated guidance, and utilized better technology. U.S. Citizenship and Immigration Services, the National Protection and Programs Directorate, and Immigration and Customs Enforcement increased staffing or developed methods to better forecast future workloads and ensure adequate staffing. Immigration and Customs Enforcement also implemented a commercial off-the-shelf web application, awarded a multimillion-dollar contract for backlog reduction, and detailed employees from various other offices to assist in the backlog reduction effort. Due to efforts by the Privacy Office and other components, the backlog dropped 66 percent in fiscal year 2015, decreasing to 35,374. Yet, despite the continued efforts in fiscal year 2016, the backlog increased again, to 46,788. DHS attributes the increases in backlogs to several factors, including an increase in the number of requests received, increased complexity and volume of responsive records for those requests, loss of staff, and active litigation with demanding production schedules.
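The backlog trends described in this section can be checked with simple percent-change arithmetic. The figures below are taken from the fiscal year 2012 through 2016 data reported above; the helper function itself is only an illustration (the NARA percentage is derived here and is not stated in the text):

```python
# Percent-change arithmetic behind the backlog trends reported above.
# Figures come from the report's fiscal year 2012-2016 data; the helper
# function is illustrative only.

def pct_change(start: int, end: int) -> int:
    """Rounded percent increase (positive) or decrease (negative)."""
    return round((end - start) / start * 100)

# Justice: 5,196 (FY2012) -> 10,644 (FY2016), the 105 percent increase.
print(pct_change(5196, 10644))   # 105

# DOI: 449 -> 677, the 51 percent increase.
print(pct_change(449, 677))      # 51

# NARA: 7,610 -> 2,932, a decrease of roughly 61 percent (derived).
print(pct_change(7610, 2932))    # -61
```

The same arithmetic applied to State's fiscal year 2014 and 2016 figures (10,045 and 22,664) confirms that its backlog more than doubled over those two years.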
One reason the eight agencies with significant backlogs may be struggling to consistently reduce their backlogs is that they lack documented, comprehensive plans that would provide a more reliable, sustainable approach to addressing backlogs. In particular, they do not have documented plans that describe how they will implement best practices for reducing backlogs over time, including specifying how they will use metrics to assess the effectiveness of their backlog reduction efforts and ensure that senior leadership supports backlog reduction efforts, among other best practices identified by OIP. While agencies with backlogs of 1,000 or more FOIA requests are required to describe backlog reduction efforts in their chief FOIA officer reports, these descriptions consist of a high-level narrative and do not include a specific discussion of how the agencies will implement best practices over time to reduce their backlogs. In addition, agencies with backlogs of fewer than 1,000 requests are not required to report on backlog reduction efforts; however, the selected agencies in our review with backlogs in the hundreds still experienced an increase over the 5-year period. Without a more consistent approach, agencies will continue to struggle to reduce their backlogs to a manageable level, particularly as the number and complexity of requests increase over time. As a result, their FOIA processing may not respond effectively to the needs of requesters and the public.

Various Types of Statutory Exemptions Exist and Many Have Been Used by Agencies

FOIA requires agencies to report annually to Justice on their use of statutory (b)(3) exemptions. This includes specifying which statutes they relied on to exempt information from disclosure and the number of times they did so.
To assist agencies in asserting and accounting for their use of these statutes, Justice instructs agencies to consult a running list of all the statutes that courts have found to qualify as proper (b)(3) statutes. However, agencies may also use a statute not included in the Justice list, because many statutes that appear to meet the requirements of (b)(3) have not been identified by a court as qualifying statutes. If an agency uses a (b)(3) statute that is not on the qualifying list, Justice guidance instructs the agency to include information about that statute in its annual report submission. Justice reviews the statute and provides advice to the agency, but does not make a determination on the appropriateness of using that statute under the (b)(3) exemption. Based on data agencies reported to Justice, during fiscal years 2010 to 2016, agencies claimed 237 statutes as the basis for withholding information. Of these statutes, 75 were included on Justice's list of qualifying statutes under the (b)(3) exemption (see appendix III for a list of these statutes). Further, we identified 140 additional statutes that were not among the 237 claimed by agencies during fiscal years 2010 to 2016, but that have provisions similar to other (b)(3) statutes authorizing an agency to withhold information from the public (see appendix IV for a list of these additional statutes). We found that the 237 statutes cited as the basis for (b)(3) exemptions during the period from fiscal years 2010 to 2016 fell into 8 general categories of information. These categories were (1) personally identifying information, (2) national security, (3) commercial, (4) law enforcement and investigations, (5) internal agency, (6) financial regulation, (7) international affairs, and (8) environmental. Figure 6 identifies the eight categories and the number of agency-claimed (b)(3) statutes in each of the categories.
Of the 237 (b)(3) statutes cited by agencies, the majority—178—fell into 4 of the 8 categories:

Forty-nine of these statutes related to withholding personally identifiable information, including, for example, a statute related to withholding death certificate information provided to the Social Security Administration.

Forty-five statutes related to the national security category. For example, one statute exempted files of foreign intelligence or counterintelligence operations of the National Security Agency.

Forty-two statutes were in the law enforcement and investigations category, including a statute that exempts from disclosure information provided to Justice pursuant to civil investigative demands pertaining to antitrust investigations.

Forty-two statutes fell into the commercial category. For example, one statute in this category related to withholding trade secrets and other confidential information related to consumer product safety.

The remaining 59 statutes were in four categories: internal agency functions and practices, financial regulation, international affairs, and environmental. The environmental category contained the fewest number of statutes and included, for example, a statute related to withholding certain air pollution analysis information. As required by FOIA, agencies also reported the number of times they used each (b)(3) statute. In this regard, 33 FOIA-reporting agencies indicated that they had used 10 of the 237 (b)(3) statutes more than 200,000 times. Of these 10 most-commonly used statutes, the single most-used statute (8 U.S.C. § 1202(f)) related to withholding records pertaining to the issuance or refusal of visas to enter the United States. It was used by 4 agencies over 58,000 times. Further, of the 10 most-commonly used statutes, the statute used by the greatest number of agencies (26 U.S.C. § 6103) related to the withholding of certain tax return information; it was used by 24 FOIA-reporting agencies about 30,000 times.
By contrast, some statutes were used by only a single agency. Specifically, the Department of Veterans Affairs used a statute related to withholding certain confidential veteran medical records (38 U.S.C. § 7332) more than 16,000 times. Similarly, EEOC used a statute related to employment discrimination on the basis of disability (42 U.S.C. § 12117) more than 10,000 times. Table 4 shows the 10 most-used statutes under the (b)(3) exemption, the agency that used each one most frequently, and the number of times they were used by that agency for the period covering fiscal years 2010 through 2016.

The OPEN FOIA Act of 2009 Limitation on (b)(3) Exemptions Has Had an Uneven Impact on Subsequent Legislation

The OPEN FOIA Act of 2009 amended FOIA to require that any federal statute subsequently enacted must specifically cite paragraph (b)(3) of FOIA to qualify as a (b)(3) exemption statute. Prior to 2009, a federal statute qualified as a statutory (b)(3) exemption if it (1) required that the matters be withheld from the public in such a manner as to leave no discretion on the issue, or (2) established particular criteria for withholding or referred to particular types of matters to be withheld. According to statements by the sponsor of the legislation during the Senate debate, (b)(3) statutory exemptions should be clear and unambiguous, and vigorously debated by Congress before they are enacted into law. In response to the amendment, in 2010, Justice released guidance to agencies stating that any statute enacted after 2009 must specifically cite the (b)(3) exemption to qualify as a withholding statute under FOIA. Further, the guidance encouraged agencies to contact Justice with questions regarding the implementation of the amendment. In our review of the 237 (b)(3) statutes claimed by agencies during fiscal years 2010 through 2016, 21 of these statutes were initially enacted and 82 were amended after 2009. Of the 21 statutes initially enacted after 2009, 9 cited (b)(3).
Further, of the 82 statutes amended after 2009, 9 cited (b)(3). While reflecting provisions of law authorizing or requiring the withholding of agency information from the public, the number of these statutes not having a reference to the (b)(3) exemption is evidence of the OPEN FOIA Act's uneven impact on the establishment of statutory FOIA exemptions.

Agencies Received and Processed FOIA Requests for Information Related to the Troubled Asset Relief Program

As previously noted, FOIA requires federal agencies to provide the public with access to various types of information that can contribute to the understanding of government operations. One of these areas has related to the 2008 financial crisis, in which the Emergency Economic Stabilization Act of 2008 played a significant role in stabilizing the federal financial system. The act initially authorized $700 billion to assist financial institutions and markets, businesses, and homeowners through TARP, although that authorization was later reduced to $475 billion. Treasury, which was given authority under the act, established the Office of Financial Stability to carry out the program's activities. These activities included injecting capital into key financial institutions, implementing programs to address problems in the securitization markets, providing assistance to the automobile industry, and offering incentives for modifying residential mortgages. In addition, federal financial regulators—FDIC, the Federal Reserve Board, and the Office of the Comptroller of the Currency—each played a key role in regulating and monitoring financial institutions. Following the law's enactment, in certain periods from 2008 through 2014, three corporations—AIG, GM, and Ally—received federal financial assistance that amounted to 50 percent or more ownership by the federal government.
The actions with regard to TARP subsequently led to Treasury and the three financial regulatory agencies receiving FOIA requests for government records related to the three corporations. Specifically, the Federal Reserve Board, FDIC, the Office of the Comptroller of the Currency, and Treasury received 166 FOIA requests for information about these three corporations from September 2008 through January 2014. The requests asked for various agency records related to the corporations, for example, records related to Treasury's stewardship and oversight of AIG and its subsidiaries; records related to the Federal Reserve Board and Ally specific to the individual submitting the FOIA request's review; records concerning GM's contract with the Stillwater Mining Company; and all communications between the Office of the Comptroller of the Currency and AIG from June 2007 through March 2009. Of the 166 requests, 88 were processed as full grant, partial grant, or full denial; 34 were withdrawn by the requester; 24 were closed because the agency responded that it had no records regarding the requests; and 20 fell into other disposition categories. Table 5 summarizes the disposition/resolution of the FOIA requests that each of the four federal agencies received relating to information on AIG, GM, and Ally for certain periods from September 2008 to January 2014 (the time frame during which the government held 50 percent or more of the corporations' common stock), and the type of disposition used most often to close the requests.

Conclusions

The 18 agencies we reviewed had fully implemented half of the six key FOIA requirements, and the vast majority of agencies implemented two additional requirements. However, only 5 agencies published and updated their FOIA regulations in a timely and comprehensive manner. Fully implementing FOIA requirements will better position agencies to provide the public with necessary access to government records and ensure openness in government.
Selected agencies varied considerably in the size of their backlogs. While 10 reported a backlog of 60 or fewer requests, 4 had backlogs of over 1,000 per year. Agencies identified a variety of methods that they used to address their backlogs, including practices identified by Justice, as well as additional methods. However, the selected agencies varied in the success they achieved in reducing their backlogs. This was due, in part, to a lack of plans that describe how the agencies will implement best practices for reducing backlogs over time. Until agencies develop plans to reduce backlogs, they will be limited in their ability to respond effectively to the needs of requesters and the public.

Recommendations for Executive Action

We are making a total of 24 recommendations to 16 agencies in our review. Specifically:

The Secretary of the American Battle Monuments Commission should designate a chief FOIA officer at the assistant secretary level or equivalent. (Recommendation 1)

The Secretary of the American Battle Monuments Commission should update and publish comprehensive FOIA regulations that include requirements established by law and Justice guidance. (Recommendation 2)

The Chief Executive Officer and Director of the Broadcasting Board of Governors should update and publish comprehensive FOIA regulations that include requirements established by law and Justice guidance. (Recommendation 3)

The Secretary of DHS should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 4)

The Secretary of DOI should ensure its FOIA tracking system is compliant with Section 508 requirements. (Recommendation 5)

The Secretary of DOI should provide frequently requested records online. (Recommendation 6)

The Secretary of DOI should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 7)

The Chair of EEOC should designate a chief FOIA officer at the assistant secretary level or equivalent. (Recommendation 8)

The Chair of EEOC should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 9)

The Chairman of the FTC should designate a chief FOIA officer at the assistant secretary level or equivalent. (Recommendation 10)

The Attorney General of the United States should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 11)

The Archivist of the United States should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 12)

The Administrator of NASA should update and publish comprehensive FOIA regulations that describe dispute resolution services and notify requesters of the 90-day period for filing appeals. (Recommendation 13)

The Administrator of NASA should provide agency records of final opinions online. (Recommendation 14)

The Chairman of NTSB should provide frequently requested records online. (Recommendation 15)

The Chairman of NTSB should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 16)

The Director of OMB should update and publish comprehensive FOIA regulations that include requirements established by law and Justice guidance. (Recommendation 17)

The Director of OMB should designate a chief FOIA officer at the assistant secretary level or equivalent. (Recommendation 18)

The Director of the Pension Benefit Guaranty Corporation should designate a chief FOIA officer at the assistant secretary level or equivalent. (Recommendation 19)

The Secretary of State should update and publish comprehensive FOIA regulations that describe dispute resolution services and notify requesters of the 90-day period for filing appeals. (Recommendation 20)

The Secretary of State should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 21)

The President of TVA should ensure its FOIA tracking system is compliant with Section 508 requirements. (Recommendation 22)

The Administrator of USAID should take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. (Recommendation 23)

The President of the U.S. African Development Foundation should update and publish comprehensive FOIA regulations that inform requesters of limited unusual circumstances fees. (Recommendation 24)

Agency Comments and Our Evaluation

We requested comments on a draft of this report from the 21 agencies included in our review. Of the 16 agencies to which we made recommendations, 9 agreed with all of the recommendations directed to them; 1 agreed with two recommendations and disagreed with one; 2 disagreed with all of the recommendations; and 4 did not state whether they agreed or disagreed with our recommendations. In addition, 5 agencies to which we did not make recommendations stated that they had no comments on the report. Multiple agencies also provided technical comments, which we have incorporated, as appropriate. The following 9 agencies agreed with our recommendations: In emails received from the American Battle Monuments Commission and the Broadcasting Board of Governors, the two agencies stated that they agreed with the recommendations in our report. In written comments, reprinted in appendix V, DHS stated that it concurred with our recommendations.
Regarding the recommendation to designate a chief FOIA officer, the department stated that it had delegated the full authority and responsibility of DHS's FOIA operations and programs to the chief privacy officer. The department asserted that its chief privacy officer is the equivalent of an assistant secretary, as required, because the official is appointed by the Secretary under 6 U.S.C. § 142 without Senate confirmation in accordance with the Appointments Clause of the U.S. Constitution. Further, the department stated that the chief privacy officer position meets the senior executive service standard under 5 U.S.C. § 3132(a)(2) and, accordingly, is comparable to a senior executive level position. Thus, the department believes it is already in compliance with the requirement to designate a chief FOIA officer at the assistant secretary level or equivalent. For the reasons that it cited, DHS requested that GAO consider this recommendation to be resolved and closed. Based on our analysis of the additional information that the department provided to explain the senior executive level position of the chief privacy officer, we agree with DHS regarding the position's equivalency to an assistant secretary within the department. Accordingly, we have removed this recommendation from our report. Concerning the second recommendation, to develop and document a plan that fully addresses practices with regard to the reduction of backlogged requests, DHS stated that it plans to initiate a department-wide compliance assessment of FOIA operations to identify the components with the most significant backlog problems and the "root causes" for these problems. The department said it then intends to develop a proposed plan for backlog reduction. In written comments, reprinted in appendix VI, Justice stated that it agreed with our recommendation and will develop a plan to address its backlog of FOIA requests to the fullest extent possible.
Justice added that, in fiscal year 2017, it was able to improve all of its processing times and close all 10 of the department's oldest requests, appeals, and consultations, thus reducing the overall age of its backlog. In written comments, reprinted in appendix VII, NARA stated that it is currently working to develop and document a plan that is intended to fully address best practices to reduce its backlog of FOIA requests, as we recommended. The agency said it expects to complete its plan by the end of December 2018. In written comments, reprinted in appendix VIII, NASA said that it concurred with our two recommendations. With regard to the first recommendation, the agency stated that it is currently working to update its FOIA regulations and that the revisions are to include the 90-day appeal rights, as well as describe requesters' rights to obtain dispute resolution services from NASA's FOIA public liaisons and OGIS. With regard to the second recommendation, the agency stated that it is currently working to identify subject matter areas on which the agency can reach final opinions as interpreted under FOIA. The agency added that, upon identification, it will begin posting final opinions online. In written comments, reprinted in appendix IX, State concurred with our two recommendations and, accordingly, noted that it is currently working to update its FOIA regulations and evaluate methods to improve its backlog reduction efforts. In written comments, reprinted in appendix X, USAID stated that it concurred with our recommendation and will develop a formal plan that delineates currently employed best practices to reduce its FOIA backlog. In comments provided via email, the United States African Development Foundation's General Counsel concurred with our recommendation. The foundation stated that it will take steps to update its FOIA regulations.
This is to include informing requesters about limited unusual circumstances fees and publishing the updated regulation in the Federal Register. One agency agreed with two recommendations and disagreed with one other recommendation: In written comments, reprinted in appendix XI, DOI concurred with the recommendation to make its FOIA tracking system Section 508-compliant and stated that it is currently testing its system for compliance. The department also concurred with the recommendation that it provide frequently requested records online. However, the department did not concur with our recommendation to develop and document a plan that fully addresses best practices for the reduction of backlogged FOIA requests. The department stated that, in Justice's OIP guidance, the creation of a formal backlog reduction plan applies only to agencies with more than 1,000 backlogged requests in a given year. The department said that DOI did not fall into this category and, therefore, was not required to develop such a plan. Although DOI's existing backlog of FOIA requests did not meet the threshold identified in Justice's guidance, the department, nonetheless, experienced a 51 percent increase in backlogged FOIA requests from fiscal years 2012 to 2016. Thus, having a plan and practices for reducing backlogged requests could help the department ensure that its backlog remains manageable and that DOI is effectively positioned to respond to the needs of requesters and the public. Accordingly, we believe that our recommendation to develop a plan that addresses best practices to reduce the backlog is still warranted. In addition, 2 agencies disagreed with our recommendations: In written comments, reprinted in appendix XII, the Pension Benefit Guaranty Corporation disagreed with our recommendation that it designate a chief FOIA officer at the assistant secretary level or equivalent. The agency said it does not have assistant secretary positions.
The agency added that it believes its current chief FOIA officer's position is equivalent to the assistant secretary level and that this official is an appropriate designee. We disagree that the current chief FOIA officer's position is equivalent to the assistant secretary level. However, the Pension Benefit Guaranty Corporation's General Counsel position is at a level that is equivalent to an assistant secretary. As such, assigning the position to the General Counsel could help ensure that the chief FOIA officer has the necessary authority to make decisions about agency practices, personnel, and funding. Accordingly, we believe our recommendation is still warranted. In written comments, reprinted in appendix XIII, TVA disagreed with our recommendation to ensure that its FOIA tracking system is compliant with Section 508 of the Rehabilitation Act. The agency stated that, based on the January 18, 2017, revised Section 508 standards, its current FOIA tracking system meets the standard related to having a user interface but does not meet the criteria for accessibility of electronic content. The agency added that the current single user of its system does not require accessibility accommodations; thus, it would be an undue burden for the agency to make the system comply with the Section 508 requirements. While TVA's current FOIA system does not require accessibility accommodations and, in the agency's view, would be unduly burdensome to modify, as the agency undertakes further modernization of its IT systems and software, it should ensure that its FOIA system is compliant with Section 508 requirements. Accordingly, we stand by our recommendation to the agency. Further, 4 agencies did not state whether they agreed or disagreed with the report, although 2 of them offered other comments: In emails received from EEOC and NTSB, the agencies did not agree or disagree with the draft report.
EEOC offered technical comments, which we incorporated, as appropriate, while NTSB said it had no comment. In written comments, reprinted in appendix XIV, FTC acknowledged that its chief FOIA officer is not at the assistant secretary level. FTC also noted that it is a small agency in which there are no position titles of assistant secretary level or equivalent. Further, the agency stated that it believes its chief FOIA officer holds a sufficiently senior position (associate general counsel) with the necessary authority to fulfill the functions of the chief FOIA officer. Nevertheless, FTC stated that it would take our recommendation (to designate a chief FOIA officer at the assistant secretary level or equivalent) under advisement. Although FTC is a small agency and does not have positions at the assistant secretary level, we disagree that the current chief FOIA officer's position is sufficiently senior to fulfill the functions required of this position. However, assigning the chief FOIA officer position to the General Counsel, or an equivalent level position, could help ensure that the chief FOIA officer will have the necessary authority to make decisions about the agency's practices, personnel, and funding for the implementation of FOIA. As such, we believe our recommendation is still warranted. In comments provided via email from its GAO liaison, OMB stated that it does not have a position in its organization with the specific title of assistant secretary. However, the agency noted that, on March 7, 2018, the OMB Director designated the OMB General Counsel to serve as the agency's chief FOIA officer. According to OMB, the chief FOIA officer reports to the Director. Based on the documentation received, we are in agreement with OMB that the position of General Counsel is equivalent to an assistant secretary within the agency. Accordingly, we consider this recommendation to be closed.
The remaining 5 agencies to which we did not make recommendations stated that they did not have any comments on our report. These agencies were the Administrative Conference of the United States, FDIC, the Federal Reserve Board, OCC, and Treasury. We are sending copies of this report to the Secretaries of the American Battle Monuments Commission, Homeland Security, Interior, State, and the Treasury; the Attorney General of the United States; the Archivist of the United States; the Comptroller of the Currency; the Administrators of the National Aeronautics and Space Administration and the United States Agency for International Development; the Board of Governors of the Federal Reserve System; the Chairmen of the Administrative Conference of the United States, Equal Employment Opportunity Commission, Federal Deposit Insurance Corporation, and National Transportation Safety Board; the Chief Executive Officer and Director of the Broadcasting Board of Governors; the Directors of the Office of Management and Budget and Pension Benefit Guaranty Corporation; the Presidents of the Tennessee Valley Authority and United States African Development Foundation; and the Acting General Counsel of the Federal Trade Commission. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XV.
Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) determine the extent to which agencies have implemented selected Freedom of Information Act (FOIA) requirements; (2) describe the methods established by agencies to reduce backlogged requests and the effectiveness of those methods; (3) identify any statutory (b)(3) exemptions that have been used by agencies as the basis for withholding (redacting) information; and (4) determine what FOIA requests, if any, agencies received and processed that related to entities that received government assistance during the 2008 financial crisis. To address the first and second objectives, we selected 18 agencies to review based on the number of FOIA requests received, the sizes of FOIA backlogs, and the average time of processing FOIA requests for fiscal years 2012 through 2016. We also chose the agencies to represent a range of sizes (by number of employees)—large (10,000 or more), medium (1,000 to 9,999), and small (999 or fewer). Large agencies selected were the Departments of Homeland Security, Justice, State, and the Interior; the National Aeronautics and Space Administration; and the Tennessee Valley Authority. Medium agencies were the National Archives and Records Administration, the Federal Deposit Insurance Corporation, the Equal Employment Opportunity Commission, the Broadcasting Board of Governors, the U.S. Agency for International Development, and the Federal Trade Commission. Small agencies were the National Transportation Safety Board, the American Battle Monuments Commission, the Pension Benefit Guaranty Corporation, the U.S. African Development Foundation, the Office of Management and Budget, and the Administrative Conference of the United States. For our first objective, to determine the extent to which agencies had implemented FOIA requirements, we examined six FOIA requirements outlined in the FOIA Improvement Act of 2016 and the OPEN Government Act of 2007.
These requirements were for agencies to (1) update response letters, (2) implement tracking systems, (3) provide FOIA training, (4) provide records online, (5) designate chief FOIA officers, and (6) update and publish timely and comprehensive regulations. For these six requirements, we reviewed (1) agencies' FOIA regulations to determine whether they included updates from the 2016 FOIA amendments and the 2007 OPEN Government Act, and whether they were updated by the required deadline; (2) agencies' FOIA systems to determine whether the systems provided individualized tracking numbers for requests that would take longer than 10 days to process, and whether agencies established telephone or Internet services to allow requesters to track the status of their requests; (3) whether agencies had designated a chief FOIA officer and what position that official held within the agency; (4) whether agencies' chief FOIA officers provided annual FOIA training opportunities to agency staff; (5) whether agencies had appropriately updated response letters in compliance with the 2016 FOIA amendments; and (6) whether agencies were making electronic documents publicly available online and posting frequently requested documents as required by the 2016 FOIA amendments. Since we selected a nonprobability sample of FOIA reporting agencies, the results of this analysis are not generalizable to all FOIA reporting agencies. In addition, we also reviewed the requirement for the development of a government-wide FOIA request portal and met with Office of Management and Budget (OMB) officials and Department of Justice (Justice) officials in the Office of Information Policy (OIP) to discuss the status of development. Further, we met with the Chief FOIA Officers Council, OIP, and the National Archives and Records Administration's (NARA) Office of Government Information Services (OGIS) to determine what actions, if any, they have taken to assist agencies in complying with the provisions of FOIA.
For our second objective, to determine the methods established by agencies to reduce backlogged requests and the effectiveness of those methods, we reviewed agency documentation to evaluate if the selected agencies had developed methods for reducing backlogged FOIA requests. We identified requirements for agencies to produce backlog reduction plans and determined if agencies developed such plans as required. We analyzed agencies' FOIA.gov data to determine if there was a correlation between the presence of a backlog reduction plan and a reduction in backlog numbers. We compared a set of identified best practices for reducing backlogs with agency procedures to determine the extent to which the best practices were used. In addition, we interviewed agency officials to determine the reasons for changes in agency backlog numbers and what actions they were taking to reduce backlogs or implement reduction plans. The results of this analysis are not generalizable to all FOIA reporting agencies. For our third objective, to identify statutory (b)(3) exemptions that have been used by agencies as the basis for withholding information, we developed a catalog of (b)(3) statutes that agencies previously have used to withhold information in FOIA records. To do that, we retrieved all data on agency use of (b)(3) statutes that were readily accessible on Justice's FOIA.gov website. The data on FOIA.gov cover fiscal years 2008 to 2016; however, Justice acknowledged that data prior to 2010 were not available on FOIA.gov for all agencies. Therefore, we reviewed data for fiscal years 2010 to 2016. In total, 117 distinct agencies provided annual report data for at least 1 fiscal year during fiscal years 2010 through 2016.
We developed a catalog by extracting information from the aggregate of agency annual FOIA reports that report, among other things, usage of (b)(3) statutes, including the statute's citation and the number of times the statute was used to withhold information in a fiscal year. To assess the reliability of the data we retrieved from FOIA.gov, we supplemented our analysis with interviews of FOIA officials in Justice's OIP on steps they have taken to ensure the consistency of data in FOIA.gov on agencies' use of (b)(3) statutes. Our analysis did not include assessing the reliability of (b)(3) statute data submitted by agencies—Justice guidance states it is the responsibility of each agency to ensure quality data in their reports. We also electronically tested the data by identifying outliers, missing values, and syntactical discrepancies. We found the data to be sufficiently reliable for purposes of our reporting objective. To facilitate our analysis, we refined our catalog listing of agencies' use of (b)(3) statutes by developing a standardized statute notation assigned to each agency-used statute in our list. Specifically, our standardization of agency-used statutes consisted of removing any typographical errors, ensuring statutes were noted in a consistent U.S. Code format and referred to an existing U.S. Code section, and verifying the existence of each statute through legal research, as well as standardizing any current notations of the statute, such as those transferred within the U.S. Code by later legislation. If no current notation existed, then that statute was listed as is; for example, "15 U.S.C. § 80a-30(c)" was used by an agency but was repealed during our review period, and no replacement notation could be found. For some U.S. Code statutes, we standardized statutes to an entire section or subsection to reference nondisclosure provisions that contain a description of the type of information withheld by that statute. Further, for some U.S.
Code statutes that agencies used as a range of statutes, such as 7 U.S.C. §§ 7411-7425, we determined whether the range contained a single (b)(3) statute section or multiple sections and developed a standardized statute for each (b)(3) section to assign to the original agency statute. In some cases, where agencies used a smaller range of statutes, such as 21 U.S.C. §§ 1903-1905, we retained the notation and assigned a standardized version of the range to the original agency-used statute range. Additionally, for some U.S. Code statutes that agencies used that contained two (b)(3) statutes, such as 26 U.S.C. §§ 6103 and 6105, we developed a standardized statute for each (b)(3) section to assign to the original agency statute. For those agency-used statutes that could not be immediately standardized or seemed to be noted in error, we either assigned that statute to a related section (or sections) containing a nondisclosure provision, retained the notation and assigned a standardized version of the statute to the original agency-used statute, or removed that statute from our catalog. For example, an agency claimed 15 U.S.C. § 7301 as a (b)(3) statute; however, that statute is a purpose section, and 15 U.S.C. § 7306 was the only related nondisclosure provision in that chapter or subchapter of the Code. Therefore, § 7301 was assigned to the standardized citation § 7306. Each standardized statute was counted as a single statute, regardless of the number of sections it represented, resulting in a total of 237 statutes. Following our standardization exercise, we developed descriptions of each statute's subject matter. We also compared our standardized statutes list to Justice's list of qualified statutes to identify those statutes that a court has approved as qualifying as a (b)(3) statute. Next, we classified these statutes into 10 general categories based on their descriptions.
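The standardization step described above can be partially illustrated in code. The sketch below is a simplification under stated assumptions: it normalizes a few common notation variants and splits combined citations into one standardized statute per section, but the actual catalog work also relied on legal research that no string handling can replace.

```python
import re

def standardize_citations(raw):
    """Return standardized 'TITLE U.S.C. § SECTION' citations for one
    agency-reported entry. A sketch only, covering a few notation variants."""
    text = raw.strip()
    # Normalize common notation variants such as "USC" or "Sec.".
    text = re.sub(r"\bU\.?S\.?C\.?(?=\s|$)", "U.S.C.", text)
    text = re.sub(r"\b[Ss]ec(?:tion)?\.?\s+", "§ ", text)
    m = re.match(r"(\d+)\s+U\.S\.C\.\s*§{0,2}\s*(.+)", text)
    if not m:
        return []  # could not standardize; flagged for manual review
    title, rest = m.groups()
    # A combined citation like "6103 and 6105" yields one statute per section.
    return [f"{title} U.S.C. § {s.strip()}" for s in re.split(r"\s+and\s+", rest)]

print(standardize_citations("26 USC Sec. 6103"))           # ['26 U.S.C. § 6103']
print(standardize_citations("26 U.S.C. §§ 6103 and 6105"))
```

Entries the function cannot standardize correspond to the statutes described above that required legal review before being assigned, retained, or removed.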
To determine usage of (b)(3) statutes by agencies, we calculated the number of times an agency used original agency-used statutes and assigned those numbers to the associated standardized statutes in our catalog. In cases where an agency appeared to cite multiple statutes, such as 26 U.S.C. §§ 6103 and 6105, we counted the statutes separately if we determined they were different. For example, if an agency used 26 U.S.C. §§ 6103 and 6105 a total of 500 times during fiscal years 2010 to 2016, we would assign that number to each standardized statute in our catalog to ensure that 26 U.S.C. § 6103 and 26 U.S.C. § 6105 each received 500 as the number of times used. We compiled and sorted these data to obtain information on which agencies were using the statute, which agency used it the most, and the approximate number of times the statute was used by an agency. To identify which statutes qualified as a (b)(3) exemption under the OPEN FOIA Act of 2009, we determined the date of the most recent legislative action for each standardized statute by identifying the dates of enactment and the most recent amendments of the statutes. We then identified those statutes enacted or amended after 2009 and determined if they cited FOIA's paragraph (b)(3) by including a citation to 5 U.S.C. § 552(b)(3) or "paragraph (b)(3) of section 552 of title 5, United States Code," or a similar citation that includes a reference to paragraph (b)(3). To identify any additional statutes that the reviewed agencies did not claim during fiscal years 2010 to 2016, we developed another catalog of statutes that have provisions similar to other (b)(3) statutes that authorize an agency to withhold information from the public.
Specifically, we used various sources to compile our list of statutes, including annual Justice reports on statutes determined by courts to constitute a (b)(3) statute, the National Institute of Standards and Technology's Guide for Mapping Types of Information and Information Systems to Security Categories, and two external nongovernmental organizations (American University Washington College of Law and ProPublica). In addition, we separately searched the U.S. Code for the keyword "552(b)(3)" using Lexis Nexis to identify any additional statutes for our catalog. However, this additional catalog does not serve as a definitive or comprehensive list of (b)(3) statutes available for agencies to claim. Specifically, FOIA gives agencies broad discretion in deciding whether they can withhold information on the basis of a statute. For example, FOIA allows agencies to assert a federal statute under the (b)(3) exemption if that statute establishes particular criteria or refers to particular types of matters to be withheld. Therefore, the statutes we identified may undercount the total number of exemptions available to agencies. For our fourth objective, to determine the number and types of FOIA requests related to private corporations that received funds under the Troubled Asset Relief Program (TARP), we reviewed the Department of the Treasury's (Treasury) Monthly Reports to Congress (October 2008 and November 2014) and prior GAO reports relating to TARP. We identified the corporations that received TARP funds and the federal agencies that received FOIA requests related to these corporations by reviewing Treasury's monthly reports for the time period in which Treasury held 50 percent or more common stock in corporations that were under the TARP agreement. We also reviewed prior GAO reports on TARP to verify the corporations and time period. In addition, we met with Treasury officials to verify the entities and time period.
The three corporations that received TARP funds were American International Group, General Motors, and Ally. The agencies that received FOIA requests about these corporations were Treasury, the Federal Deposit Insurance Corporation (FDIC), the Federal Reserve Board, and the Office of the Comptroller of the Currency. We met with these agencies to identify their involvement in providing assistance to companies related to TARP. Next, we reviewed FOIA requests received by these four agencies during the period in which Treasury owned 50 percent or more of the common shares in the corporations. We reviewed the FOIA requests to determine the resolution of the request and the length of time it took the agency to respond. Lastly, we interviewed agency officials to better understand if and how FOIA requests were received and processed. We conducted this performance audit from January 2017 through June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Freedom of Information Act Exemptions The Freedom of Information Act (FOIA) prescribes nine specific categories of information that are exempt from disclosure. These exemptions are described in the table below. Appendix III: Catalog of (b)(3) Exemption Statutes Agencies Claimed during Fiscal Years 2010 through 2016 Table 7 describes 237 (b)(3) exemption statutes used by FOIA reporting agencies during fiscal years 2010 through 2016 and indicates whether that statute has been found by a court to qualify as a (b)(3) exemption. Specifically, the Department of Justice, in its oversight role, identified 78 statutes that courts have ruled qualify as a (b)(3) statute.
During fiscal years 2010 through 2016, when responding to FOIA requests, agencies used 75 of these statutes as the basis for withholding information. Appendix IV: Catalog of Statutes Authorizing the Withholding of Information but Not Used by Agencies under the (b)(3) Exemption during Fiscal Years 2010 through 2016 Table 8 identifies 140 additional statutes, outside of our agency-used catalog, that we did not identify as used by agencies during our fiscal year 2010 through 2016 review period. These statutes have provisions similar to other (b)(3) exemption statutes, authorizing an agency to withhold information from the public. Appendix V: Comments from the Department of Homeland Security Appendix VI: Comments from the Department of Justice Appendix VII: Comments from the National Archives and Records Administration Appendix VIII: Comments from the National Aeronautics and Space Administration Appendix IX: Comments from the Department of State Appendix X: Comments from the U.S. Agency for International Development Appendix XI: Comments from the Department of the Interior Appendix XII: Comments from the Pension Benefit Guaranty Corporation Appendix XIII: Comments from the Tennessee Valley Authority Appendix XIV: Comments from the Federal Trade Commission Appendix XV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Anjalique Lawrence (Assistant Director), Lori Martinez (Analyst in Charge), Gerard Aflague, Melina Asencio, David Blanding, Kami Brown, Christopher Businsky, Caitlin Cusati, Haley Dunn, Elena Epps, Rebecca Eyler, Nancy Glover, James Andrew Howard, Saida Hussain, Robert Letzler, Lee McCracken, Carlo Mozo, Brian Palmer, David Plocher, Di'Mond Spencer, Sukhjoot Singh, Henry Sutanto, and Priscilla Smith made key contributions to this report.
Why GAO Did This Study FOIA requires federal agencies to provide the public with access to government records and information based on the principles of openness and accountability in government. Each year, individuals and entities file hundreds of thousands of FOIA requests. In the last 9 fiscal years, federal agencies subject to FOIA have received about 6 million requests. GAO was asked to review federal agencies' compliance with FOIA requirements. GAO's objectives, among others, were to (1) determine the extent to which agencies have implemented selected FOIA requirements; (2) describe the methods established by agencies to reduce backlogged requests and the effectiveness of those methods; and (3) identify any statutory exemptions that have been used by agencies as the basis for withholding (redacting) information from requesters. To do so, GAO selected 18 agencies based on their size and other factors and assessed their policies against six FOIA requirements. GAO also reviewed the agencies' backlog reduction plans and developed a catalog of statutes that agencies have used to withhold information. What GAO Found All 18 selected agencies had implemented three of six Freedom of Information Act (FOIA) requirements reviewed. Specifically, all agencies had updated response letters to inform requesters of the right to seek assistance from FOIA public liaisons, implemented request tracking systems, and provided training to FOIA personnel. For the three additional requirements, 15 agencies had provided online access to government information, such as frequently requested records, 12 agencies had designated chief FOIA officers, and 12 agencies had published and updated their FOIA regulations on time to inform the public of their operations. Until these agencies address all of the requirements, they increase the risk that the public will lack information that ensures transparency and accountability in government operations.
The 18 selected agencies had backlogs of varying sizes, with 4 agencies having backlogs of 1,000 or more requests during fiscal years 2012 through 2016. These 4 agencies reported using best practices identified by the Department of Justice, such as routinely reviewing metrics, as well as other methods, to help reduce their backlogs. Nevertheless, these agencies' backlogs fluctuated over the 5-year period (see figure). The 4 agencies with the largest backlogs attributed challenges in reducing their backlogs to factors such as increases in the number and complexity of FOIA requests. However, these agencies lacked plans that described how they intend to implement best practices to reduce backlogs. Until agencies develop such plans, they will likely continue to struggle to reduce backlogs to a manageable level. Agencies used various types of statutory exemptions to withhold information when processing FOIA requests during fiscal years 2010 to 2016. The majority of these fell into the following categories: personally identifiable information, national security, law enforcement and investigations, and confidential and commercial business information. What GAO Recommends GAO is making recommendations to 16 agencies to post records online, designate chief FOIA officers, update regulations, and develop plans to reduce backlogs. Nine agencies agreed with the recommendations, 1 both agreed and disagreed, 2 disagreed, and 4 neither agreed nor disagreed. GAO continues to believe the recommendations are valid.
Background To help manage its multi-billion dollar acquisition investments, DHS has established policies and processes for acquisition management, requirements development, test and evaluation, and resource allocation. The department uses these policies and processes to deliver systems that are intended to close critical capability gaps, helping enable DHS to execute its missions and achieve its goals. Acquisition Management Policy DHS policies and processes for managing its major acquisition programs are primarily set forth in its Acquisition Management Directive 102-01 and Acquisition Management Instruction 102-01-001. DHS issued the initial version of this directive in November 2008 in an effort to establish an acquisition management system that effectively provides required capability to operators in support of the department’s missions. DHS’s Under Secretary for Management is currently designated as the department’s Chief Acquisition Officer and, as such, is responsible for managing the implementation of the department’s acquisition policies. DHS’s Under Secretary for Management serves as the acquisition decision authority for the department’s largest acquisition programs, those with LCCEs of $1 billion or greater. Component Acquisition Executives—the most senior acquisition management officials within each of DHS’s components—may be delegated acquisition decision authority for programs with cost estimates between $300 million and less than $1 billion. Table 1 identifies how DHS has categorized the 28 major acquisition programs we review in this report, and table 7 in appendix III specifically identifies the programs within each level. DHS acquisition management policy establishes that a major acquisition program’s decision authority shall review the program at a series of predetermined acquisition decision events to assess whether the major program is ready to proceed through the acquisition life-cycle phases. 
Depending on the program, these events can occur within months of each other, or be spread over several years. Figure 1 depicts the acquisition life cycle established in DHS acquisition management policy. An important aspect of an acquisition decision event is the decision authority’s review and approval of key acquisition documents. See table 2 for a description of the type of key acquisition documents requiring department-level approval before a program moves to the next acquisition phase. DHS acquisition management policy establishes that the APB is the agreement between program, component, and department-level officials establishing how systems will perform, when they will be delivered, and what they will cost. Specifically, the APB establishes a program’s schedule, costs, and key performance parameters. DHS defines key performance parameters as a program’s most important and non- negotiable requirements that a system must meet to fulfill its fundamental purpose. For example, a key performance parameter for an aircraft may be airspeed and a key performance parameter for a surveillance system may be detection range. The APB schedule, costs, and key performance parameters are defined in terms of an objective and minimum threshold value. According to DHS policy, if a program fails to meet any schedule, cost, or performance threshold approved in the APB, it is considered to be in breach. Programs in breach are required to notify their acquisition decision authority and develop a remediation plan that outlines a time frame for the program to return to its APB parameters, re-baseline—that is, establish new schedule, cost, or performance goals—or have a DHS-led program review that results in recommendations for a revised baseline. 
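The breach rule described above, where missing any schedule, cost, or performance threshold places a program in breach, can be illustrated with a minimal check. All parameter names and values here are hypothetical, not drawn from any actual DHS program baseline.

```python
# Illustrative only: a toy APB with one threshold of each type.
apb_thresholds = {
    "full_operational_capability_year": 2023,  # latest acceptable date
    "life_cycle_cost_millions": 1500,          # highest acceptable cost
    "detection_range_miles": 10,               # minimum acceptable performance
}

def in_breach(actual):
    """Return the list of APB threshold types a program has failed to meet.
    Schedule and cost breach when actuals exceed the threshold; the
    performance parameter breaches when the actual falls below it."""
    breaches = []
    if actual["full_operational_capability_year"] > apb_thresholds["full_operational_capability_year"]:
        breaches.append("schedule")
    if actual["life_cycle_cost_millions"] > apb_thresholds["life_cycle_cost_millions"]:
        breaches.append("cost")
    if actual["detection_range_miles"] < apb_thresholds["detection_range_miles"]:
        breaches.append("performance")
    return breaches

program = {"full_operational_capability_year": 2025,
           "life_cycle_cost_millions": 1400,
           "detection_range_miles": 12}
print(in_breach(program))  # ['schedule']: FOC slipped past the threshold year
```

A non-empty result corresponds to the policy's breach condition, which triggers notification of the acquisition decision authority and a remediation plan, re-baseline, or DHS-led program review.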
In addition to the acquisition decision authority, other bodies and senior officials support DHS’s acquisition management function: The Acquisition Review Board reviews major acquisition programs for proper management, oversight, accountability, and alignment with the department’s strategic functions at acquisition decision events and other meetings as needed. The board is chaired by the acquisition decision authority or a designee and consists of individuals who manage DHS’s mission objectives, resources, and contracts. The Office of Program Accountability and Risk Management (PARM) is responsible for DHS’s overall acquisition governance process, supports the Acquisition Review Board, and reports directly to the Under Secretary for Management. PARM develops and updates program management policies and practices, reviews major programs, provides guidance for workforce planning activities, provides support to program managers, and collects program performance data. Components, such as U.S. Customs and Border Protection, the Transportation Security Administration, and the U.S. Coast Guard sponsor specific acquisition programs. The head of each component is responsible for oversight of major acquisition programs once the programs complete delivery of all planned capabilities to end users. Component Acquisition Executives within the components are responsible for overseeing the execution of their respective portfolios. Program management offices, also within the components, are responsible for planning and executing DHS’s individual programs. They are expected to do so within the cost, schedule, and performance parameters established in their APBs. If they cannot do so, programs are considered to be in breach and must take specific steps, as noted above. Figure 2 depicts the relationship between acquisition managers at the department, component, and program level. 
Requirements Development Process DHS established a Joint Requirements Council (JRC) to develop and lead a component-driven joint requirements process for the department. The JRC has issued policies outlining a process for analyzing and validating capability gaps, needs, and requirements. The JRC consists of a chair and 14 members who are senior executives or officers that represent key DHS headquarters offices and seven of the department's operational components. The JRC chair rotates annually among the seven operational components. JRC members represent the views of their components or office leadership, endorse and prioritize validated capability needs and operational requirements (user-defined performance parameters outlining what a system must do), and make recommendations that are supported by analytical rigor. Figure 3 depicts the current headquarters and component members of the JRC. The JRC provides input to two senior-level entities: The Acquisition Review Board—as a member, the JRC chair advises the board on capability gaps, needs, and requirements at key milestones in the acquisition life cycle. The Deputy's Management Action Group, which the Secretary established in April 2014, is a decision-making body that is chaired by the Deputy Secretary. Its membership consists of the DHS Chief of Staff, DHS Under Secretaries, senior operational component deputies and select support component deputies, and the Chief Financial Officer. The group provides recommendations to the Deputy Secretary for consideration in the annual resource allocation process that reflects DHS's investment priorities. The group reviews JRC-validated capability needs and recommendations, provides direction and guidance to the JRC, and endorses or directs related follow-on JRC activities.
The JRC is responsible for validating proposed capability needs and requirements for all major acquisitions, as well as for programs that are joint or of interest to the Deputy's Management Action Group, regardless of level. See table 3 for a description of the key requirements documents requiring JRC validation. In general, the DHS requirements development process moves from broad mission needs and capability gaps to operational requirements. See figure 4. Test and Evaluation Policy In May 2009, DHS established policies that describe processes for testing the capabilities delivered by the department's major acquisition programs. The primary purpose of test and evaluation is to provide timely, accurate information to managers, decision makers, and other stakeholders to reduce programmatic, financial, schedule, and performance risks. We provide an overview of each of the 28 programs' test activities in the individual program assessments presented in appendix I. DHS testing policy assigns specific responsibilities to particular individuals and entities throughout the department: Program managers have overall responsibility for planning and executing their programs' testing strategies, including scheduling and funding test activities and delivering systems for testing. They are also responsible for controlling developmental testing, which is used to assist in the development and maturation of products, manufacturing, or support processes. Developmental testing includes engineering-type tests used to verify that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for operational testing.
Operational test agents are responsible for planning, conducting, and reporting on operational test and evaluation, which is intended to identify whether a system can meet its key performance parameters and provide an evaluation of the operational effectiveness, suitability, and cybersecurity of a system in a realistic environment. Operational effectiveness refers to the overall ability of a system to provide a desired capability when used by representative personnel. Operational suitability refers to the degree to which a system can be placed into field use and sustained satisfactorily. The operational test agents may be organic to the component, another government agency, or a contractor, but must be independent of the developer in order to present credible, objective, and unbiased conclusions. The Director, Office of Test and Evaluation is responsible for approving major acquisition programs’ operational test agent and test and evaluation master plans, among other things. A program’s test and evaluation master plan must describe the developmental and operational testing needed to determine technical performance and operational effectiveness, suitability, and cybersecurity. As appropriate, the Director is also responsible for observing operational tests, reviewing operational test agents’ reports, and assessing the reports. Prior to a program’s acquisition decision event 3, the Director provides the program’s acquisition decision authority a letter of assessment that includes an appraisal of the program’s operational test, a concurrence or non-concurrence with the operational test agent’s evaluation, and any further independent analysis. As an acquisition program proceeds through its life cycle, the testing emphasis moves gradually from developmental testing to operational testing. See figure 5. 
Resource Allocation Process DHS has established a planning, programming, budgeting, and execution process to allocate resources to acquisition programs and other entities throughout the department. DHS uses this process to produce the department’s annual budget request and multi-year funding plans presented in the FYHSP, a database that contains, among other things, 5-year funding plans for DHS’s major acquisition programs. According to DHS guidance, the 5-year plans should allow the department to achieve its goals more efficiently than an incremental approach based on 1-year plans. DHS guidance also states that the FYHSP articulates how the department will achieve its strategic goals within fiscal constraints. At the outset of the annual resource allocation process, the department’s Offices of Policy and Chief Financial Officer provide planning and fiscal guidance, respectively, to the department’s components. In accordance with this guidance, the components should submit 5-year funding plans to the Chief Financial Officer. These plans are subsequently reviewed by DHS’s senior leaders, including the DHS Secretary and Deputy Secretary. DHS’s senior leaders are expected to modify the plans in accordance with their priorities and assessments, and they document their decisions in formal resource allocation decision memorandums. DHS submits the revised funding plans to the Office of Management and Budget, which uses them to inform the President’s annual budget request—a document sent to Congress requesting new budget authority for federal programs, among other things. In some cases, the funding appropriated to certain accounts in a given fiscal year can be carried over to subsequent fiscal years. Figure 6 depicts DHS’s annual resource allocation process. Federal law requires DHS to submit an annual FYHSP report to Congress at or about the same time as the President’s budget request. This report presents the 5-year funding plans in the FYHSP database at that time. 
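The role of the FYHSP's 5-year funding plans can be illustrated with a simple gap computation: a funding gap exists in any year the plan falls short of a program's estimated needs. The program figures below are hypothetical, not actual FYHSP data.

```python
# Hypothetical: annual acquisition funding a program needs (from its cost
# estimate) versus what the 5-year plan provides, in millions of dollars.
estimated_need = {2018: 250, 2019: 300, 2020: 280, 2021: 260, 2022: 240}
planned_funding = {2018: 200, 2019: 300, 2020: 250, 2021: 260, 2022: 240}

# A funding gap exists in any year the plan falls short of the estimate.
gaps = {year: estimated_need[year] - planned_funding[year]
        for year in estimated_need
        if planned_funding[year] < estimated_need[year]}

print(gaps)  # {2018: 50, 2020: 30}
```

Comparing plans and estimates this way, year by year across a portfolio, is the kind of affordability check the FYHSP's multi-year funding tables support.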
Two offices within DHS’s Office of the Chief Financial Officer support the annual resource allocation process: The Office of Program Analysis and Evaluation (PA&E) is responsible for establishing policies for the annual resource allocation process and overseeing the development of the FYHSP. In this role, PA&E develops the Chief Financial Officer’s planning and fiscal guidance, reviews the components’ 5-year funding plans, advises DHS’s senior leaders on resource allocation issues, maintains the FYHSP database, and submits the annual FYHSP report to Congress. The Cost Analysis Division is responsible for reviewing, analyzing, and evaluating acquisition programs’ LCCEs to ensure the cost of DHS programs are presented accurately and completely, in support of resource requests. This division also supports affordability assessments of the department’s budget, in coordination with PA&E, and develops independent cost estimates for major acquisition programs upon request by DHS’s Under Secretary for Management or Chief Financial Officer. During 2017, 10 of the 24 Programs with Approved Schedule and Cost Goals Were on Track Of the 24 programs we assessed with approved schedule and cost goals, 10 were on track to meet those goals during 2017. The other 14 programs were not on track because they changed or breached their schedule goals, cost goals, or both. We found that most programs updated their cost estimates in response to requirements DHS established in January 2016 that are intended to provide decision makers with more timely information. These actions are in accordance with GAO’s best practice to regularly update cost estimates and we plan to use these updated estimates to measure programs’ cost changes going forward. Based on our April 2014 recommendation, DHS revised the format of its fiscal year 2018–2022 FYHSP report to Congress to include acquisition affordability tables for select major acquisition programs. 
However, the report shows—and our analysis of programs' current cost estimates confirms—that some programs face acquisition funding gaps in fiscal year 2018. We also reviewed 4 programs that were early in the acquisition process and planned to establish department-approved schedule and cost goals in calendar year 2017. However, these programs were delayed in getting department approval for their initial APBs for various reasons and, therefore, we excluded them from our assessment of whether programs were on track to meet their schedule and cost goals during 2017. DHS leadership subsequently approved initial APBs for 2 particularly complex and costly programs—a border wall system along the southwest U.S. border and the Coast Guard's Heavy Polar Icebreaker—in January 2018. We plan to assess these programs in next year's review, but provide more details on all 4 additional programs we reviewed in the individual assessments in appendix I. Table 4 summarizes our findings and we present more detailed information after the table. Ten Programs Were on Track during 2017 From January 2017 to January 2018, 10 of the 24 programs we assessed with department-approved APBs were on track to meet their schedule and cost goals. This is fewer than in our last annual review, in which we found that 17 of the 26 programs we assessed were on track during 2016. Three of the 10 programs on track during 2017 were on track against initial schedule and cost goals; that is, the schedule and cost estimates in the baseline DHS leadership initially approved after the department's acquisition management policy went into effect in November 2008. The other 7 programs had re-baselined prior to January 2017 and were on track against revised schedules and cost estimates that reflected past schedule slips, cost growth, or both. However, some of the programs on track in 2017 identified risks that may lead to schedule slips or cost growth in the future.
For example, officials from the Technology Infrastructure Modernization program told us that staffing challenges may impede their ability to execute the program in accordance with its current APB. We also identified 2 programs that are in the process of re-baselining or plan to re-baseline in the near future to account for significant program changes or to add capabilities. For example, the Next Generation Networks Priority Services program plans to update its APB to establish schedule, cost, and performance goals for the next increment, which is intended to address landline capabilities for providing government officials emergency telecommunication services. Fourteen Programs Were Not on Track during 2017 During 2017, 14 of the 24 programs we assessed with department- approved APBs were not on track. Twelve of these programs had at least one major acquisition milestone that slipped, including 6 of these programs that also changed or breached their cost goals. Two additional programs changed or breached only their cost goals. Programs with Schedule Slips during 2017 As of January 2018, 6 of the 12 programs that experienced a schedule slip were in breach and had not yet revised their goals. Therefore, the magnitude of the schedule slips is unknown. For the remaining 6 programs, the change in schedule during 2017 ranged from a delay of 6 months to 66 months. Figure 7 identifies the programs that experienced schedule slips and the extent to which their major milestones slipped in 2017, as well as—for additional context—in prior years. While there are various reasons for schedule delays, the result is that end users may not get needed capabilities when they originally anticipated. Examples of the reasons why these key milestones slipped in 2017 include the following: New requirements: For example, the Passenger Screening Program re-baselined in May 2017 for the fifth time since its initial APB was approved in January 2012. 
This latest re-baseline was to remediate a 17-month breach caused by delays in incorporating new cybersecurity requirements in one of the program’s transportation security equipment technologies, known as the Credential Authentication Technology. The program now plans to achieve full operational capability for this system by December 2023—more than 9 years later than it initially planned. In another example, the Tactical Communications Modernization program re-baselined in November 2017—4 months after the program notified DHS leadership that it would not achieve full operational capability as planned. The reason for this re-baseline was to resolve issues related to federal information security requirements. The program now plans to achieve this milestone by March 2019, which is more than a year later than its initial APB threshold. Technical challenges: For example, the Continuous Diagnostics and Mitigation program re-baselined in June 2017 to account for significant coverage gaps identified during the deployment of phase 1 sensors and to establish cost, schedule, and performance goals for phase 3 tools. The program’s full operational capability date slipped almost 4 years after this milestone was redefined as the point in time at which phase 1–3 tools are available to all participating civilian agencies. Additionally, the Automated Commercial Environment program declared a schedule breach in April 2017—its second in less than a year—after encountering difficulties developing its remaining functionality. These difficulties have caused further delays to the program’s final acquisition milestone decision. External factors: Officials from the Logistics Supply Chain Management System program notified DHS leadership in September 2017 that the program would not complete all required activities to achieve acquisition decision event 3 and subsequent events, including full operational capability. 
The primary reason for the delay was that program staff were deployed to support response and recovery efforts during the 2017 hurricane season. Additionally, the Medium Lift Helicopter program experienced delays in getting key acquisition documents approved in time to achieve its acquisition decision event 3. These delays were attributed, in part, to DHS leadership directing Customs and Border Protection to develop a comprehensive border plan that included the helicopter’s capabilities. We elaborate on the reasons for all 12 programs’ schedule slips in the individual assessments in appendix I. Programs with Cost Goal Changes or Breaches during 2017 Of the 14 programs not on track during 2017, 8 revised or breached their established cost goals. Four of these 8 programs revised their cost goals when they re-baselined to address new requirements and technical challenges, among other things. When the Passenger Screening Program re-baselined in May 2017, the program’s APB threshold for its life-cycle costs increased by $418 million (8 percent) over its previous APB. However, the revised threshold is $1 billion below the threshold established in the program’s initial APB, which was approved in January 2012. From 2012 to 2015, the program’s scope was reduced in response to funding constraints. However, emerging threats drove the program to increase capability requirements, which has subsequently increased costs. When the Continuous Diagnostics and Mitigation program re-baselined in June 2017, the APB threshold for life-cycle costs decreased by $15 million (1 percent). However, the program shifted some acquisition costs to operations and maintenance (O&M) to be consistent with DHS’s new common appropriations structure. This, in addition to other changes, increased the APB threshold for O&M by $631 million (3,712 percent).
When the National Security Cutter program re-baselined in November 2017 to account for a ninth ship—as directed by Congress—the APB cost thresholds for acquisition and O&M increased by $453 million (8 percent) and $123 million (1 percent), respectively. When the Immigration and Customs Enforcement’s TECS Modernization program re-baselined in November 2017 in preparation for acquisition decision event 3, the APB cost thresholds increased overall. Specifically, the acquisition cost threshold decreased by $14 million (6 percent) when the program included actual costs through fiscal year 2016, among other things, and the O&M cost threshold increased by $147 million (92 percent) when the program extended the estimate by 4 years and included support costs for an additional 11 years. The other 4 programs breached their established cost goals during 2017. The Medium Lift Helicopter and Electronic Baggage Screening programs breached certain APB cost thresholds when they shifted costs between categories, such as O&M to acquisitions or vice versa, to be consistent with DHS’s new common appropriations structure. The Tactical Communications Modernization program experienced a cost breach primarily because of increases in costs for contractor labor and support for facilities and infrastructure. The program’s APB cost threshold for O&M increased by $110 million (23 percent) when it re-baselined in November 2017. The Automated Commercial Environment program experienced a cost breach because it had to extend its contracts to address the development difficulties discussed above. The magnitude of the program’s cost goal changes is not yet known because the program does not plan to revise its APB until August 2018. We elaborate on the reasons for all 8 programs’ cost goal changes or breaches in the individual program assessments in appendix I. 
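The dollar and percentage figures above follow one rule: each percentage compares the dollar change to the program’s prior APB cost threshold, which also lets a reader back out the approximate prior threshold from a reported change. A minimal sketch of that arithmetic (the dollar amounts come from the figures above; the helper names are our own, and the implied thresholds are approximate because the reported figures are rounded):

```python
def pct_change(delta, prior):
    """Percent change in an APB cost threshold, given the dollar
    change and the prior threshold (both in millions)."""
    return 100 * delta / prior

def implied_prior(delta, pct):
    """Back out the approximate prior threshold from a reported
    dollar change and percent change."""
    return 100 * delta / pct

# Tactical Communications Modernization: O&M threshold rose $110M (23%),
# implying a prior O&M threshold of roughly $478M.
print(round(implied_prior(110, 23)))    # prints 478

# Continuous Diagnostics and Mitigation: O&M rose $631M (3,712%),
# implying a prior O&M threshold of only about $17M, which explains
# how a modest dollar increase can produce a very large percentage.
print(round(implied_prior(631, 3712)))  # prints 17
```

The contrast between the two cases illustrates why both the dollar change and the percent change are reported: a small baseline makes even a routine shift of costs between appropriation categories look dramatic in percentage terms.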
DHS Has Taken Steps to Enhance Cost Reporting While Some Programs Still Face Funding Gaps In January 2016, based on several of our past recommendations, DHS required major acquisition programs to begin submitting to headquarters (1) detailed data on program affordability, such as updates to the program’s LCCE and funding source information, to help inform the department’s annual resource allocation process, and (2) an annual LCCE update. These requirements are intended to provide more timely information that may improve DHS’s efforts to address acquisition program affordability issues, as well as internal and external oversight of programs’ progress against their cost goals. These actions are in accordance with GAO’s cost estimating best practices, which state that cost estimates should be updated with actual costs so that they are always relevant and current. As a result, we have used these sources to provide the programs’ current estimate in the individual assessments in appendix I, as appropriate, and plan to use these data sources to measure programs’ cost changes going forward. According to officials from the Cost Analysis Division, a program’s annual LCCE update should inform the affordability submission to support the annual resource allocation process and can be completed at any point during the fiscal year leading up to this process. We examined documentation to ascertain whether the programs we reviewed complied with the two requirements. For the 24 programs we assessed with department-approved APBs, we found the following: All 24 programs submitted the detailed data on program affordability to headquarters by June 2017 to inform the fiscal year 2019 resource allocation cycle. All but three of the programs’ submissions accounted for changes since their last LCCE was approved by DHS’s Chief Financial Officer.
For example, the Long Range Surveillance Aircraft program’s submission reflected no updates from its November 2011 LCCE because the program was in the process of re-baselining to account for significant changes. The program began re-baselining nearly 3 years ago and has been delayed for various reasons, including challenges with the vendor hired to complete a revision of the program’s LCCE. Eighteen of the 24 programs submitted annual LCCE updates. Three programs—Automated Commercial Environment, H-65, and Transformation—did not submit an annual LCCE update because they were in breach. The other 3 programs—all within the Coast Guard—did not submit an annual LCCE because, according to Coast Guard officials, they have limited internal cost estimating capability and rely on outside sources for this service, which led to delays in completing the annual LCCEs for these programs. Coast Guard officials said they are reviewing options to resolve these delays and improve the Coast Guard’s cost estimating capability. Cost Analysis Division officials anticipate the Coast Guard will increase compliance with the annual LCCE requirement in fiscal year 2018. They also plan to update the annual LCCE template to include additional information, such as comparisons of the updated estimates to the program’s APB cost goals and projected funding. In addition, DHS revised the format of its FYHSP report to Congress, improving insight into major programs’ acquisition funding, but decreasing insight into O&M funding. In April 2014, we found that DHS could better communicate its funding needs for acquisition programs to Congress and recommended that DHS enhance the content for future FYHSP reports by presenting programs’ annual cost estimates and any anticipated funding gaps, among other things. 
DHS concurred with the recommendation and, for the first time, included acquisition affordability tables that presented programs’ annual acquisition cost estimates compared to projected acquisition funding for select major acquisition programs in its FYHSP report for fiscal years 2018–2022. However, DHS no longer reported O&M funding for individual programs. DHS reported in the FYHSP that it focused on acquisition information because O&M funding estimates are generally stable year-to-year and components manage O&M in various ways, such as by individual program or across a portfolio of programs. By removing O&M funding information from the FYHSP for all programs, DHS presents an incomplete picture of programs’ full funding needs and affordability. In April 2018, we assessed in greater detail the extent to which DHS had accounted for O&M costs and funding and recommended that DHS resume reporting O&M funding at the acquisition program level in its FYHSP report to Congress for all components. DHS officials stated that they plan to re-introduce O&M funding for major acquisition programs in the FYHSP report for fiscal years 2019–2023 based on multiple internal discussions about the best way to present a more comprehensive view of programs’ total costs and feedback from key stakeholders, such as the Office of Management and Budget. Based on the information presented in the FYHSP report for fiscal years 2018–2022, DHS’s acquisition portfolio is not affordable over the next 5 years. For example, the report contained acquisition affordability tables for 18 of the 24 programs we assessed that have approved APBs. Of these 18 programs, 9 were projected to have an acquisition affordability gap in fiscal year 2018. However, some of these projections are outdated since the FYHSP report—which was issued in September 2017—relied on cost information as of April 2016.
Therefore, we updated these tables using the programs’ current acquisition cost estimate presented in the individual assessments in appendix I. Based on our assessment of programs’ current cost estimates, we also found that a total of 9 programs are projected to have an acquisition affordability gap in fiscal year 2018. However, 3 of these 9 programs differed from those identified in the FYHSP report. Of the 9 programs we identified with a projected acquisition affordability gap in fiscal year 2018, we found the following: Five programs identified other funding, such as funding from previous fiscal years that remained available for obligation—known as carryover funding—which would address their projected acquisition funding gap. For example, in the FYHSP report, DHS projected allocating approximately $16 million in funding for the Technology Infrastructure Modernization program in fiscal year 2018 to cover an estimated $16 million in acquisition costs. However, in its November 2017 annual LCCE update, this program’s acquisition cost increased to almost $30 million, resulting in a projected acquisition affordability gap of almost 45 percent. The program plans to realign $57 million in O&M carryover funding to cover this and any future acquisition shortfalls. Four programs did not identify other funding that would address their projected acquisition funding gap, which increases the likelihood that they will cost more and take longer to deliver capabilities to end users than expected. For example, in the FYHSP report, DHS projected allocating $109 million in funding for the Non-Intrusive Inspection Systems program in fiscal year 2018 to cover an estimated $103 million in acquisition costs. However, in its April 2017 annual LCCE update, this program’s acquisition costs increased to nearly $186 million, resulting in a projected acquisition affordability gap of 41 percent.
The program identified only $2.5 million in fiscal year 2017 acquisition carryover funding. Further, 5 of the 24 programs we assessed were not included in the fiscal years 2018–2022 FYHSP report because they were no longer expected to receive acquisition funding. Officials from 3 of these 5 programs projected funding gaps that could cause future program execution challenges, such as schedule slips or cost growth. For example, the National Bio and Agro-Defense Facility projects a funding shortfall of approximately $90 million over the next 5 years, which officials said could delay a number of activities to make the facility operational. We elaborate on programs’ affordability over the next 5 years in the individual program assessments in appendix I. DHS’s Policies Generally Reflect Key Portfolio Management Practices, but Opportunities Exist to Leverage Programs’ Post-Implementation Results We assessed DHS’s policies outlining the department’s processes for acquisition management, resource allocation, and requirements and found that, when considered collectively, they generally reflect key portfolio management practices. In March 2007, we examined the practices that private sector entities use to achieve a balanced mix of new projects and found that successful commercial companies use a disciplined and integrated approach to prioritize needs and allocate resources when making investments. This approach, known as portfolio management, requires companies to view each of their investments as contributing to a collective whole, rather than as independent and unrelated. With this perspective, companies can effectively (1) identify and prioritize opportunities, and (2) allocate available resources to support the highest priority—or most promising—opportunities. Based on this and other work, we identified four key practice areas for portfolio management in September 2012.
We previously assessed DHS’s acquisition management and resource allocation policies against our key portfolio management practices in September 2012 and April 2014, respectively. We found that the policies in place at the time of our reviews did not fully reflect all of the key portfolio management practices and recommended that DHS revise its policies to do so. DHS concurred with our recommendations and subsequently took actions to mature and solidify the department’s portfolio management processes and policies. In April 2014, the Secretary of Homeland Security issued a memorandum titled Strengthening Departmental Unity of Effort, which aimed to strengthen DHS’s structures and processes to improve departmental cohesiveness and operational effectiveness, among other things. The memorandum identified several initial focus areas intended to build organizational capacity, one of which centered on improving and integrating the department’s processes for acquisition oversight, resource allocation, and joint requirements analysis. To improve these processes, the memorandum directed senior DHS leaders to update the existing acquisition management and resource allocation processes, as well as lead an expedited review to provide alternatives for developing and facilitating a component-driven joint requirements process, which ultimately led to the re-establishment of the Joint Requirements Council (JRC). In response to our recommendations and the Unity of Effort memorandum, DHS issued new policies outlining the acquisition management, resource allocation, and requirements processes in 2016. We assessed these policies and found that, when considered collectively, they generally reflect the key portfolio management practices, as shown in table 5. Because DHS’s new policies were issued in 2016, we did not specifically assess DHS’s implementation of them.
However, we did review documentation resulting from the acquisition management, resource allocation, and requirements processes since January 2016 to get a sense of how the department began implementation. Examples of how DHS’s policies reflect the key portfolio management practices and their implementation status are outlined below. Clearly define and empower leadership: the policies identify the roles and responsibilities for decision makers in the acquisition management, resource allocation, and requirements processes, as well as establish cross-functional teams to support those decision makers. For example, to fulfill the role of acquisition decision authority, the Under Secretary for Management is supported by the Acquisition Review Board, which consists of key DHS senior leaders responsible for managing the department’s finances, contracts, and testing, among other things. We reviewed the memorandums issued since January 2016 that document Acquisition Review Board decisions and found that, through this group, DHS has taken steps to manage across programs through its acquisition management process. For example, after reviewing the status of several individual Customs and Border Protection programs in 2016, the Acquisition Review Board identified the need for a comprehensive border plan that depicts the component’s current land, maritime, and air domain awareness capabilities. In October 2016, the Deputy Under Secretary for Management—who was serving as acquisition decision authority at the time—directed Customs and Border Protection to develop such a plan. The plan is to consist of separate analyses for each of the three domains—starting with land— that reflect end users’ capability requirements for systems, such as Integrated Fixed Towers, Multi-Role Enforcement Aircraft, and Medium Lift Helicopter, that address relevant domain threats. 
As of February 2018, Customs and Border Protection had not yet completed the analysis for land domain awareness capabilities. Establish standard assessment criteria and demonstrate comprehensive knowledge of the portfolio: the policies establish standard criteria for assessing major acquisition programs through the acquisition management, resource allocation, and requirements processes. For example, the updated resource allocation handbook established that PA&E conduct annual assessments of all major investments using standard criteria in five main categories—contribution to DHS’s mission, program health, risk, resources, and governance—to assess the portfolio of investments and present alternatives for leadership decision. PA&E officials told us they used these criteria when assessing components’ resource allocation requests during development of the President’s fiscal year 2018 budget to develop funding options for the Deputy’s Management Action Group, which is responsible for making resource allocation recommendations for the Secretary’s approval. PA&E presented its funding options by DHS mission, which, according to officials associated with the Deputy’s Management Action Group, allowed the group to make cross-component allocation decisions that directly aligned with the department’s strategic goals. We could not verify these officials’ assertions based on the documentation we were provided, but will continue to monitor PA&E’s assessment of major acquisition programs against the standard criteria as the department’s implementation of its resource allocation policies matures. In addition, the Office of Program Accountability and Risk Management (PARM) formally established its Acquisition Program Health Assessments in October 2016 after more than a year of development and pilot efforts.
These assessments are intended to monitor major acquisition programs quarterly (both on an individual program level and in aggregate) by rating programs against standard criteria in several categories—such as program management, financial management, and human capital—that DHS deemed important for successful program execution. We reviewed the quarterly reports issued from January 2016 to April 2017 and found that they primarily focused on individual programs. The portfolio-level information contained in these reports was limited to program results grouped in various categories, such as by component, by acquisition life-cycle phase, and by investment type (e.g., information technology). PARM officials said they plan to use the health assessments as a portfolio management tool in the future and are working to determine how best to analyze and present portfolio-level data. We will continue to track PARM’s implementation of the health assessment process moving forward through GAO’s High Risk work to determine DHS’s progress in demonstrating that major acquisition programs are on track to achieve their established goals. Prioritize investments by integrating the requirements, acquisition, and budget processes: the policies identify areas where DHS’s requirements, acquisition management, and resource allocation processes are integrated and establish processes for prioritizing investments. For example, the updated resource allocation policies require reviews of DHS’s major acquisition portfolio during this annual process. When the portfolio faces a funding gap, programs are to be returned to their respective components for scope or funding adjustments, or prioritized by department leadership to identify an affordable set of programs.
For the fiscal year 2018 resource allocation cycle, PA&E officials provided an example where DHS leadership directed components to identify funding from alternative sources to fund specific purposes related to DHS’s mission to prevent terrorism and enhance security. However, as previously discussed, the resulting FYHSP report for fiscal years 2018–2022 showed that DHS’s portfolio of major acquisition programs is not affordable over the next 5 years. In addition, the requirements policies established the Joint Assessment of Requirements, an annual process to prioritize emerging and existing requirements to inform the department’s resource allocation decisions. As we found in October 2016, the JRC plans to implement the Joint Assessment of Requirements through a 3-year phased approach that is expected to be fully implemented in time to inform DHS’s fiscal year 2021 budget request. In fiscal year 2016, the JRC completed the first phase, which included (1) developing initial criteria to evaluate emerging requirements, and (2) evaluating and prioritizing a sample of those requirements against the initial criteria. Based on these results, JRC officials told us in September 2017 that they are working to develop assessment metrics for the criteria as part of the next phase. We will continue to track the JRC’s progress through GAO’s High Risk work to determine DHS’s progress in effectively operating the JRC. Continually make go/no go decisions to rebalance the portfolio: the requirements policies outlining the Joint Assessment of Requirements process also reflected the key practices to conduct reviews (1) annually to make requirement scoping adjustments as priorities change and (2) when new investments are identified. However, as previously discussed, the JRC is still implementing this process.
We consider this overall key practice area to be partially met because DHS’s policies do not reflect the key practice (3) to reassess programs that breach established thresholds within the context of the portfolio to determine if the program remains relevant and affordable. PARM officials told us that—in practice—DHS reassesses programs in the context of their component’s overall acquisition portfolio based on a certification of funds memorandum submitted to DHS’s Chief Financial Officer when programs re-baseline as a result of a cost, schedule, or performance breach. The memorandum is intended to enable the Acquisition Review Board to discuss affordability by certifying a program’s funding levels and identifying trade-offs necessary to address any projected funding gaps. We previously found that the certification of funds memorandum was an effective tool for DHS leadership to assess program affordability. However, DHS’s acquisition management policy requires components to submit this memorandum prior to most acquisition decision events, but not when a program re-baselines as a result of a cost, schedule, or performance breach. During our review of programs’ progress against schedule and cost goals in 2017, we found one instance where a component did not follow the practice to submit this memorandum when one of its programs re-baselined as a result of a breach. Specifically, Customs and Border Protection did not submit a certification of funds memorandum when the Tactical Communications Modernization program re-baselined in November 2017 as a result of a schedule and cost breach. Nevertheless, DHS leadership approved the program’s revised APB and removed it from breach status, even though DHS’s Chief Financial Officer identified that the program’s revised LCCE was not affordable. PARM officials stated that this instance was an oversight because, at the time, the department was still determining when certification of funds memorandums should be submitted. 
According to the federal standards for internal control, documentation of internal control practices is necessary so that they can be implemented effectively. By amending its acquisition management policy to require a certification when a program re-baselines as a result of a cost, schedule, or performance breach, DHS can ensure that leadership receives the necessary information to reassess that program’s affordability in the context of a larger portfolio. PARM officials stated that, moving forward, components will be required to submit a certification of funds memorandum for each program when a new APB is submitted for DHS leadership approval. In contrast, the acquisition management policy does reflect the key practice (4) to use information gathered from post-implementation reviews to fine-tune investment processes and the portfolio to achieve strategic outcomes. For example, DHS’s acquisition management policy requires programs to conduct post-implementation reviews 6 to 18 months after initial operational capability to identify and document any deployment or implementation and coordination issues, how they were resolved, and how they could be prevented in the future. These reviews are intended to help identify capability gaps that may inform future acquisitions, among other things. However, PARM officials said that they do not consider the results of the post-implementation reviews when managing the department’s current acquisition portfolio because these reviews are typically conducted after program oversight shifts from PARM to the component. While post-implementation reviews are conducted later in the acquisition life cycle, the insights they provide could be leveraged by other programs in the acquisition portfolio, not just the program under review. For example, the Integrated Fixed Towers program completed a post-implementation review in June 2016 after its initial deployment of capabilities to the Arizona border.
The review found that changes in illegal traffic patterns as a result of the program’s deployment may be predicted, and other technologies may be able to compensate for changes in these patterns. This information could help other programs under development plan for similar outcomes or enable DHS to change deployment plans for existing programs to address changes in threats. PARM has an opportunity to use the results from programs’ post-implementation reviews since it is responsible for overseeing the department’s acquisition portfolio by monitoring each investment’s cost, schedule, and performance against established baselines. Federal standards for internal control state that management should obtain data on a timely basis so that they can be used for effective monitoring and that separate evaluations may provide feedback on the effectiveness of ongoing monitoring. By leveraging the results from post-implementation reviews in its monitoring efforts, PARM may be better able to ensure that programs in the current acquisition portfolio achieve their baselines. PARM officials stated they have generally focused on leveraging information gathered from canceled acquisition programs, such as where and why plans went wrong. However, they agreed that they could better leverage post-implementation review information gathered from programs that complete planned capability deployments. Conclusions DHS’s mission to safeguard the American people and homeland requires a broad portfolio of acquisitions. However, the performance of DHS’s major acquisition portfolio during 2017 did not improve compared to our last review because we found that more programs will require more time and may require more money to complete than initially planned. DHS is collecting more timely cost estimate information on its acquisition programs to make more informed investment decisions.
Yet DHS continues to face challenges in funding its acquisition portfolio, which highlights the need for disciplined policies that reflect best practices to ensure that the department does not pursue more programs than it can afford. DHS leadership has taken positive steps in recent years by strengthening its policies for acquisition management and resource allocation, and establishing policies related to requirements. Collectively, these policies reflect an integrated approach to managing investments. However, opportunities remain to further strengthen the acquisition management policy by documenting DHS’s current practice to reassess programs that breach their established cost, schedule, or performance thresholds to ensure they are still worth pursuing within the context of the portfolio. Additionally, leveraging information learned once programs complete deployment across the acquisition portfolio could help ensure that programs stay on track against their baselines in the first place. This is particularly relevant because DHS is initiating a number of complex and costly acquisition programs, such as development of a wall system along the southwest border and the Coast Guard’s Heavy Polar Icebreaker, which could benefit from this type of information. Recommendations for Executive Action We are making the following two recommendations to DHS: The Under Secretary for Management should update DHS’s acquisition management policy to require components to submit a certification of funds memorandum when a major acquisition program re-baselines in response to a breach. (Recommendation 1) The Under Secretary for Management should require PARM to assess the results of major acquisition programs’ post-implementation reviews and identify opportunities to improve performance across the acquisition portfolio. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to DHS for review and comment. 
In its comments, reproduced in appendix IV, DHS concurred with both of our recommendations and identified actions it planned to take to address them. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Program Assessments

This appendix presents individual assessments for each of the 28 programs we reviewed. Each assessment presents information current as of January 2018. They include standard elements, such as an image, a program description, and summaries of the program’s progress in meeting cost and schedule goals, performance and testing activities, and program management-related issues, such as staffing. Each assessment also includes the following figures:

Fiscal Years 2018–2022 Affordability. This figure compares the funding plan presented in the Future Years Homeland Security Program report to Congress for fiscal years 2018–2022 to the program’s current cost estimate. We use this funding plan because the data are approved by the Department of Homeland Security (DHS) and Office of Management and Budget, and were submitted to Congress to inform the fiscal year 2018 budget process. The figure only presents acquisition funding because DHS did not report operations and maintenance (O&M) funding for individual programs in its funding plan to Congress. In addition, the data do not account for other potential funding sources, such as carryover.

Acquisition Program Baseline (APB) vs. Current Estimate.
This figure compares the program’s cost thresholds from the initial APB approved after DHS’s acquisition management policy went into effect in November 2008 and the program’s current DHS-approved APB to the program’s expected costs as of January 2018. The source for the current estimate is the most recent cost data we collected (i.e., a department-approved life-cycle cost estimate, updated life-cycle cost estimates submitted during the resource allocation process to inform the fiscal year 2019 budget request, or a fiscal year 2017 annual life-cycle cost estimate update).

Schedule Changes. This figure consists of two timelines that identify key milestones for the program. The first timeline is based on the initial APB DHS leadership approved after the department’s current acquisition management policy went into effect. The second timeline identifies when the program expected to reach its major milestones as of January 2018 and includes milestones introduced after the program’s initial APB. Dates shown are based on the program’s APB threshold dates or updates provided by the program office.

Test Status. This table identifies key recent and upcoming test events. It also includes DHS’s Director, Office of Test and Evaluation’s assessment of programs’ test results, if an assessment was conducted.

Staffing Profile. This figure identifies the total number of staff a program needs (measured in full-time equivalents), including how many are considered critical and how many staff the program actually has.

Lastly, each program assessment summarizes comments provided by the program office and identifies whether the program provided technical comments.

AUTOMATED COMMERCIAL ENVIRONMENT (ACE) CUSTOMS AND BORDER PROTECTION (CBP)

The ACE program is developing software that will electronically collect and process information submitted by the international trade community.
ACE is intended to provide private and public sector stakeholders access to information, enhance the government’s ability to determine whether cargo should be admitted into the United States, and increase the efficiency of operations at U.S. ports by eliminating manual and duplicative trade processes, and enabling faster decision making. Final deployment and operational testing of ACE functionality delayed. Program plans to identify an approach to address collections functionality in March 2018. We last reported on this program in March 2018 and April 2017 (GAO-18-271, GAO-17-346SP). CBP declared a cost and schedule breach in April 2017—5 months after re-baselining the program in response to a prior breach—because of difficulties developing the collections aspect of ACE’s remaining functionality, which collects and processes duties owed on imported goods. CBP reported that its officials were not versed in the complexities of collections in the legacy system and underestimated the level of effort required to integrate collections capabilities into ACE. As a result, the program delayed final deployment of ACE functionality several times and missed the deadlines for completing the remaining milestones in its current acquisition program baseline (APB), including achieving acquisition decision event (ADE) 3 and full operational capability (FOC) by the revised dates of June 2017 and September 2017, respectively. Additional coding and testing to complete ACE development also required contract extensions that exceeded the current APB cost thresholds. The program subsequently decoupled collections from ACE’s remaining functionality to permit deployment of the other post-release capabilities—such as liquidations and reconciliation—using a phased approach between September 2017 and February 2018. In November 2017, CBP officials estimated that efforts to decouple collections from post-release functionality would require an additional $32 million in acquisition costs.
CBP officials plan to cover these costs with $18 million in fiscal year 2017 carryover funding and by reprogramming $14 million from ACE disaster recovery funding. CBP is in the process of determining a path forward for collections, which is due to Department of Homeland Security (DHS) leadership by the end of March 2018. CBP then plans to update the program’s acquisition documentation, including the APB and life-cycle cost estimate, by August 2018. Until then, the time frame for completing ACE’s remaining milestones and the true cost of the program, including the cost to complete collections development, are unknown. The program was not included in DHS’s funding plan to Congress for fiscal years 2018 to 2022 because DHS did not report operations and maintenance (O&M) funding for individual programs. CBP officials anticipate receiving approximately $535 million in O&M funding over this 5-year period. In June 2017, CBP officials reported meeting three of ACE’s four key performance parameters (KPP), including its KPP on availability. However, DHS’s Director, Office of Test and Evaluation has not assessed these results. ACE did not meet its KPP for transmitting data to a separate tracking system because, according to CBP officials, there was confusion about which data ACE was required to send. CBP officials plan to reassess this KPP in March 2018 to determine next steps. When DHS leadership re-baselined ACE’s cost, schedule, and performance parameters in 2013, the program adopted an agile software development methodology to accelerate software creation and increase flexibility in the development process. As of October 2017, the ACE program office oversees 11 agile teams that conduct development and O&M activities. CBP officials said they extended the program’s agile development contracts in 2017 to permit further development of the collections function.
In identifying a path forward for collections, CBP officials stated there are three main options:

1. leave collections in the legacy system,
2. continue to pursue development and deployment in ACE, or
3. move collections to a different program altogether.

The program previously experienced a schedule breach in June 2016 because it delayed events to address external stakeholders’ concerns about transitioning to ACE. According to CBP officials, CBP has signed a memorandum of understanding with each of the 22 partner agencies responsible for clearing or licensing cargo that provides access to ACE. As of February 2018, 21 of the partner agencies had transitioned to ACE and the program was piloting a solution for the remaining partner. In September 2017, CBP reported that ACE continued to lack a director of testing and evaluation. CBP officials said they do not plan to fill this vacancy despite plans to conduct further testing because existing staff have successfully covered the workload and a large portion of testing has already been completed. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CUSTOMS AND BORDER PROTECTION (CBP)

The Biometric Entry-Exit Program is developing capabilities to enhance traveler identification upon departure from the U.S. at air, land, and sea ports of entry by collecting biometric data, such as fingerprints and facial recognition. The program plans to match this data to biometric data obtained from travelers upon their arrival into the U.S. to identify foreign nationals that stay in the U.S. beyond their authorized periods of admission and verify the identities of travelers leaving the U.S. CBP completed four biometric pilot programs and selected a solution for development. DHS has explored biometric exit capabilities since 2009, but was directed to expedite implementation in March 2017. GAO last reported on this program in February 2017 (GAO-17-170).
In June 2017, the Department of Homeland Security’s (DHS) Under Secretary for Management (USM) granted the Biometric Entry-Exit Program acquisition decision event (ADE) 1 approval after CBP completed several pilot initiatives to study the feasibility of proposed biometric exit solutions at air and land ports of entry. The USM also authorized the program to continue testing a pilot exit solution at Hartsfield-Jackson Atlanta International Airport and conduct technology demonstrations as needed, but directed the program to achieve ADE 2A prior to deploying a solution to the 20 U.S. airports with the most international flights. CBP officials initially planned to achieve ADE 2A approval in September 2017—the point at which the program would establish cost, schedule, and performance goals in a DHS-approved acquisition program baseline (APB)—and pursue separate ADE 2B decisions to initiate development of a biometric solution for each type of port of entry, starting with air. As of December 2017, the program had yet to conduct its ADE 2A because CBP officials have had to resolve several issues identified by the Joint Requirements Council, which has delayed approval of the program’s operational requirements document (ORD). In January 2018, CBP officials said the program plans to conduct ADE 2A in February or March 2018 and is aiming for ADE 2B for the biometric air solution in December 2018. In December 2015, Congress established an account to be used for the development and implementation of the biometric entry-exit system starting in 2017. Specifically, Congress provided that half the amount collected from fee increases for certain visa applications from fiscal years 2016 through 2025—up to $1 billion—would be available to DHS until expended. In February 2017, DHS leadership approved the program to use about $73 million of this funding in fiscal year 2017 for information technology investments and programmatic and operational support, among other things.
In September 2017, DHS’s Chief Financial Officer approved the program’s life-cycle cost estimate (LCCE), which CBP expects to refine as the program progresses to meet the fee-funding limit. According to CBP officials, the current funding structure poses challenges because the fees will fluctuate based on immigration rates. Since 2015, CBP has conducted a series of biometric pilot programs intended to inform the acquisition of a biometric entry-exit system that included the following types of technologies:

• Facial and iris scanning technology at an outdoor land border crossing.
• Mobile fingerprint readers for flights departing the U.S.
• Two facial recognition matching technologies that compared a real-time photo of a traveler to different sources—one technology compared the photo to the traveler’s passport upon entrance to the U.S.; the other technology compared the photo to a gallery of photos based on the outbound flight manifest during an airline’s boarding process.

According to CBP officials, the facial recognition technology that matched photos during an airline’s boarding process was the most viable approach and served as the foundation for its development of the ADE 2A acquisition documents. Officials stated a similar approach may be feasible for land border crossings, but will require further planning. In January 2018, CBP officials stated they were developing a test and evaluation master plan—which will outline the developmental and operational test approach—for the biometric exit air solution. DHS’s Director, Office of Test and Evaluation will need to review and approve this plan prior to the program’s ADE 2B. Since 1996, several federal statutes have required development of an entry and exit system for foreign nationals. DHS has been exploring biometric exit capabilities since 2009, and an Executive Order issued in March 2017 directed DHS to expedite the implementation of the biometric entry-exit system.
The Biometric Entry-Exit Program plans to develop a capability to match a traveler’s biometric data against data contained in existing DHS biometric data repositories—primarily the National Protection and Programs Directorate’s IDENT system. DHS is in the process of replacing and modernizing IDENT through the Homeland Advanced Recognition Technology (HART) program because IDENT is at risk of failure. However, HART has experienced delays, which could affect the Biometric Entry-Exit Program’s development progress. For the air biometric solution, CBP plans to pursue a public/private partnership in which airlines and airports invest in the equipment to collect biometric data. According to CBP officials, this approach could reduce program costs and improve the passenger boarding process. In August 2017, CBP officials told GAO that several airlines have expressed interest in partnering with the program, including one that expanded CBP’s pilot of facial recognition matching for outbound flights to additional gates at the Hartsfield-Jackson Atlanta International Airport. CBP officials reported a staffing gap of 14 full-time equivalent staff, which the program plans to fill once partnerships with airlines are established. CBP officials stated that authorized funds are collected from visa fee increases that expire in fiscal year 2025. Beyond 2025, officials stated that additional funding will need to be appropriated or the fee increases extended to continue the program. They added that fee collections are currently below forecasted levels and may fall short of the current $1 billion limit. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CUSTOMS AND BORDER PROTECTION (CBP)

The border wall system is intended to prevent the illegal entry of people, drugs, and other contraband by enhancing and adding to the 654 miles of existing barriers along the U.S. southwest border.
CBP plans to create a border enforcement zone between a primary barrier—such as a fence—and a secondary barrier. To establish the enforcement zone, the wall system may also include detection technology, surveillance cameras, lighting, and roads for maintenance and patrolling. CBP has evaluated prototypes for new barrier designs, but risks with planned detection technologies exist. CBP is leveraging staff and the contracting strategy from prior border fencing programs. GAO last reported on the existing Southwest border barriers in February 2017 (GAO-17-331). In April 2017, Department of Homeland Security (DHS) leadership granted CBP permission to procure barrier prototypes to inform new design standards and approved the construction of the first segment of the wall system. CBP subsequently awarded 8 task orders with a total value of over $3 million for the development of prototypes and selected San Diego as the first segment. CBP plans to replace an existing 14 miles of primary and secondary barriers in San Diego. DHS plans to use fiscal year 2017 funding for the replacement of the primary barrier, which it plans to rebuild to existing design standards. DHS has requested funding for replacement of the secondary barrier beginning in fiscal year 2018 that it plans to rebuild to new design standards once established. DHS leadership plans to approve acquisition documentation—including an acquisition program baseline (APB) and a life-cycle cost estimate (LCCE)—for each segment to determine affordability prior to authorizing construction. However, CBP officials said they do not plan to develop an APB for the San Diego segment because DHS already approved construction. In January 2018, DHS leadership approved an APB establishing cost, schedule, and performance goals for a second segment in the Rio Grande Valley (RGV), which will extend an existing barrier by 60 miles.
To inform leadership’s decision, DHS headquarters conducted an independent cost estimate, which CBP adopted as the program’s LCCE. The LCCE includes costs for both the San Diego and RGV segments. However, DHS officials stated that the amounts in the LCCE are not releasable until CBP evaluates the prototypes, determines and designs a final solution for the San Diego secondary barrier, and updates the LCCE—which is not expected to be complete until June 2018. The costs presented here are only for the RGV segment. CBP reported that construction of the RGV segment would be sufficiently funded if it receives $1.3 billion of acquisition funding in fiscal year 2018. However, CBP identified a shortfall in operations and maintenance (O&M) funding from fiscal years 2019 to 2022 that it plans to cover with existing funding from the Tactical Infrastructure program, which will be responsible for maintenance of the wall system as segments are completed. If funded, the program expects to achieve full operational capability for the RGV segment in March 2023. In December 2017, CBP completed testing of 8 barrier prototypes—4 constructed from concrete and 4 from other materials—which are intended to help refine the requirements and identify new design standards for barriers. CBP evaluated the prototypes in five areas: breachability, scalability, constructability, design, and aesthetics. CBP officials said the prototype evaluation results are not expected until February 2018. The Science and Technology Directorate’s Office of Systems Engineering completed a technical assessment on the program in November 2017, and identified risks related to the integration and operation of enforcement zone technologies—such as cameras and sensors—which had not been clearly defined or planned for within the wall system.
It made several recommendations, including that the program coordinate with an ongoing CBP study of land domain awareness capabilities, which DHS leadership directed CBP to conduct in October 2016 to inform a comprehensive border plan. The Border Wall System Program was initiated in response to an Executive Order issued in January 2017 stating that the executive branch is to secure the southern border through the immediate construction of a physical wall on the southern border of the U.S. To expedite the acquisition planning process, CBP officials said they leveraged expertise from staff that worked on previous border fencing programs and were familiar with implementation challenges, such as land access. CBP intends to prioritize segments based on threat levels, land ownership, and geography, among other things. From fiscal years 2007 to 2015, CBP spent approximately $2.3 billion to construct pedestrian and vehicle fencing along the southwest border. CBP’s Tactical Infrastructure program is responsible for sustaining this fencing and other infrastructure—such as gates, roads, and bridges—over its lifetime. CBP plans to continue coordinating with the U.S. Army Corps of Engineers (USACE) for engineering support and for awarding and oversight of construction contracts. CBP anticipates that all contract awards issued by USACE in support of the RGV segment will be firm-fixed-price. If appropriations are received, the program plans to award construction contracts for the first portion of RGV in May 2018 and for the secondary barrier in San Diego in August 2018. In February 2018, CBP officials stated that staffing the program office is a challenge because funding has not yet been received. CBP officials said that existing work for the program is being handled by current CBP staff. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.
INTEGRATED FIXED TOWERS (IFT) CUSTOMS AND BORDER PROTECTION (CBP)

The IFT program helps the Border Patrol detect, track, identify, and classify illegal entries in remote areas. IFT consists of fixed surveillance tower systems equipped with ground surveillance radar, daylight and infrared cameras, and communications systems linking the towers to command and control centers. CBP plans to deliver or upgrade approximately 53 IFT systems across six areas of responsibility (AoR) in Arizona: Nogales, Douglas, Sonoita, Ajo, Tucson, and Casa Grande. System acceptance test completed in Douglas AoR and requirements were met. Program is adequately staffed, but simultaneous deployments in the future may have a negative impact. GAO last reported on this program in November 2017 and April 2017 (GAO-18-119, GAO-17-346SP). In December 2017, CBP declared a schedule breach of the IFT program’s current acquisition program baseline (APB) because the program did not receive the funding needed to complete planned deployments on time to achieve its full operational capability (FOC) date of September 2020. The program’s FOC date previously slipped 5 years because of delays in the initial contract award process and funding shortfalls. CBP completed IFT deployments to the Douglas AoR in June 2017 and anticipates completing deployments to the Sonoita AoR in December 2017, as scheduled. However, in September 2017, CBP officials stated that they requested—but did not receive—additional funding from the Department of Homeland Security (DHS) to address new IFT requirements, including camera upgrades and replacement of existing tower systems deployed under a legacy program. In January 2015, Border Patrol requested the program prioritize replacement of the legacy systems in the Tucson and Ajo AoRs because the technology was obsolete and more expensive to maintain than the IFT technology planned for deployment in other AoRs.
Without additional funding, CBP officials stated that they would be unable to exercise the contract options for the remaining AoRs on time. In June 2017, the program updated its life-cycle cost estimate (LCCE), which is slightly less than its current APB cost thresholds. This LCCE update includes estimated costs for the new requirements. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan to Congress no longer contained operations and maintenance (O&M) funding for individual programs. CBP identified $8 million in acquisition carryover funding for fiscal year 2018 and officials anticipate receiving $126 million in O&M funding to cover $100 million in O&M costs over the next 5 years. The program plans to submit a revised APB to DHS leadership by June 2018. However, the FOC date may be further delayed because of land access issues. CBP officials told GAO that they have not yet reached an agreement with the Tohono O’odham Nation—a sovereign Native American Nation—to access tribal lands, which these officials said is necessary for the construction of IFTs in the Ajo and Casa Grande AoRs. [Schedule figure: initial operational capability in the Nogales AoR, October 2015.] Border Patrol certified IFT capabilities met operational requirements in March 2016, but added conditions including that the program seek improvements to optimize video capability. In response, the program plans to install an upgraded high definition camera suite starting with the Sonoita AoR. However, the program has not received funding to complete these upgrades. When CBP initiated the IFT program, it decided to procure a non-developmental system, and it required that prospective contractors demonstrate their systems prior to CBP awarding the contract. The program awarded the contract to EFW, Inc. in February 2014, but the award was protested.
GAO sustained the protest and CBP had to reevaluate the offerors’ proposals before it again decided to award the contract to EFW, Inc. As a result, EFW, Inc. could not initiate work at the deployment sites until fiscal year 2015. According to CBP officials, the number of IFT systems deployed to a single AoR is subject to change based on assessments by the Border Patrol. DHS leadership directed CBP to develop a comprehensive border plan in October 2016 that includes IFT capabilities and—when preparing for the last budget cycle—the program estimated costs for expansion to the southwest border beginning in fiscal year 2019. In September 2017, CBP officials told GAO that they did not have any current staffing gaps. However, CBP officials added that if the program receives full funding and reaches an agreement with the Tohono O’odham Nation to initiate IFT deployments to the Ajo and Casa Grande AoRs, while concurrently deploying capability to the Sonoita and Tucson sectors, they will be short on government and contracted staff. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

MEDIUM LIFT HELICOPTER (UH-60) CUSTOMS AND BORDER PROTECTION (CBP)

UH-60 is a medium-lift helicopter that CBP uses for law enforcement and border security operations, air and mobility support and transport, search and rescue, and other missions. CBP’s UH-60 fleet consists of 20 aircraft acquired from the U.S. Army in three different models. CBP previously acquired 4 modern UH-60M aircraft and converted 6 of its 16 older UH-60A aircraft into more capable UH-60L models. CBP is replacing the remaining 10 UH-60A with reconfigured Army HH-60L aircraft. A CBP test agent and the Army completed testing of a reconfigured HH-60L prototype. CBP has initiated efforts to acquire additional converted HH-60L aircraft from the Army. GAO last reported on this program in April 2017 (GAO-17-346SP).
The program breached the cost and schedule goals in its acquisition program baseline (APB) and, as of December 2017, CBP officials stated they were in the process of developing the breach notification required under the Department of Homeland Security’s (DHS) acquisition policy. In its annual life-cycle cost estimate (LCCE) update, the program shifted some operations and maintenance (O&M) costs to acquisitions to be consistent with DHS’s new appropriation structure. For example, the program shifted costs for recurring upgrades from O&M to acquisition because these upgrades require development and production. As a result, the program’s updated acquisition cost estimate exceeded the APB acquisition cost threshold, which constitutes a cost breach under DHS’s acquisition policy. CBP officials stated that they did not initially declare a cost breach because the program’s total LCCE was within the APB threshold. The program also did not hold its acquisition decision event (ADE) 3 by the APB deadline of September 2017. The ADE 3 is intended to approve the exchange of CBP’s remaining UH-60A aircraft for reconfigured Army HH-60L aircraft based on an evaluation of a reconfigured prototype. According to CBP officials, the program did not complete the required acquisition documentation by the ADE 3 deadline, in part, because DHS leadership directed CBP to develop a comprehensive border plan in October 2016 that includes UH-60 capabilities. It is unclear when the ADE 3 will occur because, as of December 2017, several documents were pending validation by the Joint Requirements Council. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan to Congress no longer contained O&M funding for individual programs. In addition, CBP officials previously told GAO that UH-60 O&M is funded through a separate, central funding account for all of CBP’s air and marine assets.
CBP officials stated that the projected acquisition funding gap in fiscal years 2019 and 2020 is primarily for replacing obsolete parts that were previously considered O&M. According to these officials, the Army conducts an annual obsolescence study that will help CBP identify and prioritize replacements across the UH-60 fleet based on available funding levels. CBP determined that the converted UH-60L and UH-60M aircraft met all five of the program’s key performance parameters (KPP) through operational test and evaluation (OT&E) conducted in fiscal years 2012 and 2014. However, DHS’s Director, Office of Test and Evaluation (DOT&E) did not validate these results because UH-60 was not considered a major acquisition when the tests were conducted. In January 2016, DHS leadership directed the program to conduct acceptance functional flight checks—which consist of component- and system-level tests—on at least one reconfigured HH-60L prototype prior to receiving approval to proceed with the remaining transfers. According to CBP officials, the program’s operational test agent (OTA) and the Army successfully conducted the functional flight check and additional testing in October 2017. DOT&E plans to review the flight test data in support of the program’s ADE 3. CBP does not plan to conduct formal operational test and evaluation on the reconfigured UH-60L because, according to CBP officials, the reconfigured HH-60L has minimal differences from the UH-60L aircraft previously tested. CBP officials also stated that the program has been able to leverage Army test data, which reduces the risk and testing costs associated with the program. These officials noted that CBP pilots will perform additional inspections prior to accepting the aircraft, which is now anticipated to occur in January 2018—up to 5 months earlier than the APB threshold date. CBP previously acquired UH-60 as a part of its Strategic Air and Marine Program (StAMP).
In July 2016, DHS leadership designated UH-60 as a separate and distinct major acquisition program. CBP initially planned to convert all 16 of its UH-60A aircraft into UH-60L models, but changed its strategy once it learned the Army planned to divest several HH-60L aircraft that could more easily be converted into UH-60L aircraft for CBP missions. CBP officials anticipated the new strategy could reduce the program’s costs by an estimated $70 million, accelerate its schedule, and result in newer aircraft since the Army’s HH-60L airframes had fewer operating hours than CBP’s existing UH-60A aircraft. In September 2017, CBP officials told GAO they had initiated efforts to acquire additional HH-60L aircraft by conducting a study of current capability gaps and drafting a mission need statement. As of September 2017, program officials confirmed that they maintain a consolidated program office where the same staff from StAMP continue to support all remaining acquisitions, including the UH-60. However, these officials stated that they plan to realign staff to a dedicated asset over time. Program officials also stated that the program has hired a dedicated cost estimator and would like to hire additional staff to focus on procuring spare parts and common component issues, such as radio replacements, for CBP’s air and marine assets. CBP officials reiterated that the changes in acquisition costs were primarily a result of cost realignment and that the program’s total life-cycle cost is still within the initial APB LCCE goals. CBP officials also stated that—to supplement Army test data—the program’s OTA participated in the flight tests and will provide a formal report on the results.

MULTI-ROLE ENFORCEMENT AIRCRAFT (MEA) CUSTOMS AND BORDER PROTECTION (CBP)

MEA are fixed-wing, multi-engine aircraft that can be configured to perform multiple missions including maritime, air, and land interdiction, as well as signals detection to support law enforcement.
The current MEA configuration is equipped with marine search radar and an electro-optical/infrared sensor to support maritime and land surveillance and airborne tracking missions. MEA will replace CBP's fleet of aging C-12, PA-42, and BE-20 aircraft. Testing of new configuration planned for May 2018, but requirements not yet defined. Began retrofitting accepted MEA with new mission system in fiscal year 2017. GAO last reported on this program in April 2017 (GAO-17-346SP). According to CBP officials, the program is on track to meet the cost and schedule goals in its current acquisition program baseline (APB) for 16 maritime interdiction MEA and is actively pursuing additional aircraft. In April 2016, CBP developed a report that identified capability needs in three mission areas and proposed increasing the program's total to 38 aircraft by adding 13 air interdiction, 6 land interdiction, and 3 signals detection MEA. The Joint Requirements Council endorsed CBP's findings, but recommended CBP develop a number of requirements documents—including an operational requirements document—to fully validate the findings. As of September 2017, CBP officials told GAO they were in the process of updating these documents to focus on air interdiction capabilities—the next MEA configuration. These officials stated that completing these documents has been delayed, in part, because Department of Homeland Security (DHS) leadership directed CBP to develop a comprehensive border plan in October 2016 that includes MEA capabilities. Despite not yet completing all the updated documents, DHS leadership approved CBP's request to procure MEA 17 in September 2017 after the congressional conferees agreed to an additional aircraft beyond DHS's budget request. CBP anticipates delivery of MEA 17 by September 2018, which is before the program's full operational capability (FOC) date. However, if the program receives approval to acquire additional aircraft, the FOC date will be extended.
The program completed an annual life-cycle cost estimate update, which exceeds the program's current APB cost thresholds because it reflects costs for all 38 aircraft, among other reasons. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS's funding plan to Congress no longer contained operations and maintenance (O&M) funding for individual programs. In addition, CBP officials previously told GAO that MEA's O&M is funded through a separate, central funding account for all of CBP's air and marine assets. In September 2017, CBP officials said that the program was fully funded for 17 aircraft but had some affordability challenges with spare parts, which they are working with CBP and DHS headquarters to address.

The MEA program has met all five of its key performance parameters (KPP) for the maritime interdiction configuration and plans to establish additional KPPs for future MEA configurations. CBP is replacing the mission system processor on the MEA with a system used by the U.S. Navy and U.S. Coast Guard that is intended to enhance operator interface and sensor management, as well as replace obsolete equipment. CBP's OTA tested a prototype of the processor during an operational assessment in July 2015. The OTA found that the MEA had resolved issues found during prior testing, but also made 29 additional recommendations and findings to improve the aircraft and new mission system's effectiveness. DHS's Director, Office of Test and Evaluation (DOT&E) concurred with the OTA's findings. The program plans to begin testing MEA air interdiction capabilities in May 2018. According to CBP officials, the only difference between the maritime and air interdiction configurations is the radar software.
The program initially planned to modify and test the new configuration prior to delivery, but CBP officials stated they now plan to do so after delivery to reduce risk by allowing more time for development of the air-to-air radar software. DHS's DOT&E plans to review the test plan for the air interdiction configuration. However, completing development before finalizing KPPs for the new configuration increases the risk that the aircraft will not meet operators' requirements. CBP previously acquired MEA as a part of its Strategic Air and Marine Program (StAMP). In July 2016, DHS leadership designated MEA as a separate and distinct major acquisition program. CBP initially planned to procure 50 MEA and awarded the first production contract in September 2009. However, the aircraft did not perform well during testing. In October 2014, DHS leadership said CBP could not procure or accept transfer of additional MEA without approval. CBP procured 12 aircraft under the initial contract and—with DHS approval—CBP awarded a new indefinite delivery, indefinite quantity contract in September 2016 for 1 base year and four 1-year options to support procurement of additional aircraft. In December 2017, CBP officials said the program had received 12 aircraft and awarded contracts for 5 more. According to program officials, MEA 13-16 will be delivered with the new mission system and CBP began retrofitting previously delivered aircraft in fiscal year 2017. As of September 2017, program officials confirmed that they maintain a consolidated program office where the same staff from StAMP continue to support all remaining acquisitions, including MEA. However, these officials stated that they plan to realign staff to a dedicated asset over time. Program officials also stated that the program has hired a dedicated cost estimator and would like to hire additional staff to focus on procuring spare parts and common component issues, such as radio replacements, for CBP's air and marine assets.
CBP officials stated that delays in receiving approval of the program's requirements documents may pose a risk to exercising options for additional MEA on an existing contract, which could stop production and increase contract costs associated with procuring future aircraft. CBP officials added that air and marine requirements officers continue to produce documentation requested by the Joint Requirements Council to provide sufficient context for the mission need and border security. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CUSTOMS AND BORDER PROTECTION (CBP)

The Non-Intrusive Inspection (NII) Systems Program supports CBP's interdiction of weapons of mass destruction, contraband such as narcotics, and illegal aliens being smuggled into the United States, while facilitating the flow of legitimate commerce. CBP officers use large- and small-scale NII systems at air, sea, and land ports of entry; border checkpoints; and international mail facilities to examine the contents of containers, railcars, vehicles, baggage, and mail. CBP initiated efforts for future NII requirements and procurements. 66 percent staffing gap contributed to delays in NII deployments. GAO last reported on this program in April 2017 (GAO-17-346SP). The NII Systems Program is on track to meet its approved schedule and cost goals. The estimates in the program's annual life-cycle cost estimate (LCCE) update continued to decrease overall compared to its approved acquisition program baseline (APB) cost thresholds. Specifically, compared to the prior year's estimate, the program's acquisition costs decreased by $96 million and operations and maintenance (O&M) costs increased by $22 million. However, the LCCE update only estimated costs through fiscal year 2026—9 years short of the program's final year. The LCCE primarily decreased because of a reduction of 1,977 planned additional and replacement NII systems.
CBP officials said fewer large- and small-scale systems are needed because some systems have longer estimated lives than expected, and systems procured have better capability. CBP officials do not anticipate that the reduction in quantities will have an adverse effect on operations because they stated that the new systems can provide dual purpose capabilities (i.e., one system can replace multiple separate systems). The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS's funding plan to Congress no longer contained O&M funding for individual programs. CBP officials anticipate receiving approximately $605 million of O&M funding over this 5-year period to cover about $626 million in estimated O&M costs, which includes $100 million to operate and maintain radiation detection equipment acquired by the Domestic Nuclear Detection Office. These officials also identified $37 million in carryover funding to cover the remaining $21 million of O&M estimated costs. However, the program is projected to have a $266 million acquisition funding gap from fiscal years 2018 to 2022. The program has a plan to address funding shortfalls but, according to CBP officials, it has not yet needed to implement the strategies in this plan because of several factors, including cost reductions achieved through combined life-cycle contracts and lower-than-expected actual technology costs in fiscal year 2016.

NII systems are commercial-off-the-shelf products, and for this reason, DHS leadership decided that the NII Systems Program does not need a test and evaluation master plan. However, the program continues to test NII systems to inform future acquisitions. For example, in calendar years 2017 and 2018, CBP officials told us they plan to conduct demonstrations and testing activities on the following types of technology:
• Two NII systems—one mobile, one fixed—that are designed to examine moving vehicles for contraband.
• Mobile systems that use high dose X-ray imaging to inspect stationary cargo vehicles at ports of entry.
• Multi-energy portals that use different levels of X-ray imaging to inspect cargo trucks as they are driven through the inspection portals—low dose X-ray to inspect the truck cab and high dose X-ray to inspect the cargo trailer.
In March 2017, the Joint Requirements Council validated a capability analysis report that assessed current capability gaps in NII operations to assist with identifying potential upgrades to existing systems and developing requirements for future systems. According to program officials, CBP plans to review and update, as necessary, the mission need statement in fiscal year 2018. Additionally, program officials are preparing a consolidated acquisition plan for future procurements. These officials said CBP has not yet determined whether future procurements would be included in the current NII Systems Program of record or constitute a new acquisition program. CBP's ability to successfully execute the existing NII Systems Program and plan for future efforts may be at risk because of understaffing. As of January 2018, the NII Systems Program continued to face a staffing gap of approximately 66 percent, including critical vacancies such as the acquisition program manager and a logistics program manager. Officials also noted that a lack of adequate personnel to procure, test, and deploy NII systems forces the program to prioritize its acquisitions, which can result in delays of NII deployments and testing efforts. For example, one manufacturer increased its output rate of NII systems, but the program did not have the staff to accept the systems at the increased rate. Officials anticipate the program may remain understaffed until CBP completes a reorganization that started more than a year ago, in which acquisition programs are realigned from a mission-support office to their operational entity.
CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

REMOTE VIDEO SURVEILLANCE SYSTEM (RVSS) CUSTOMS AND BORDER PROTECTION (CBP)

The RVSS program helps the Border Patrol detect, track, identify, and classify illegal entries across U.S. borders. RVSS consists of daylight and infrared video cameras mounted on fixed towers and buildings with communications systems that link to command and control centers. From 1995 to 2005, CBP deployed approximately 310 RVSS towers along the U.S. northern and southern borders, and initiated efforts to upgrade legacy RVSS towers in Arizona in 2011. Program does not plan to conduct additional operational testing on future deployments. Once funded, program plans to award a new contract for deployments in sectors along the southwest border. GAO last reported on this program in November 2017 (GAO-18-119). In April 2016, Department of Homeland Security (DHS) leadership elevated RVSS from a level 3 program—which focused on upgrading legacy RVSS in Arizona—to a level 1 program after approving CBP's plan to expand deployments to the Rio Grande Valley (RGV) sector and 6 additional sectors along the southwest border. At this time, DHS leadership approved the program to move forward with deployments to two RGV stations, which can be completed as options under the program's existing contract. However, the program was required to re-baseline to account for its expanded scope and conduct an acquisition decision event (ADE) to obtain approval for additional deployments. As of January 2018, the program had not yet conducted its ADE or obtained DHS approval for an acquisition program baseline (APB) that established cost, schedule, and performance goals for the expanded program.
In September 2017, CBP officials told us that they had drafted the APB and other required documentation, such as a life-cycle cost estimate (LCCE), but were unsure when the ADE would occur because the program had not received funding for the additional deployments. In addition, the ADE may have been delayed because DHS leadership directed CBP to develop a comprehensive border plan in October 2016 that includes RVSS capabilities. In September 2017, DHS leadership approved the RVSS program's revised LCCE, which totaled nearly $4 billion for all program costs from fiscal years 2011 through 2042, including expansion along the southwest border and new initiatives such as a pilot for relocatable RVSS towers. DHS conducted an independent cost estimate for the program, which DHS cost estimating officials stated was within 2 percent of the program's LCCE. RVSS was not included in DHS's funding plan to Congress for fiscal years 2018 to 2022 because it had not yet been elevated to a level 1 program at the time the plan was developed. CBP officials stated that the program has received acquisition funding to cover the approved RGV deployments. However, CBP officials told GAO that the program may also assume responsibility for maintaining all legacy RVSS, but has not received adequate operations and maintenance funding to do so.

CBP officials said the RVSS program initiated a pilot of relocatable RVSS towers in the RGV sector. The program plans to assess the results of the pilot by March 2018. In July 2013, CBP awarded a firm fixed-price contract for a commercially available, non-developmental system. This contract covered the program's initial scope to deploy upgraded RVSS in Arizona and two stations within the RGV sector, which can be completed as options. According to CBP officials, the program will need to award a new contract to cover expansion to the remaining six sectors along the southwest border.
In September 2017, CBP officials said that the request for proposals for the new contract had been drafted but it cannot be released until the program receives funding. CBP officials told GAO that RVSS is coordinating with CBP's Border Wall System Program on some planned deployments within the RGV sector. For example, CBP is considering moving 2 of the planned RVSS towers to be co-located with the planned barrier, which officials stated may provide better surveillance. If the Border Wall System Program does not receive funding, CBP officials said the towers will be placed in the originally planned locations. CBP officials stated that the RVSS program requires additional staff for contracting activities, maintenance activities for legacy RVSS, and for relocatable tower pilot deployments. To mitigate the staffing gap, CBP officials said they prioritize responsibilities of current personnel to meet program execution needs. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CUSTOMS AND BORDER PROTECTION (CBP)

The Tactical Communications (TACCOM) program is intended to upgrade land mobile radio infrastructure and equipment to support approximately 95,000 users at CBP and other federal agencies. It is replacing obsolete radio systems with modern digital systems across various sectors located in 19 different service areas, linking these service areas to one another through a nationwide network, and building new communications towers to expand coverage in 5 of the 19 service areas. Issues related to security requirements have delayed full operational capability by more than a year. Program is being reorganized under Border Patrol, but still faces staffing challenges. GAO last reported on this program in April 2017 (GAO-17-346SP). In November 2017, Department of Homeland Security (DHS) leadership re-baselined the TACCOM program, removing it from breach status after the program experienced a schedule slip and cost growth.
In July 2017, CBP officials notified DHS leadership that the program would not achieve full operational capability (FOC) as planned due to issues related to federal information security requirements. The program now plans to achieve FOC by March 2019—more than a year later than its initial acquisition program baseline (APB) deadline. According to CBP officials, FOC will include planned upgrades to the San Diego system, which requires transitioning management of the legacy system from the Department of Justice to DHS. In August 2017, CBP officials stated that both agencies were reviewing an agreement with plans to complete the transition in fiscal year 2018. CBP officials stated that the program realized it would exceed its initial APB cost thresholds as it was developing its annual life-cycle cost estimate (LCCE) update and subsequently submitted a revised LCCE for DHS leadership approval. The program’s costs primarily grew because of increases in costs for contractor labor and support for facilities and infrastructure. CBP officials said the program’s initial estimates were immature; however, DHS leadership approved the initial LCCE in December 2015—4 years after the program began sustaining capabilities. DHS’s Chief Financial Officer (CFO) approved the program’s revised LCCE in November 2017, but noted that the program’s estimate exceeded its available funding and requested that the program address the affordability gap before it was re-baselined. CBP officials said that they are conducting an affordability analysis, which they anticipate will be completed by March 2018. Nevertheless, DHS leadership approved the program’s re-baseline in November 2017. CBP officials subsequently identified errors in the approved APB cost threshold tables and provided revised amounts, which are presented here. 
The program was not included in DHS’s funding plan to Congress for fiscal years 2018 to 2022 because DHS did not report operations and maintenance (O&M) funding for individual programs. CBP officials anticipate receiving approximately $120 million in O&M funding over this 5-year period. Customs and Border Protection (CBP) In July 2017, an analysis of the program’s operations showed that the program was meeting mission needs, but technical issues and vulnerabilities could cause schedule delays. That same month, the program declared a schedule breach because of issues related to federal information security requirements. The TACCOM program first identified these issues in February 2016, but efforts to address them within the established APB schedule were unsuccessful. CBP officials said that, since the program’s inception, they have held weekly and quarterly meetings with the vendor to identify and address any issues and that they anticipate the vendor will address all remaining issues by March 2018. They added that both the vendor and CBP will conduct security scanning and acceptance testing after deployment to each sector; however, the program does not have plans for future operational testing. CBP officials told GAO that in January 2018, the program will move from a mission support office to a joint program office under Border Patrol as a part of CBP’s reorganization that started more than a year ago. The goal of this move is to make CBP land mobile radio capabilities seamless by combining the mission critical voice functions of Air and Marine Operations, the Border Patrol, and the Office of Field Operations—the TACCOM program’s primary customers—under one organizational leader, the Border Patrol Chief. CBP officials anticipate that the current TACCOM program structure will remain in place after this move with the exception of the program’s engineers, which will move to CBP’s Office of Information and Technology but be assigned to support TACCOM full time. 
In August 2017, CBP officials told GAO they were in the process of hiring staff to fill the program's vacant positions. They added that the fiscal year 2019 budget contains plans for additional infrastructure enhancements, which will require technical staff to assist in the planning and execution of these efforts and may put additional strain on the program's limited government technical staff. They noted that the hiring and retention of qualified land mobile radio engineers and information technology technical staff is a challenge because of competition with the private sector, among other factors. In addition to maintaining the CBP Land Mobile Radio System, which provides critical communications for CBP agents and officers protecting U.S. borders, CBP officials stated the TACCOM program is providing infrastructure, such as an engineering lab to facilitate design, development, test, and evaluation activities, to support improvements in CBP's current and future Land Mobile Radio Systems. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CUSTOMS AND BORDER PROTECTION (CBP)

TECS (not an acronym) is a law-enforcement information system that has been in place since the 1980s and helps CBP officials determine the admissibility of persons entering the United States at border crossings, ports of entry, and prescreening sites located abroad. CBP initiated efforts to modernize TECS to provide users with enhanced capabilities for accessing and managing data. Immigration and Customs Enforcement has a separate TECS Modernization program. System operationally effective and suitable, but cybersecurity testing needed. CBP working to address and prevent major system outages. GAO last reported on this program in April 2017 (GAO-17-346SP).
In July 2017, Department of Homeland Security (DHS) leadership granted the program acquisition decision event (ADE) 3 approval, but required CBP to conduct follow-on operational test and evaluation (OT&E) before declaring full operational capability (FOC). This is more than a 2-year delay from CBP's initial FOC date and a 9-month delay from its most recent revised FOC date. DHS approved the fourth version of the program's acquisition program baseline (APB) in July 2016. In this APB, CBP split FOC into two separate operational capability milestones at its data centers to better reflect the program's activities. CBP delivered operational capability at the primary data center in December 2016, which included transitioning all TECS users to the modernized system. CBP delivered operational capability at the secondary data center in June 2017—as scheduled—which provides redundant TECS access to minimize downtime during system maintenance or unscheduled outages. However, not all test results were available in time for the program's ADE 3 decision, which contributed to DHS leadership's decision to delay declaring FOC. The program updated its life-cycle cost estimate (LCCE) for ADE 3, which is within its current APB cost thresholds. However, the LCCE only included costs through fiscal year 2021—7 years short of DHS's guidance that states program cost estimates should cover at least 10 years from the FOC date. Nevertheless, DHS granted the program ADE 3 approval without an understanding of the program's full life-cycle costs, as required by its acquisition policy. CBP officials plan to update the LCCE by the end of calendar year 2018 to include costs for future years and other items, such as costs associated with follow-on OT&E and moving the data centers to a cloud environment—a CBP-wide initiative.
The program was not included in DHS’s funding plan to Congress for fiscal years 2018 to 2022 because DHS did not report operations and maintenance (O&M) funding for individual programs. CBP officials anticipate receiving approximately $205 million in O&M funding over the next 4 years and have identified carryover for each year. However, CBP officials said there may be a small funding gap starting in fiscal year 2020, but they expect to achieve savings by migrating the data centers to a cloud environment. Customs and Border Protection (CBP) In July 2017, DHS’s Director, Office of Test and Evaluation (DOT&E) determined that the modernized TECS system was operationally effective and operationally suitable, but that the tests were not adequate to assess operational cybersecurity. The test results validated that the program had met all eight of its key performance parameters (KPP), but the test team identified several deficiencies related to mission support and CBP users identified operational considerations for system or process improvements. DOT&E recommended that CBP conduct a threat assessment, threat- based cybersecurity operational testing, and follow-on OT&E to reassess known deficiencies and user operational considerations. In August 2017, DHS leadership directed CBP to complete these actions by the end of February 2018. In January 2018, CBP officials stated that they continue to work with the OTA to address the deficiencies and develop a plan for follow-on OT&E. They noted that completion of this plan is dependent on the scope for cybersecurity testing and they are working with DOT&E to define the scope since the requirements have been evolving. CBP officials also stated that they monitor the program’s KPPs monthly and plan to conduct monthly tests and quarterly maintenance checks to ensure operational functionality is maintained at both data centers. 
Since the program has completed development, CBP is focused on ensuring that the modernized TECS system works as intended by addressing operational issues as they are identified. For example, on January 2, 2017, a primary TECS Modernization application experienced a major outage that resulted in long airport delays. In August 2017, CBP officials said they continually monitor system health through a 24/7 operations center and have established a group dedicated to addressing the issues related to the January 2, 2017, outage. In September 2017, DHS's Office of Inspector General (OIG) found that nearly 100 outages, periods of latency, or degraded service were reported for three TECS Modernization applications between June 2016 and March 2017. The OIG also found that CBP's monthly reports on TECS system availability did not include periods of slowness or service interruptions that were caused by external factors. For example, the January 2, 2017, incident was identified in CBP outage reports, but was not captured in the monthly report because it was caused by a change to an external feed to the TECS system. CBP officials clarified that the monthly reports only account for interruptions that result in a full loss of operations for all TECS system users. The OIG recommended that CBP develop a plan to address factors that contributed to challenges regarding availability of primary traveler screening applications, among other things. CBP concurred with the recommendations. On January 1, 2018, the TECS system experienced another major outage that caused long airport delays; CBP officials said this incident is under review. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

LOGISTICS SUPPLY CHAIN MANAGEMENT SYSTEM (LSCMS) FEDERAL EMERGENCY MANAGEMENT AGENCY (FEMA)

LSCMS is a computer-based tracking system that FEMA officials use to track shipments during disaster-response efforts.
It is largely based on commercial-off-the-shelf software. FEMA initially deployed LSCMS in 2005, and initiated efforts to enhance the system in 2009. According to FEMA officials, LSCMS can identify when a shipment leaves a warehouse and the location of a shipment after it reaches a FEMA staging area near a disaster location. FEMA now anticipates reaching full operational capability by June 2019, up to 6 months late. Recent testing shows progress, but additional operational testing delayed to May 2018. GAO last reported on this program in April 2017 (GAO-17-346SP). In November 2017, Department of Homeland Security (DHS) leadership approved a revised acquisition program baseline (APB) after the LSCMS program experienced a schedule breach. In September 2017, FEMA officials notified DHS leadership that the program would not complete all required activities—including follow-on operational test and evaluation (OT&E)—to achieve acquisition decision event (ADE) 3 and full operational capability (FOC) by its initial APB dates of September 2018 and December 2018, respectively. According to FEMA officials, the delay was primarily caused by the need to deploy LSCMS program personnel in support of response and recovery efforts during the 2017 hurricane season. The program now plans to achieve FOC by June 2019—up to 6 months later than initially planned. DHS leadership authorized LSCMS to resume all development and acquisition efforts in March 2016 after a nearly 2-year program pause following program management issues. In October 2017, FEMA officials told GAO that they had completed several development efforts—such as integration with DHS's asset management system—and were in the process of adding Electronic Data Interchange (EDI) to allow LSCMS to interface with its partners' information systems. The program's annual life-cycle cost estimate (LCCE) update continued to be within its APB cost thresholds.
However, the program’s APB thresholds are not adjusted to account for risk, which increases the chance that the program could experience a cost breach. As of November 2017, FEMA officials did not anticipate that its schedule delays would lead to a cost breach. Federal Emergency Management Agency (FEMA) LOGISTICS SUPPLY CHAIN MANAGEMENT SYSTEM (LSCMS) All seven of the KPPs will be assessed as part of follow-on OT&E, which has been delayed from January 2018 as a part of the schedule breach. FEMA officials reported that they now plan to complete follow-on OT&E by May 2018, once the addition of EDI is complete. The LSCMS program previously experienced significant execution challenges because of poor governance. FEMA initially deployed the enhanced LSCMS in 2013 without DHS leadership approval, a DOT&E letter of assessment, or a DHS-approved APB documenting the program’s costs, schedule, and performance parameters, as required by DHS’s acquisition policy. DHS’s Office of Inspector General also found that neither DHS nor FEMA leadership ensured the program office identified all mission needs before selecting a solution. In response, DHS leadership paused all LSCMS development efforts in April 2014 until the program addressed these issues, among others. FEMA subsequently completed an analysis of alternatives and developed an APB based on this assessment. DHS leadership approved the APB in December 2015 and authorized FEMA to resume all LSCMS development and acquisition efforts in March 2016. In October 2017, FEMA officials told GAO that the LSCMS program had minimal staffing shortages and was working to recruit additional staff. Officials previously attributed the program’s governance and testing challenges, in part, to staffing shortages and we previously found that it only had 7 of the 22.5 full time equivalents it needed in fiscal year 2014. 
Although the program has obtained more staff since then, FEMA officials noted in October 2017 that during disasters—such as 2017 hurricanes Harvey, Irma, and Maria—LSCMS program personnel are deployed to support response and recovery efforts, which leaves program positions vacant for the duration of the deployment.

FEMA officials stated that during the response to hurricanes Harvey, Irma, and Maria in 2017, LSCMS processed supply chain transactions that exceeded the total number of transactions from the preceding 12 years—which includes the response to Hurricane Katrina. They added that the program provided support for nearly 130 million meals in 2017, compared to a total of approximately 84 million from the 12 previous years. FEMA officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

IMMIGRATION AND CUSTOMS ENFORCEMENT (ICE)

Since the 1980s, TECS (not an acronym) has provided case management, intelligence reporting, and information sharing capabilities to support ICE’s mission to investigate and enforce border control, customs, and immigration laws. ICE initiated efforts to modernize TECS in 2009 to replace aging functionality and provide end users with additional functionality to meet mission needs. Customs and Border Protection (CBP) executes a separate TECS Modernization program. Conducted additional testing of a revised key performance parameter and cybersecurity. Program has improved integration with external systems. GAO last reported on this program in April 2017 (GAO-17-346SP).

In November 2017, Department of Homeland Security (DHS) leadership approved a revised life-cycle cost estimate (LCCE) and acquisition program baseline (APB) in preparation for the program’s acquisition decision event (ADE) 3 following deployment of final functionality.
According to ICE officials, the program completed deployment of full operational capability (FOC) functionality in August 2017—4 months earlier than initially planned. FOC functionality included enhancements to case management capabilities, such as improved system search capabilities. The functionality was deployed in conjunction with enhancements and fixes for initial operational capability (IOC) functionality. The program achieved IOC in June 2016, which entailed delivering 80 percent of the modernized TECS functionality and successfully transitioning ICE off the legacy system.

The overall cost thresholds in the current APB changed compared to the program’s prior APB from July 2016. Specifically, the acquisition cost threshold decreased by $14 million and the operations and maintenance (O&M) cost threshold increased by $147 million. These costs changed for various reasons, such as the following: The acquisition cost threshold decreased when ICE included actual costs through fiscal year 2016 and accounted for funding shortfalls. ICE officials told GAO that the program experienced a funding shortfall in fiscal year 2017 that led it to adjust spending under multiple contracts and shift some costs to fiscal year 2018. The O&M cost threshold increased when ICE extended the estimate from fiscal year 2024 to 2028 and continued contractor and systems engineering support for an additional 11 years.

The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan to Congress no longer contained O&M funding for individual programs. ICE officials anticipate receiving approximately $94 million in O&M funding to cover an estimated $105 million in O&M costs over this 5-year period. ICE officials said that they are pursuing strategies to reduce future O&M costs, such as awarding a competitive contract in March 2018 for O&M activities and any future enhancements.
The program’s OTA completed follow-on operational test and evaluation (OT&E) in September 2017, which focused on evaluating the revised KPP, FOC functionality, and deficiencies identified during the program’s initial OT&E. In March 2017, DHS’s Director, Office of Test and Evaluation (DOT&E) found that the program was operationally effective and suitable with limitations, but that the test was not adequate to evaluate operational cybersecurity. DOT&E recommended that the program conduct threat-based operational cybersecurity testing, among other things. ICE officials said that the program completed threat-based cybersecurity tests in September 2017 and had begun to address identified vulnerabilities. DOT&E anticipates assessing the results from the program’s cybersecurity testing and follow-on OT&E by mid-February to support the ADE 3 decision.

ICE officials continue to work closely with CBP to provide users access to various systems through the modernized TECS system. The program previously worked to resolve technical problems with CBP support services that emerged during final integration testing of the ICE and CBP modernized TECS systems, which contributed to a 3-month delay in achieving IOC. Users reported during initial OT&E that the modernized ICE TECS system was an improvement over the legacy system, but they requested better integration with external systems, such as CBP’s Seized Assets and Case Tracking System (SEACATS), which they use to determine the disposition of seized assets for case management and reporting purposes. According to ICE officials, CBP subsequently decided to modernize SEACATS. ICE officials stated that they have coordinated closely with CBP to integrate the two modernized systems and ensure uninterrupted access to SEACATS for TECS users. For example, ICE developed a workaround so that TECS users maintain access to the latest seizure data available from the modernized SEACATS.
ICE officials added that they continue to make improvements in interfaces with other external systems, as prioritized by end users. In July 2017, ICE reported that the program was fully staffed. ICE officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CONTINUOUS DIAGNOSTICS AND MITIGATION (CDM) NATIONAL PROTECTION AND PROGRAMS DIRECTORATE (NPPD)

The CDM program aims to strengthen the cybersecurity of the federal government’s networks at more than 65 participating civilian agencies by providing tools and dashboards that continually monitor and report on network vulnerabilities. Tools are delivered in four phases: phase 1 and 2 tools report vulnerabilities in hardware and software, and user access controls, respectively; phase 3 tools will report on efforts to prevent attacks; and phase 4 tools will provide encryption to protect network data. Program revised its key performance parameters and test and evaluation master plan as part of its rebaseline. Program plans to change its acquisition strategy and continues to face workforce challenges. GAO last reported on this program in April 2017 (GAO-17-346SP).

In June 2017, Department of Homeland Security (DHS) leadership re-baselined the CDM program for the third time to approve initiating development of phase 3 and to address challenges encountered during phase 1. Specifically, contractors previously found large gaps—ranging from 19 to 384 percent—between the actual number of devices needing phase 1 tools and what was originally reported by 12 agencies. The program’s new acquisition program baseline (APB) modified the program’s cost, schedule, and performance parameters. For example: The operations and maintenance (O&M) cost thresholds increased by $631 million when the program shifted some potential acquisition costs to be consistent with DHS’s new appropriation structure, among other things.
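The device-count gaps GAO cites for CDM phase 1 are expressed as a percentage of what each agency originally reported. The sketch below is illustrative only: the agency counts are hypothetical, chosen to land on the ends of the 19 to 384 percent range the report describes.

```python
def gap_percent(reported: int, actual: int) -> float:
    """Percentage by which the actual device count exceeds the originally
    reported count -- the measure behind the 19 to 384 percent range."""
    return (actual - reported) / reported * 100

# Hypothetical counts: an agency reports 10,000 devices, but more actually
# need phase 1 tools.
print(round(gap_percent(10_000, 11_900)))  # low end of the range: 19
print(round(gap_percent(10_000, 48_400)))  # high end of the range: 384
```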
The O&M cost thresholds previously decreased by $1.2 billion, in part, because DHS leadership determined the program would only fund CDM tools for the first 2 years after deployment. The acquisition costs did not increase despite phase 1 challenges, in part, because coverage for the U.S. Postal Service—which had the largest gap in estimated devices—will no longer be funded by the CDM program. The program’s full operational capability (FOC) date slipped almost 4 years after it was redefined from deployment of phase 1-3 tools at 5 agencies to the availability of these tools to all participating agencies.

However, the program’s costs will increase and its FOC date may slip further once the program establishes goals for phase 4. NPPD officials said they were unable to complete planning efforts for phase 4 in time to incorporate it into the most recent APB revision and, therefore, plan to re-baseline the CDM program again in 2018. The CDM program identified a potential acquisition affordability gap in fiscal year 2018 based on its revised life-cycle cost estimate, which it addressed by adjusting the phase 3 schedule to shift some acquisition costs out to fiscal year 2020. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan to Congress no longer contained O&M funding for individual programs. However, the program anticipates receiving approximately $281 million in O&M funding over the 5-year period.

As part of its re-baselining efforts, the CDM program updated its operational requirements document and test and evaluation master plan. At the direction of DHS leadership, the program consolidated its previous 12 key performance parameters (KPP) into 5 main KPP functions—identification, protection, detection, response, and recovery—some of which have multiple sub-measures.
The revised KPPs are intended to better align with the National Institute of Standards and Technology’s Cybersecurity Framework and were developed in collaboration with key stakeholders, such as the Joint Requirements Council, DHS’s Director, Office of Test and Evaluation (DOT&E), and the program’s OTA. The CDM program is only authorized to conduct testing on DHS networks, which means the other departments and agencies are responsible for testing the CDM tools and dashboards on their own networks. Under the program’s revised test and evaluation master plan, the OTA plans to perform operational assessments (OA) on DHS’s network to incrementally demonstrate each phase’s capabilities as they are deployed and to reduce risk prior to conducting formal program-level operational test and evaluation (OT&E). NPPD officials anticipate the first OA will be completed in calendar year 2018 and will test integration of phase 1 tools and dashboard reporting. NPPD officials previously told GAO that they had observed operational testing conducted at three other agencies and, in September 2017, said they continue to work with the program’s OTA to identify opportunities to observe testing at other agencies. The CDM program updated its acquisition plan as a part of its re-baselining efforts, which reflects a change in strategy for procuring CDM tools and integration services for participating agencies through the General Services Administration (GSA). Previously, the CDM program issued task orders for these tools and services through blanket purchase agreements established under vendors’ GSA Federal Supply Schedule contracts. These agreements are set to expire in August 2018. Going forward, the program plans to use an existing GSA government-wide acquisition contract—known as Alliant—to obtain CDM tools and services. 
According to NPPD officials, the new acquisition strategy is intended to provide greater flexibility in contracting for current capabilities and to support future capabilities. It will also allow participating agencies to order additional CDM-approved products or services from GSA’s schedule for information technology equipment, software, and services; however, as of September 2017, NPPD officials stated they were in the process of determining how this process will work.

NPPD officials said that the program continues to face workforce challenges related to managing the program’s change in contracts and planning for phase 4. In February 2018, NPPD officials stated that they had onboarded 5 staff to help address the program’s reported fiscal year 2017 gap of 16 full-time equivalents. They noted that another 5 candidates were in the hiring process and that NPPD continues to work with officials from DHS’s Office of the Chief Security Officer to reduce continued challenges in onboarding new staff due to the lengthy security clearance process.

In addition to activities outlined in this assessment, NPPD officials stated that the CDM program continues to manage its budget to ensure program costs match available funding, and is leveraging the collective buying power of federal agencies and strategic sourcing to achieve government cost savings on CDM products. NPPD officials also stated that, as of December 2017, CDM had deployed agency dashboards to 23 agencies and was conducting and testing information exchanges of data between agency dashboards and the federal dashboard.

HOMELAND ADVANCED RECOGNITION TECHNOLOGY (HART) NATIONAL PROTECTION AND PROGRAMS DIRECTORATE (NPPD)

HART will replace and modernize the Department of Homeland Security’s (DHS) legacy biometric identification system—known as IDENT—which shares information on foreign nationals with U.S. government and foreign partners to facilitate legitimate travel, trade, and immigration.
NPPD plans to develop HART in four increments: increments 1 and 2 will replace and enhance IDENT functionality; increments 3 and 4 will provide additional biometric services, as well as a web portal and new tools for analysis and reporting. Key performance parameters will be demonstrated as capability is developed. Program has developed mitigation plans to address workforce risks. GAO last reported on this program in April 2017 (GAO-17-346SP).

In June 2017, NPPD declared a schedule breach when it determined the HART program would not be able to meet its initial acquisition program baseline (APB) milestones. DHS leadership approved the program’s APB in April 2016 and authorized the program to initiate development efforts for increments 1 and 2 in October 2016. NPPD officials attribute the schedule slip to multiple delays in awarding the contract for increments 1 and 2 as a result of issues with the request for proposals (RFP). The program released the RFP in February 2017 and awarded the contract in September 2017—approximately 9 months later than NPPD officials had planned. However, the program experienced additional delays after a bid protest of the contract award was filed with GAO in October 2017. GAO subsequently denied the protest, and NPPD officials said the program plans to initiate work with the contractor in March 2018.

HART initially planned to achieve initial operational capability (IOC) with the deployment of increment 1 in December 2018, at which point program officials anticipated beginning to transition users from IDENT to HART. However, it is unclear when this will now occur, which is a significant challenge because IDENT is at risk of failure and may be unable to fully support requirements related to new programs—such as Customs and Border Protection’s Biometric Entry-Exit. As a result, delays in HART could contribute to delays in other DHS acquisition programs.
The program updated its life-cycle cost estimate (LCCE) in June 2017 to inform the budget process. This LCCE is within its current APB cost thresholds, but does not account for the contractor’s solution. The program plans to update its LCCE and other acquisition documentation, such as its APB, after initiating work with the contractor. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan to Congress no longer contained operations and maintenance (O&M) funding for individual programs. However, the program anticipates receiving approximately $1.3 billion in O&M funding to cover $1.5 billion in O&M costs. NPPD officials explained that the current O&M cost estimate includes costs for maintaining IDENT. Future LCCE updates will reflect delivery of services through HART, which NPPD officials anticipate will be more cost effective.

HART plans to demonstrate its eight key performance parameters (KPP) as capabilities are developed. Increment 1 has two KPPs that establish requirements for system availability and a fingerprint biometric identification service. Increment 2 has four KPPs that establish requirements for multimodal biometric verification services and interoperability with a Department of Justice system. Increments 3 and 4 each have one KPP, establishing requirements for web portal response time and reporting capabilities, respectively. However, NPPD officials stated they will revisit the KPPs for increments 3 and 4 as they define requirements for these increments.

The Science and Technology Directorate’s (S&T) Office of Systems Engineering completed a technical assessment on HART in February 2016, and concluded that the program had a moderate overall level of technical risk. In October 2016, DHS leadership directed HART to work with S&T to conduct further analysis following the program’s initial contract award for increments 1 and 2.
However, these efforts have also been delayed. NPPD officials told GAO they are currently planning for increments 3 and 4 and plan to refine the cost, schedule, and performance goals for these increments in the program’s next APB. NPPD plans to pursue a separate contract for the development and delivery of increments 3 and 4. However, the program will require DHS leadership approval prior to initiating these development efforts.

In September 2017, NPPD officials told GAO they had hired two staff and planned to hire additional staff to address the program’s staffing gap of 5.5 full-time equivalents. In response to DHS leadership’s direction, the program coordinated with DHS’s Chief Technology Officer to assess the skills and functions of staff necessary to execute the program and to develop the HART staffing plan. In its June 2017 staffing plan, the program identified workforce risks, including the potential for insufficient technical skillsets and inadequate resources to simultaneously execute development of HART and operate IDENT. To mitigate these risks, the program plans to develop a training plan to address the gap in skills, leverage support within the program by cross-training staff, and issue contracts for additional support as needed, among other things. However, if the program does not have adequate staff to complete these efforts, it may experience further schedule delays.

NPPD officials stated that the program’s schedule delays pose a challenge because IDENT remains at risk of failure despite incremental improvements to extend its service life and may be unable to fully support new customer requirements or requirements related to new programs. They added that the program has a risk management process, which it is using to manage a variety of identified risks—including several related to workforce. They noted that these risks have not yet materialized.
NPPD officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

NATIONAL CYBERSECURITY PROTECTION SYSTEM (NCPS) NATIONAL PROTECTION AND PROGRAMS DIRECTORATE (NPPD)

NCPS is intended to defend the federal civilian government from cyber threats. NCPS develops and delivers capabilities through a series of “blocks.” Blocks 1.0, 2.0, and 2.1 are fully deployed and provide intrusion-detection and analytic capabilities across the government. The NCPS program is currently deploying EINSTEIN 3 Accelerated (EA) to provide intrusion-prevention capabilities and plans to deliver block 2.2 to improve information sharing across agencies. EA available at 95 percent of agencies and departments. GAO last reported on this program in April 2017 (GAO-17-346SP).

NPPD officials said the program is on track to meet the schedule and cost goals in its current acquisition program baseline (APB), which reflected changes resulting from the adoption of some of the Department of Homeland Security’s (DHS) Homeland Security Information Network (HSIN) capabilities for block 2.2 rather than developing custom solutions. However, challenges in completing test plans delayed testing: Initial operational test and evaluation (OT&E) for EA—which is intended to inform its transition to sustainment—slipped from September 2016 to May 2017. The initial test event for block 2.2—intended to inform the ADE 2C for deploying additional block 2.2 capabilities—slipped from March 2017 to September 2017.

As of August 2017, NPPD officials said NCPS had adopted all planned HSIN capabilities but one because of security concerns, which HSIN is addressing by piloting a new tool. The program updated its life-cycle cost estimate (LCCE) in June 2017 to inform the budget process, which is within its current APB cost thresholds. However, the program plans to update the LCCE again to support the EA transition to sustainment and to reflect costs through fiscal year 2022.
The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan no longer contained O&M funding for individual programs. NPPD officials anticipate receiving $1.8 billion in O&M funding over this 5-year period. The program is also projected to have an $83 million surplus in acquisition funding over this 5-year period, which NPPD officials anticipate will be less once the LCCE revision is complete.

In October 2017, the NCPS program completed the first block 2.2 operational assessment (OA), which focused on testing delivery of an information sharing portal to inform the program’s ADE 2C. In January 2018, DOT&E determined that it was too soon to assess block 2.2 progress toward operational effectiveness, suitability, and cybersecurity. DOT&E also noted block 2.2 is at risk of not meeting user needs because the portal comprises a small portion of planned capabilities and alignment with the operational requirements is unclear. DOT&E made a number of recommendations, including repeating the OA before conducting initial OT&E.

EA intrusion-prevention capabilities have been primarily provided through sole-source contracts with internet service providers (ISP) and a contract to provide basic intrusion-prevention services. In December 2015, Congress required DHS to make available for use by federal agencies certain capabilities, such as those provided by NCPS. EA was available at approximately 93 percent of civilian federal agencies and departments and, in January 2018, NPPD officials said NCPS was up to 95 percent.
According to NPPD officials, the program first focused on integrating EA for individual agencies and departments, but they continue to work with all agencies and departments to provide EA services; approximately 95 percent of the federal civilian .gov user population is protected by at least one EA intrusion-prevention service. The program also completed an OA of NCPS block 2.2 information sharing capabilities in 2017. NPPD officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

NEXT GENERATION NETWORKS PRIORITY SERVICES (NGN-PS) NATIONAL PROTECTION AND PROGRAMS DIRECTORATE (NPPD)

NGN-PS is intended to address an emerging capability gap in the government’s emergency telecommunications service, which prioritizes select officials’ phone calls when networks are overwhelmed. NPPD executes NGN-PS through commercial telecommunications service providers, which address the government’s requirements as they modernize their own networks. NPPD is executing NGN-PS in two phases—(1) voice and (2) data and video. Initial operational capability for voice phase wireless capabilities achieved in August 2017. Acquisition of data and video phase capabilities to begin in September 2021. GAO last reported on this program in April 2017 (GAO-17-346SP).

In November 2017, the Department of Homeland Security’s (DHS) Chief Financial Officer approved a revised life-cycle cost estimate (LCCE) for NGN-PS, which includes costs for the program’s entire voice phase and eliminates operations and maintenance (O&M) costs. The program removed O&M costs because capabilities acquired through NGN-PS are transferred to and funded through NPPD’s Priority Telecommunications Service (PTS) once they become operational.
NGN-PS is currently focused on delivering its voice phase, which is divided into three increments: Increment 1 maintains current priority service on long distance calls as commercial service providers update their networks; Increment 2 delivers wireless capabilities; and Increment 3 is intended to address landline capabilities. The program’s previous LCCE and current acquisition program baseline (APB) only include costs associated with increments 1 and 2. NPPD officials told GAO they plan to update the program’s APB in January 2018 to include costs, schedule, and performance goals for increment 3 and expect to receive DHS leadership approval to initiate development by August 2018. NGN-PS remains on track to meet its cost and schedule goals for the first two increments of the voice phase. The program’s full operational capability (FOC) for increment 1 previously slipped from June 2017 to March 2019, which NPPD officials attributed to funding shortfalls. NGN-PS achieved initial operational capability (IOC) for increment 2 wireless capabilities in August 2017 when priority service via cellular towers was demonstrated by the program’s largest service provider. The program projects an acquisition affordability gap of $92 million from fiscal years 2018 to 2022. However, DHS’s current funding plan does not include funding for increment 3, which accounts for the funding shortfall in fiscal years 2021 and 2022. NPPD officials said they anticipate receiving an additional $79 million in acquisition funding over this 2-year period, but will continue to prioritize capabilities if additional funding is not provided. These officials also said the program has achieved cost savings on increments 1 and 2 that will mitigate some of the projected shortfall in fiscal years 2018 and 2019. 
NGN-PS capabilities are evaluated through developmental testing and operational assessments conducted by service providers on their own networks. However, NPPD officials noted that each emergency is unique and that performance can be affected by damage to telecommunications infrastructure. NPPD officials review the service providers’ test plans, oversee tests to verify testing procedures are followed, and approve test results to determine when testing is complete. The OTA does not conduct a stand-alone operational test event for NGN-PS. Instead, the OTA leverages the service providers’ tests and actual operational data to assess program performance. NPPD officials also said that they continuously review actual NGN-PS performance and that all service providers undergo annual network service verification testing under the PTS program.

NGN-PS was established in response to an Executive Order requiring the federal government to have the ability to communicate at all times and during all circumstances to ensure national security and manage emergencies. A Presidential Policy Directive issued in July 2016 superseded previous directives requiring continuous communication services for select government officials. According to NPPD officials, the new directive validates the program’s requirements for the voice phase and was used to develop requirements for the video and data phase. The program expects to begin the acquisition of phase 2, for video and data, in September 2021.

In July 2017, NPPD reported that the program needed a systems engineer and was mitigating the vacancy with contracted support staff. The program also identified a need for an additional systems engineer and program support staff starting in fiscal year 2019 to support the start of increment 3.
In August 2017, NPPD officials told GAO they continue to face challenges hiring and retaining engineers with adequate experience because of competition with the private sector. The program has historically mitigated staffing gaps by leveraging support from contracted and PTS program staff, as needed. In addition to activities identified in this assessment, NPPD officials stated that the program has received Joint Requirements Council validation of the phase 2 concept of operations and DHS leadership approval of the phase 2 operational requirements document. As of January 2018, the updated APB was in the approval process. NPPD officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

NATIONAL BIO AND AGRO-DEFENSE FACILITY (NBAF) SCIENCE AND TECHNOLOGY DIRECTORATE (S&T)

The NBAF program is constructing a state-of-the-art laboratory in Manhattan, Kansas, to replace the Plum Island Animal Disease Center. The facility will enable the Department of Homeland Security (DHS) and the Department of Agriculture (USDA) to conduct research, develop vaccines, and provide enhanced diagnostic capabilities to protect against foreign animal, emerging, and zoonotic diseases that threaten the nation’s food supply, agricultural economy, and public health. Commissioning process underway, but performance will not be demonstrated until construction is complete. NBAF adequately staffed, but staffing needs will change as operational stand-up activities begin. GAO last reported on this program in April 2017 (GAO-17-346SP).

The program’s annual life-cycle cost estimate (LCCE) update is within its current acquisition program baseline (APB) cost thresholds and, according to NBAF officials, the program remains on track to meet its schedule goals.
In August 2017, NBAF officials said that construction activities thus far—such as pouring concrete for the main laboratory and steel framing—have proceeded as anticipated and will continue through December 2020. NBAF officials told GAO the program has already received full acquisition funding for facility construction efforts through federal appropriations and gift funds from the state of Kansas. As construction continues, the program plans to begin operational stand-up activities for the facility. However, a potential affordability gap may delay the program’s ability to complete these stand-up activities, which are needed to begin conducting laboratory operations. The program was not included in DHS’s funding plan to Congress for fiscal years 2018 to 2022 because DHS did not report operations and maintenance (O&M) funding for individual programs. However, NBAF officials anticipate receiving only $149 million in O&M funding to cover an estimated $239 million in O&M costs over the next 5 years, resulting in a projected shortfall of approximately $90 million. NBAF officials stated the O&M funding gap could delay a number of operational stand-up activities, including plans to award a management operations and research support contract in October 2018, the purchase of laboratory and information technology equipment, and hiring of operations management staff. According to NBAF officials, if operational stand-up activities are delayed, there is a risk the facility will not be fully operational by December 2022, as is currently planned. This may delay the transition from the Plum Island Animal Disease Center, which is nearing the end of its useful life. NBAF officials reported that S&T plans to communicate the program’s future funding needs to DHS leadership through the annual budget process. 
If the program does not receive the funding it requests, these officials stated that S&T will prioritize the operational stand-up activities that best reduce the risk of schedule delays.

A third-party commissioning agent has been retained as a subcontractor to the prime construction management contractor, and NBAF officials stated that a commissioning plan has been in place since 2012. According to NBAF officials, the commissioning agent worked with the facility design and construction teams to develop the commissioning plan, and detailed procedures are in place to install and commission equipment in the facility. The commissioning agent will monitor and test the facility’s equipment and building systems while construction is ongoing to ensure they are properly installed and functioning according to appropriate biosafety specifications. The commissioning agent will report its findings directly to program officials and coordinate with other entities involved in the commissioning process, including the NBAF program office, the construction management contractor, and end users, among others. Full commissioning of the facility is scheduled to be completed by May 2021, 6 months after the completion of construction.

NBAF officials reported that they coordinate regularly with key stakeholders. For example, they hold regular coordination meetings with USDA officials to discuss NBAF operations, including operational stand-up activities and future procurement. The NBAF program office has also begun outreach to the federal regulators responsible for awarding the registrations needed for NBAF to conduct laboratory operations, to begin planning for this authorization process. The NBAF program office is currently fully staffed.
However, NBAF officials reported the program's staffing needs will change in the coming years, as the program progresses through construction and begins operational stand-up of the facility. For example, over the next 5 years, the program will need to hire an operations director, bio-risk manager, chief information officer, and facility manager, among others, for NBAF operations management. At the same time, the projected O&M funding shortfall during this period could affect the program's ability to hire new staff when needed and complete operational stand-up activities on time. NBAF officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

ELECTRONIC BAGGAGE SCREENING PROGRAM (EBSP) TRANSPORTATION SECURITY ADMINISTRATION (TSA)

Established in response to the terrorist attacks of September 11, 2001, EBSP tests, procures, and deploys transportation security equipment, such as explosives trace detectors and explosives detection systems, across approximately 440 U.S. airports to ensure 100 percent of checked baggage is screened for explosives. EBSP is primarily focused on delivering new systems with enhanced screening capabilities and developing software upgrades for existing systems. The program is incorporating requirements to address cybersecurity risk for existing systems. EBSP plans to pursue a new procurement approach in 2018, and staffing challenges exist. GAO last reported on this program in April 2017 (GAO-17-346SP). In the program's annual life-cycle cost estimate update, its operations and maintenance (O&M) costs exceeded the acquisition program baseline (APB) cost threshold, which constitutes a breach under the Department of Homeland Security's (DHS) acquisition policy. The O&M costs increased when TSA accounted for updated maintenance costs and quantities, and shifted salaries from acquisition to O&M to align with DHS's new appropriation structure.
TSA officials said they did not submit a breach notification because they considered the movement of salaries to be an administrative change. The program plans to update its APB in calendar year 2018 to reflect a new plan for procuring equipment under its current acquisition strategy. TSA officials said this APB will also reflect the cost changes. In May 2016, DHS leadership approved a revised APB for EBSP, which reflects its current acquisition strategy to competitively procure systems on an ongoing basis using qualified product lists. The program's revised APB cost thresholds decreased compared to its initial APB, which TSA officials attributed to factors including shortening the program's end date by 3 years and lower-than-anticipated actual costs. TSA officials told GAO that one of their primary challenges is funding, and the program is projected to face a $72 million acquisition funding shortfall in fiscal year 2018. TSA identified $70 million in carryover funding to address this gap. To mitigate anticipated funding gaps in future years, TSA officials said they may shift projects from one fiscal year to another or cancel them altogether, which may result in the delay or elimination of screening capabilities. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS's funding plan to Congress no longer contained O&M funding for individual programs. TSA anticipates receiving $980 million in O&M funding over this 5-year period to cover $1 billion in O&M costs. TSA officials anticipate achieving the program's final APB milestone—initial operational capability (IOC) for systems that detect additional materials and provide an advanced threat detection algorithm—by its revised threshold date. Previously, EBSP planned to award contracts for these systems in September 2015 and September 2018, respectively.
TSA officials previously stated that EBSP has demonstrated that all deployed systems can meet the program's key performance parameters, including automated threat detection, throughput, and operational availability. In September 2017, TSA officials said they had identified a critical need for improved cybersecurity requirements and plan to update the program's acquisition documentation starting in 2018. Since March 2011, DHS's Director, Office of Test and Evaluation (DOT&E) has assessed the operational test and evaluation (OT&E) results of 11 EBSP systems from multiple vendors and determined that 6 are effective and suitable. Most recently, DOT&E found that a medium-speed explosives detection system with an advanced threat detection algorithm tested in May 2017 was effective with limitations and not suitable, primarily because of the increase in manpower needed to operate the system on a long-term, continuous basis. TSA officials do not have any plans to retest this system within the next year. DOT&E also found that a reduced-size standalone explosives detection system tested in March 2017 was suitable with limitations, but not effective because of multiple factors resulting in the inability of operators to maintain control of baggage. As of December 2017, EBSP had deployed 1,664 explosives detection systems and 2,638 explosives trace detectors nationwide. In 2018, EBSP plans to pursue a new competitive procurement approach to replace and update existing systems that will include:
• New contract vehicles to better align EBSP procurement activities with the program's strategic roadmap.
• Updates to EBSP's vendor qualification process to allow for vendor collaboration before testing.
• Transitioning from procuring systems with different sizes and speeds to two types: (1) inline systems that integrate with a baggage handling system and are linked through a network and (2) standalone systems that may be integrated with a baggage handling system, but not linked to a network.
The program is in the process of updating its acquisition documentation to reflect this new procurement approach, and TSA officials anticipate opening a qualified products list for new systems starting in June 2018. TSA officials said that staffing remains a challenge for the program because of cuts in government and contracted mission support staff and critical vacancies, including a division director. In September 2017, TSA reported that existing personnel across the program have assumed the responsibilities of these positions, but workloads are unsustainable at current staffing levels. TSA officials stated that EBSP continues to procure, test, and deploy equipment and capabilities to recapitalize older equipment, improve security screening capability at airports, and enhance the detection capabilities of the fleet. They added that TSA employs extensive testing to verify the suitability and effectiveness of equipment to meet requirements. Moving forward, EBSP intends to establish IOC milestones for new technologies and capabilities, while allowing TSA the flexibility to make risk-based decisions. TSA officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

PASSENGER SCREENING PROGRAM (PSP) TRANSPORTATION SECURITY ADMINISTRATION (TSA)

The Department of Homeland Security (DHS) established PSP in response to the terrorist attacks of September 11, 2001. PSP identifies, tests, procures, deploys, and sustains transportation security equipment across approximately 440 U.S. airports to help TSA officers identify threats concealed on people and in their carry-on items.
The program aims to increase threat detection capabilities, improve the efficiency of passenger screening, and balance passenger privacy and security. PSP started testing the Credential Authentication Technology in TSA Precheck lanes during 2017. Critical staffing vacancies persist and may delay follow-on acquisition planning efforts. GAO last reported on this program in April 2017 (GAO-17-346SP). In May 2017, the DHS Under Secretary for Management (USM) approved the sixth version of the PSP acquisition program baseline (APB) and subsequently removed the program from breach status. In January 2016, TSA declared a schedule breach of a key milestone—acquisition decision event (ADE) 3—for the Credential Authentication Technology (CAT) because of delays in incorporating new cybersecurity requirements. Consistent with previous versions of the program's APB, the new baseline modified the program's cost, schedule, and performance parameters. For example, the program established the following:
• Separate CAT milestone dates for TSA Precheck and standard lanes. TSA officials stated there is no capability difference between screening lanes, but an initial focus on TSA Precheck lanes will assist with demonstrating CAT requirements and resolving past testing issues that contributed to an initial 4-year delay to CAT's full operational capability (FOC) date. PSP now plans to reach FOC for CAT more than 5 years later than its revised target of June 2018 and more than 9 years later than initially planned.
• New FOC dates for other technologies, which TSA officials said are expected to be more realistic about delivery dates and account for changes in some FOC quantities. For example, TSA requested and received approval in September 2017 to increase FOC quantities for second generation Advanced Technology (AT-2) Tier I systems to meet increasing passenger volume and expected airport growth.
In May 2017, the USM also directed the program to revise its life-cycle cost estimate (LCCE) in response to less-than-expected funding levels. The new LCCE also shifted some acquisition costs to operations and maintenance (O&M) to be consistent with DHS's new appropriation structure. TSA officials believe the new funding profile will be sufficient to sustain legacy PSP equipment, but will significantly limit the program's ability to enhance existing equipment capabilities and support operational needs. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS's funding plan to Congress no longer contained O&M funding for individual programs. TSA anticipates receiving $906 million in O&M funding over this 5-year period to cover $923 million in O&M costs. Key milestone dates include:
• 05/17 APB version 6.0 approved
• 03/20 CAT ADE 3 (precheck lanes)
• 09/21 CAT ADE 3 (standard lanes)
• 12/21 CAT FOC (precheck lanes)
• 12/23 CAT FOC (standard lanes)
Since August 2010, DHS's Director, Office of Test and Evaluation (DOT&E) has assessed the test results of eight PSP systems from multiple vendors and determined that three are effective and suitable. Most recently, DOT&E reviewed the results from an assessment of automated screening lanes, which TSA began pursuing in fall 2016 in response to an urgent operational need to address increasing passenger wait times. DOT&E found that automated systems showed potential to increase passenger screening rates, but noted some adverse impact on system performance and availability. Recent test events include the automated screening lanes operational utility assessment and the AT-2 Tier II follow-on operational test and evaluation (OT&E). Going forward, TSA plans to conduct testing on updates made to existing PSP systems, as well as complete testing of CAT. TSA initiated CAT developmental testing in TSA Precheck lanes in late fiscal year 2017 and anticipates completing operational testing by June 2019.
Testing will expand to standard screening lanes shortly thereafter and is expected to be complete by September 2020. However, in November 2017, DHS leadership approved TSA's proposal to transfer requirements from the Security Technology Integrated Program, which provides critical data connectivity capabilities, to CAT to reduce the dependency between the programs. DHS leadership directed TSA to complete several actions to account for this change, including updating CAT's operational requirements document and test and evaluation master plan. In January 2018, TSA officials said that they determined CAT's current operational requirements document was still valid and anticipate updating the test and evaluation master plan by March 2018. TSA employs two acquisition strategies to acquire PSP systems:
• Qualified Product List (QPL) approach—used for proven technologies when capability requirements are rigid and contractors' systems are mature. Any contractors' systems that demonstrate they meet the capability requirements are added to the QPL. TSA has used this approach to acquire the second generation AT-2 systems, Bottled Liquid Scanners, and Explosive Trace Detectors.
• Low Rate Initial Production (LRIP) approach—used when capability requirements are flexible and contractors' systems are evolving. Under this approach, PSP uses a series of development contracts to enhance systems' capabilities over time. PSP is currently using this approach to acquire CAT.
TSA planned to initiate new acquisition programs starting in fiscal year 2018 that will replace PSP, but this effort may be at risk because of understaffing. In August 2017, TSA reported that its checkpoint screening division—whose staff is concurrently responsible for PSP and its follow-on programs—continued to have staffing vacancies, including project managers, analysts, and a deputy program manager.
TSA is mitigating these gaps with existing staff and, according to TSA officials, the staffing challenges may decrease because the new programs may be delayed in response to funding cuts. TSA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

TECHNOLOGY INFRASTRUCTURE MODERNIZATION (TIM) TRANSPORTATION SECURITY ADMINISTRATION (TSA)

The TIM program was initiated to address shortfalls in TSA's threat assessment screening and vetting functions by providing a modern and centralized end-to-end credentialing system. The TIM system will manage credential applications and the review process for millions of transportation workers and travelers across three segment populations: maritime, surface, and aviation. It will support large programs, such as TSA Precheck and the Transportation Worker Identification Credential. Operational testing identified limitations with the system; cybersecurity has not been assessed. Staffing gaps in key areas, such as systems engineering and testing, are a significant program risk. GAO last reported on this program in October and April 2017 (GAO-18-46, GAO-17-346SP). The TIM program is on track to meet the cost and schedule goals in its current acquisition program baseline (APB). In September 2016, the Department of Homeland Security's (DHS) Under Secretary for Management approved the TIM program's revised APB—which reflected a new technical approach to deploy capabilities using an agile development methodology—and subsequently removed the program from breach status, authorizing TSA to resume new development after a nearly 22-month pause. DHS leadership paused new development in January 2015 after the program breached its initial APB goals for various reasons, including technical challenges, insufficient contractor performance, and the addition of new requirements after DHS leadership had approved the program's initial acquisition strategy.
The program now plans to achieve full operational capability (FOC) in March 2022, and its life-cycle cost estimate (LCCE) increased to account for this 6-year schedule slip and integration with the Transportation Vetting System, among other things. Since the program's re-baseline, it has been developing and deploying capabilities in 2-month incremental agile releases, such as functionality to transition the TSA Precheck program to the TIM system. The program updated its LCCE in November 2017 to inform a program review with DHS leadership; the updated estimate is within its current APB cost thresholds. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS's funding plan to Congress no longer contained operations and maintenance (O&M) funding for individual programs. TSA officials anticipate receiving approximately $318 million in O&M funding over this 5-year period, which includes nearly $118 million in fees from vetting programs. TSA officials plan to realign $57 million to cover the projected acquisition shortfall, and said any additional surplus funding available in fiscal year 2022 would be used to implement new system requirements identified by the program's customers. In November 2017, TSA officials identified several program and technical risks that could affect the program's cost, schedule, and performance. These risks include an increase in new requirements and an increased risk of system vulnerabilities and cyberattacks if the program does not identify a provider to perform software updates on open source code. TSA officials are working to mitigate these risks.
In April 2017, DHS's Director, Office of Test and Evaluation (DOT&E) assessed the results of the program's November 2016 follow-on operational test and evaluation (OT&E) for the maritime segment and determined that the system:
• met two of its four key performance parameters (KPP),
• was operationally effective and suitable with limitations, and
• was not cyber-secure because threat-based cybersecurity testing was deferred to November 2018, after the program completes its migration to a new production environment.
The OTA did not evaluate the program's KPP related to enforcing system user access controls because it was new to the TIM program when testing began. In addition, the OTA cannot conduct testing on the program's remaining KPP related to information reuse until the surface and aviation segments are deployed. In March 2017, DOT&E approved a new test and evaluation master plan for the TIM program, which calls for the OTA to conduct continuous operational testing for each 2-month agile release and document the results in a dashboard. According to TSA officials, the results of each release are provided to DOT&E, but DOT&E does not provide a formal assessment of these results. DOT&E plans to assess the results of the program's cybersecurity testing in late calendar year 2018. Under the program's new technical approach, TSA plans to replace the TIM system's existing commercial-off-the-shelf applications with open source applications—software that can be accessed, used, modified, and shared by anyone—and move to a new virtual environment. The program's new agile development methodology develops, tests, and deploys capabilities using an iterative, rather than a sequential, approach.
Consistent with this strategy, TSA awarded task orders in 2016 and 2017 totaling $34.5 million to the program's existing contractor for agile design and development services, and plans to competitively award a new contract by May 2018. In October 2017, GAO found that TSA had not fully implemented several leading practices to ensure successful agile adoption. GAO also found that TSA and DHS needed to conduct more effective oversight of the TIM program to reduce the risk of repeating past mistakes. DHS concurred with all 14 recommendations made by GAO to improve program execution and oversight, and identified actions DHS and TSA can take to address them. TSA reported that staffing challenges are a significant risk to the program's success and identified gaps in key areas—such as systems engineering, testing, and agile development. Program officials told GAO these positions cannot be filled because of a hiring freeze within TSA, which the component has imposed to assess its current workforce and restructure, if necessary. Program officials told GAO they requested waivers from the hiring freeze and, as of January 2018, they had received approval to hire 4 additional staff. TSA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

FAST RESPONSE CUTTER (FRC) UNITED STATES COAST GUARD (USCG)

The USCG uses the FRC to conduct search and rescue, migrant and drug interdiction, and other law enforcement missions. The FRC carries one cutter boat on board and is able to conduct operations in moderate sea conditions. The FRC replaces the USCG's Island Class patrol boat and provides improved fuel capacity, surveillance, and communications interoperability with other Department of Homeland Security (DHS) and Department of Defense assets. The FRC was found operationally effective and suitable, and all key performance parameters were validated. Main diesel engine issues persist, which may require further retrofits.
GAO last reported on this program in March and April 2017 (GAO-17-218, GAO-17-346SP). According to USCG officials, the FRC program is on track to meet its current cost and schedule goals. The USCG plans to acquire 58 FRCs and, as of September 2017, 25 had been delivered and 19 were on contract. To inform the budget process, the program updated its life-cycle cost estimate in June 2017, which is within its current acquisition program baseline (APB) cost thresholds. Previously, the program's initial operational capability (IOC) date slipped after a bid protest related to the program's initial contract award—now known as phase 1—and the need for structural modifications. USCG officials attributed the 5-year slip in the program's full operational capability (FOC) date to a decrease in annual procurement quantities under the phase 1 contract. Specifically, in fiscal years 2010 and 2011, the quantities decreased from 6 FRCs per year to 4. In May 2014, the USCG determined that it would procure only 32 of the 58 FRCs through this contract and initiated efforts to conduct a full and open competition for the remaining 26 vessels—known as phase 2. In May 2016, the USCG awarded the phase 2 contract for the remaining 26 FRCs, which has a potential value of $1.42 billion. Under the phase 2 contract, the USCG can procure 4 to 6 FRCs per option period. The USCG ordered 6 FRCs at the time of the phase 2 award and, in June 2017, exercised an option for an additional 6 FRCs. The USCG has established that the annual procurement quantity will be dictated by funding levels, and funding shortfalls could cause further schedule delays. The affordability gap from fiscal years 2018 to 2022 may be overstated because—as we found in April 2015—DHS's funding plan to Congress does not contain operations and maintenance (O&M) funding for USCG programs. USCG officials anticipate receiving $1.6 billion in O&M funding over this 5-year period.
USCG officials stated that they expect to exercise an option for 4 FRCs in fiscal year 2018 and that the USCG plans to prioritize acquisition funding in fiscal years 2019 and 2020 to procure the final 10 hulls and complete procurement of all 58 FRCs. DOT&E noted that these deficiencies do not prevent mission completion or present a danger to personnel, but recommended that they be resolved as soon as possible. USCG officials indicated that they plan to resolve the remaining deficiencies through engineering or other changes. The USCG continues to work with the contractor—Bollinger Shipyards, LLC—to address issues covered by the warranty and acceptance clauses for each ship. For example, 18 engines—9 operational engines and 9 spare engines—have been replaced under the program's warranty. According to USCG documentation, 65 percent of the current issues with the engines have been resolved through retrofits; however, additional problems with the engines have been identified since our April 2017 review. For example, issues with water pump shafts are currently being examined through a root cause analysis; the shafts will be redesigned and are scheduled to undergo retrofits starting in December 2018. We previously found that the FRC's warranty resulted in improved cost and quality by requiring the shipbuilder to pay for the repair of defects. As of September 2017, USCG officials said the replacements and retrofits completed under the program's warranty allowed the USCG to avoid an estimated $104 million in potential unplanned costs—of which $63 million is related to the engines. The FRC program does not have any critical staffing vacancies, but the USCG identified insufficient staffing for shore-side support groups as a potential risk that could affect the asset's operations. These groups provide maintenance to the FRCs while they are in port.
In order to mitigate this staffing issue, the USCG is using commercial contracts for maintenance to supplement the capacity of the USCG's maintenance staff. USCG officials stated that the FRC program is fully funded, executable, and on track to reach FOC by March 2027. They added that FRCs were recently delivered to locations in Mississippi, Alaska, and Hawaii. USCG officials stated that FRCs are integral to USCG operations, such as providing critical support during the recent hurricane season, and that the program office continues to work with the contractor and stakeholders to quickly and properly address issues with FRCs as they are identified. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

H-65 CONVERSION/SUSTAINMENT PROGRAM (H-65) UNITED STATES COAST GUARD (USCG)

The H-65 aircraft is a short-range helicopter that the USCG uses to fulfill its missions, including search and rescue, ports and waterways security, marine safety, and defense readiness. The H-65 acquisition program increased the fleet's size by 7 aircraft, added armament capabilities, upgraded navigation systems, and replaced each of the helicopters' engines. The program is currently focused on upgrades to radar sensors, the automatic flight control system (AFCS), and avionics. An operational assessment of the avionics upgrade is planned to start in February 2018. The program is fully staffed, but schedule slips raise risks with future staffing requirements. GAO last reported on this program in April 2017 (GAO-17-346SP). As of November 2017, the program remains in breach of its current acquisition program baseline (APB).
In November 2016, the USCG notified Department of Homeland Security (DHS) leadership that it would not complete all activities required—including developmental testing and an operational assessment—to achieve acquisition decision event (ADE) 2C for low-rate initial production of the avionics and AFCS upgrades by its current APB threshold date of March 2017. USCG officials primarily attributed these delays to an underestimation of the technical effort necessary to meet the requirements and have subsequently worked with the contractor to continue development of the avionics upgrades. In January 2017, DHS leadership directed the program to update its APB, life-cycle cost estimate (LCCE), and test and evaluation master plan by May 2017. However, the USCG did not meet this deadline, in part, because it decided to add a service life extension program (SLEP) to the H-65 program. The SLEP is expected to extend the current 20,000-flight-hour service life of each aircraft by another 10,000 flight hours by replacing obsolete aircraft components. USCG officials stated that this will allow the USCG to delay purchasing new aircraft to prioritize funding for the Offshore Patrol Cutter. USCG officials plan to obtain approval for the SLEP when the program submits its revised APB for DHS approval, which is expected by March 2018. The program is revising its LCCE, but provided an update in June 2017 to inform the budget process. This update exceeds its current APB thresholds because it includes an initial estimate for the SLEP. The USCG estimates that the SLEP will cost $54 million for the entire fleet. USCG officials attributed the increase in operations and maintenance (O&M) costs to the additional extension of the aircraft's operational life. The program's O&M costs previously increased due to the USCG's decision to extend the aircraft's operational life from 2030 to 2039.
The affordability gap from fiscal years 2018 to 2022 may be overstated because—as we found in April 2015—DHS's funding plan to Congress does not contain O&M funding for USCG programs. USCG officials anticipate receiving $1.6 billion in O&M funding over this 5-year period. The program's OTA plans to conduct an operational assessment starting in February 2018 to identify areas of risk before beginning initial operational test and evaluation (OT&E) in late calendar year 2018. Initial OT&E is intended to test all of the H-65 upgrades installed throughout the life of the program to support approval for full-rate production. The USCG awarded new contracts to Rockwell Collins—the original equipment manufacturer of the legacy AFCS and avionics—to address the challenges encountered with development of the new upgrades. Specifically, the program awarded new contracts to support continued development of the AFCS and avionics upgrades in July 2016 and March 2017, respectively. As of September 2017, the combined value of both contracts totaled more than $15 million. The USCG cancelled development of a dedicated surface search radar capability for the H-65 in 2014, but USCG officials said a commercial off-the-shelf weather radar with surface search capability will be installed as part of the avionics upgrade. USCG officials said there is some risk involved with extending the aircraft's service life beyond 20,000 flight hours since this has never been done by other agencies that operate the H-65. However, USCG officials stated that the aircraft manufacturer, Airbus, assisted the USCG's chief aeronautical engineer in identifying specific parts needing replacement and is providing support. In July 2017, the USCG reported that the program was fully staffed, but that the schedule slips have introduced potential risks with future staffing requirements.
The program is mitigating these risks by extending some military personnel and ensuring rotating personnel are replaced by new staff with the expertise needed to complete the program's planned activities, such as testing. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

UNITED STATES COAST GUARD (USCG)

The program is intended to assist the USCG in maintaining the capability to access the Arctic and Antarctic polar regions. The USCG requires its icebreaking fleet to conduct multiple missions, including defense readiness; marine environmental protection; ports, waterway, and coastal security; and search and rescue. The USCG plans to acquire three heavy icebreakers to recapitalize the only existing operational heavy icebreaker, which is nearing the end of its service life. The program initiated model testing of hull and propulsion systems, which will inform design decisions. The program office integrates USCG and Navy personnel, but funding responsibilities may cause challenges. GAO last reported on this program in September 2017 (GAO-17-698R). In June 2014, Department of Homeland Security (DHS) leadership granted the program acquisition decision event (ADE) 1 approval. The Acting Under Secretary for Management also acknowledged the USCG's need to accelerate the acquisition process to mitigate gaps in heavy icebreaking capability because the service life of the USCG's only heavy polar icebreaker, which had already been extended, could end as early as 2020. In January 2018, DHS leadership approved the program's initial acquisition program baseline (APB) establishing cost, schedule, and performance goals. The USCG planned to achieve a combined ADE 2A and 2B by December 2017, which would authorize the initiation of development efforts.
According to DHS officials, this milestone was delayed to February 2018 to allow for the completion of required acquisition documents to inform the decision, such as the program's life-cycle cost estimate and APB. The USCG is partnering with the Navy to leverage shipbuilding expertise and engaging early with potential shipbuilders through industry studies to mitigate some risks associated with the program's accelerated acquisition schedule. However, GAO previously found that the program faces challenges in implementing the accelerated schedule. For example, the first icebreaker—which is preliminarily estimated to cost about $750 million to design and construct—would need to be fully funded in fiscal year 2019, at the same time the USCG is expecting to prioritize funding for the Offshore Patrol Cutter. In fiscal year 2017, the Consolidated Appropriations Act and associated explanatory materials reflected funding for the program, including $150 million to the Navy for advance procurement of heavy polar icebreakers and $25 million to the USCG for programmatic costs. USCG officials stated that the Navy funding could cover most of the design costs but would not cover long lead items or construction costs for any of the ships. They further stated that uncertainties with the amount and source of future appropriations have made planning the icebreaker acquisition challenging. DHS leadership approved four key performance parameters (KPP) related to the ship's ability to independently break through ice, the ship's operating duration, and communications. In May 2017, the USCG began model testing of potential hull designs and propulsion configurations. USCG officials explained that the hulls of icebreakers are unique among ships because they must balance a hull design optimized for icebreaking, which is generally broad and blunt, against a hull design optimized for seakeeping, which is generally narrow and streamlined.
USCG officials noted that the power demands and propulsion system for the ship are dependent on the hull design. USCG officials stated that maneuverability was identified as a challenge during model testing and explained that azimuthing propulsors—propellers that sit below the ship and can rotate 360 degrees—offered better maneuverability than traditional propulsion systems. USCG officials said these propulsors are widely used on commercial ships, but may need modification to meet the USCG’s requirements. USCG officials anticipate that model testing will be completed by March 2018 and plan to use the results to inform the final specifications for the ships. In November 2017, DHS’s Director, Office of Test and Evaluation approved the program’s test and evaluation master plan, which calls for additional model testing to assess resistance, propulsion, and maneuverability. The USCG established an integrated heavy polar icebreaker program office with the Navy, and in 2017 DHS, the USCG, and the Navy entered into several agreements that outline oversight roles, among other things. For example, these agreements state that the program will follow DHS acquisition policies with DHS leadership serving as the acquisition decision authority for program milestones. However, the Navy will review and approve acquisition documents before the program seeks DHS approval. These agreements also state that the program’s contracting actions could be funded by either USCG or Navy appropriations, and the agency providing the appropriations will award the contract. The program plans to competitively award a contract to a single shipbuilder by June 2019 that would include options for the detail design and construction of all three ships. Program officials stated they plan to award the contract under full and open competition to obtain competitive prices and include the construction of the three ships as options to accommodate the program’s funding uncertainties.
In February 2017, the USCG awarded contracts to five shipbuilders—valued at approximately $4 million each—for design studies that will inform program decisions. Program officials stated that under these design study contracts, the shipbuilders developed several potential ship designs and preliminary costs, with a focus on alternative propulsion options and hull designs. In August 2017, USCG officials told GAO that the program’s staffing gap was not negatively impacting program efforts. USCG officials stated that the program office had completed requirements for ADE 2A and 2B, and is on track to release the request for proposals for the detail design and construction contract by March 2018. These officials added that, during 2017, the program office refined the program’s requirements, completed ice and open water model testing, and partnered with five industry teams to evaluate multiple design solutions. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. LONG RANGE SURVEILLANCE AIRCRAFT (HC-130H/J) UNITED STATES COAST GUARD (USCG) The USCG uses HC-130H and HC-130J aircraft to conduct search and rescue missions, transport cargo and personnel, support law enforcement, and execute other operations. Both aircraft are quad-engine propeller-driven platforms. The HC-130J is a modernized version of the HC-130H, with advanced engines, propellers, and equipment that provide enhanced speed, altitude, range, and surveillance capabilities. Performance testing of new mission system processor complete. Transfer of HC-130H aircraft to other agencies ongoing. GAO last reported on this program in April 2017 (GAO-17-346SP). During 2017, the USCG continued a nearly 3-year effort to re-baseline the program—which includes revisions to the program’s life-cycle cost estimate (LCCE) and acquisition program baseline (APB)—to account for significant changes.
Specifically, the USCG decided to pursue an all HC-130J fleet and, in fiscal year 2014, Congress directed the transfer of 7 HC-130H aircraft to the U.S. Air Force. The USCG was in the process of upgrading these aircraft, but cancelled further HC-130H upgrades. In September 2017, Department of Homeland Security (DHS) leadership directed the USCG to submit the revised APB by January 2018. According to USCG officials, the re-baseline has been delayed, in part, because Congress also directed the USCG to conduct a multi-phased analysis of its mission needs. In November 2016, the USCG submitted the results of its analysis for fixed-wing aircraft, which confirmed the planned total quantity of 22 HC-130J aircraft and an annual flight-hour goal of 800 hours per aircraft. USCG officials said the results of the analysis will be reflected in the program’s revised LCCE and subsequent APB, but noted that challenges with the vendor hired to complete the LCCE revision have also contributed to delays. The program submitted cost information in June 2017 to inform the budget process, but it reflected no updates from the program’s November 2011 LCCE. USCG officials previously attributed the acquisition cost growth and schedule slip from the program’s initial APB to the increase in HC-130J quantities from 6 to 22. However, when the revised LCCE is complete, estimated costs may decrease since the HC-130J aircraft are less expensive to maintain. As of December 2017, USCG officials stated they had received 11 HC-130J aircraft and had awarded contracts for 3 more—some of which the USCG had not requested. USCG officials previously stated that the program needs to acquire 1-2 HC-130J aircraft per year to meet its full operational capability (FOC) date. However, it is unclear how the USCG will meet its FOC date because it requested funding for only 1 aircraft over the next 5 years.
The affordability gap from fiscal years 2018 to 2022 may be overstated because—as we found in April 2015—DHS’s funding plan to Congress does not contain operations and maintenance (O&M) funding for USCG programs. USCG officials anticipate receiving approximately $1.4 billion in O&M funding over this 5-year period. The HC-130J will not be able to meet two of its seven key performance parameters (KPP) until the USCG installs a new mission system processor on the aircraft—an effort that is already underway. These two KPPs are related to the detection of targets and the aircraft’s ability to communicate with other assets. The USCG is replacing the mission system processor on its fixed-wing aircraft—including the HC-130J—with a system used by the U.S. Navy and DHS’s Customs and Border Protection. The new mission system processor is intended to enhance operator interface and sensor management, and replace obsolete equipment. The USCG conducted developmental testing on a prototype of the HC-130J mission system processor. According to USCG officials, this testing was completed in June 2017 and successfully demonstrated the new mission system processor in a variety of operational environments. The USCG does not plan to operationally test the new processor on the HC-130J, in part, because the aircraft has already been tested. In 2009, DHS’s Director, Office of Test and Evaluation and the USCG determined that operational testing of the HC-130J airframe was not needed because the U.S. Air Force had conducted operational testing on the base C-130J airframe in 2005. Instead, the USCG plans to operationally test the new mission system processor in fiscal year 2021 during operational testing on the C-27J, which is new to the USCG’s fixed-wing fleet. As of November 2017, the USCG had accepted three HC-130J aircraft outfitted with the new mission system processor.
In December 2013, Congress directed the transfer of 7 HC-130H aircraft to the U.S. Air Force for modifications—which consist of upgrades and the installation of a fire retardant delivery system—and subsequent transfer to the U.S. Forest Service. This direction factored into the USCG’s decision to pursue an all HC-130J fleet. As of December 2017, the Forest Service had not yet received any modified aircraft primarily because of issues with contractors. According to USCG officials, the original contract the Air Force awarded to install the fire retardant delivery system in May 2016 was terminated 7 months later due to an unqualified vendor, and a new contract has not yet been awarded. In the meantime, the Forest Service is using 2 of the 7 HC-130Hs. USCG officials said these aircraft are not modified, but outfitted with a less effective firefighting device. As of November 2017, the USCG plans to operate 14 of its HC-130H aircraft until the end of their service lives or until they can be replaced with new HC-130J aircraft. However, as previously discussed, the USCG has not requested funding for the additional HC-130J aircraft to support this plan. In October 2017, USCG officials reported that they were in the process of hiring staff to address the program’s staffing gap. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. MEDIUM RANGE SURVEILLANCE AIRCRAFT (HC-144A/C-27J) UNITED STATES COAST GUARD (USCG) The USCG uses HC-144A and C-27J aircraft to conduct all types of missions, including search and rescue and disaster response. All 32 aircraft—18 HC-144A aircraft and 14 C-27J aircraft—are twin-engine propeller-driven platforms. The interiors of both aircraft can be reconfigured to accommodate cargo, personnel, or medical transports. Developmental testing of new mission system processor is ongoing. Program continues to face challenges related to purchasing spare parts and accessing technical data.
GAO last reported on this program in April 2017 and March 2015 (GAO-17-346SP, GAO-15-325). USCG officials said the program is on track to meet the cost and schedule goals in its current acquisition program baseline (APB), which Department of Homeland Security (DHS) leadership approved in August 2016 to reflect the restructuring of the HC-144A acquisition program. The USCG initially planned to procure a total of 36 HC-144A aircraft, but reduced that number to the 18 it had already procured after Congress directed the transfer of 14 C-27J aircraft from the U.S. Air Force to the USCG in fiscal year 2014. The program’s APB divides the program into two phases: phase 1 includes acceptance of the 18 HC-144A aircraft and upgrades to the aircraft’s mission and flight management systems, and phase 2 includes acceptance of and modifications to the C-27J aircraft to meet the USCG’s mission needs. In October 2017, USCG officials told GAO that the program had initiated phase 1 efforts to upgrade the first HC-144A aircraft. The USCG plans to complete upgrades on all HC-144As by the end of fiscal year 2021. For phase 2, the USCG has accepted all 14 C-27Js from the U.S. Air Force and plans to complete the modification of all C-27Js by March 2025 to achieve full operational capability (FOC). To inform the budget process, the program updated its life-cycle cost estimate (LCCE) in June 2017, which is within its current APB cost thresholds. This estimate includes C-27J modification costs, such as installation of a new sensor package and new mission system processor. The program’s LCCE for the 36 HC-144A aircraft previously increased to $28.7 billion in 2012 when the USCG accounted for 5 years of additional costs, among other things. The current LCCE represents a considerable decrease, but also reflects a reduction in the number of aircraft and planned flight hours. 
The affordability gap from fiscal years 2018 to 2022 may be overstated because—as we found in April 2015—DHS’s funding plan to Congress does not contain operations and maintenance (O&M) funding for USCG programs. USCG officials anticipate receiving nearly $1.7 billion in total funding over this 5-year period to cover nearly $1.8 billion in total costs. Neither the HC-144A nor the C-27J will be able to meet two of their seven key performance parameters (KPP) until the USCG installs a new mission system processor on the aircraft—an effort that is already underway. These two KPPs are related to the detection of targets and the aircraft’s ability to communicate with other assets. The USCG is replacing the mission system processor on its fixed-wing aircraft—including the HC-144A and C-27J—with a system used by the U.S. Navy and DHS’s Customs and Border Protection. The new mission system processor is intended to enhance operator interface and sensor management, and replace obsolete equipment. The USCG plans to operationally assess the new mission system processor during operational testing of the C-27J, which is scheduled to begin in fiscal year 2021. The USCG still faces challenges in transitioning the C-27J into the USCG fleet. In March 2015, GAO found that the successful and cost-effective fielding of the C-27J aircraft is contingent on the USCG’s ability to address risk areas including purchasing spare parts and accessing technical data, among other issues. According to USCG officials, the program continues to face challenges purchasing spare parts and accessing technical data. The program is reliant on the aircraft’s original equipment manufacturer for about 35 percent of spare C-27J parts.
For other parts, USCG officials said that the USCG continues to look for ways to provide the same or similar parts for the aircraft at a faster rate, and the USCG plans to award contracts to two additional manufacturers in calendar year 2018. USCG officials stated that retrieving technical data for the C-27J aircraft remains a challenge, but the USCG is working with the Department of Defense to obtain rights to data currently owned by the original equipment manufacturer. USCG officials said that once the USCG receives appropriate rights to C-27J technical data, it can begin modification of the aircraft. The USCG also plans to purchase the same surface search radar used on the HC-144A or the HC-130J for the C-27J, which will give the USCG some commonality in maintenance, logistics, and training for this aspect of the aircraft. In October 2017, USCG officials told GAO that the program’s staffing is adequate and the gap has not negatively affected the program. USCG officials stated that the program remains on track to meet the cost, schedule, and performance goals outlined in its current APB and that they monitor APB key parameters in accordance with DHS guidance. These officials added that market research continues to increase supply chain sources and to identify products for new mission systems. USCG officials also provided technical comments, which GAO incorporated as appropriate. NATIONAL SECURITY CUTTER (NSC) UNITED STATES COAST GUARD (USCG) The USCG uses the NSC to conduct search and rescue, migrant and drug interdiction, environmental protection, and other missions. The NSC replaces the USCG’s High Endurance Cutters and provides improved capabilities. The NSC carries helicopters and cutter boats, provides an extended on-scene presence at forward deployed locations, and operates worldwide. Follow-on operational testing began in October 2017, but cybersecurity testing was delayed.
The USCG is conducting a study to determine the root cause of propulsion system issues. GAO last reported on this program in March and April 2017 (GAO-17-218, GAO-17-346SP). In November 2017, Department of Homeland Security (DHS) leadership approved a revised acquisition program baseline (APB), which accounted for the addition of a ninth NSC to the program of record. The USCG originally planned to acquire only eight NSCs; however, in the Consolidated Appropriations Act of 2016, Congress directed that not less than $640 million be immediately available and allotted to contract for the production of a ninth NSC. In December 2016, the USCG awarded a contract to produce the ninth NSC and, as of November 2017, six NSCs had been delivered and three were under construction. The USCG anticipates delivery of the ninth NSC in September 2020, which coincides with the program’s prior APB threshold date for full operational capability (FOC). However, the revised APB extends this date by 1 year to account for any risks in delivering the additional ship. The program’s FOC date previously slipped 4 years, which USCG officials attributed to funding shortfalls, among other things. The ninth NSC contributed to a $453 million and $123 million increase in the program’s APB cost thresholds for acquisition and operations and maintenance (O&M), respectively. However, the program’s revised life-cycle cost estimate (LCCE) is still lower than its initial estimate for eight ships, which USCG officials attribute to more accurate estimates. The revised LCCE also included costs for several design changes the USCG has had to implement on equipment with known issues. As of September 2017, 12 equipment systems required design changes, at an estimated cost of over $260 million. This work includes structural enhancement work on the first two NSCs and the replacement of the gantry crane, which aids in the deployment of cutter boats.
The affordability gap from fiscal years 2018 to 2022 may be overstated because—as we found in April 2015—DHS’s funding plan to Congress does not contain O&M funding for USCG programs. USCG officials anticipate receiving approximately $2.1 billion in O&M funding over this 5-year period to cover the NSC’s estimated $1.8 billion in O&M costs, but stated that the USCG will refine its annual budget request based on the program’s needs each year. The USCG also identified carryover funding to cover the projected acquisition funding shortfall in fiscal year 2018. The DHS Under Secretary for Management (USM) also directed the USCG to complete a study to determine the root cause of the NSC’s propulsion system issues by December 2017; however, as of January 2018, the study was not yet complete. GAO previously reported on these issues—including high engine temperatures, cracked cylinder heads, and overheating generator bearings that were impacting missions—in January 2016. The NSC program does not have any critical staffing vacancies. However, in July 2017, the program reported that the greatest staffing challenge is a potential extension to the program’s end date if the USCG acquires more than 9 NSCs. If this occurs, the program office must reassess future staffing requirements to ensure adequate program oversight continues until the last NSC completes post-delivery activities. In addition, the USCG has made changes to its staffing model for operating the NSCs. The USCG initially planned to implement a crew rotational concept in which crews would rotate while NSCs were underway to achieve a goal of 230 days away from the cutter’s homeport. In February 2018, USCG officials told GAO they abandoned the crew rotational concept because the concept did not provide the USCG with the expected return on investment.
Instead, USCG officials said a new plan has been implemented that does not rotate crews and is anticipated to increase the days away from homeport from the current capability of 185 days to 200 days. USCG officials stated that NSCs had a record year of narcotics seizures in 2017. In addition to the test activities identified in this assessment, USCG officials stated that the first follow-on operational test and evaluation (OT&E) event was completed in December 2017 and the first cybersecurity test event is scheduled for February 2018. They also noted that the shipbuilder continues to show improving cost performance and is completing construction within budget. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. OFFSHORE PATROL CUTTER (OPC) UNITED STATES COAST GUARD (USCG) The USCG plans to use the OPC to conduct patrols for homeland security, law enforcement, and search and rescue operations. The OPC is being designed for long-distance transit, extended on-scene presence, and operations with deployable aircraft and small boats. It is intended to replace the USCG’s aging Medium Endurance Cutters (MEC) and bridge the operational capabilities provided by the Fast Response Cutters and National Security Cutters (NSC). Program plans to refine the ship’s design, as needed, based on early operational assessment results. Program’s acquisition strategy incorporated some best practices. GAO last reported on this program in April and June 2017 (GAO-17-346SP, GAO-17-654T). According to USCG officials, the OPC program is on track to meet its cost and schedule goals. In September 2014, Department of Homeland Security (DHS) leadership approved the program’s current acquisition program baseline (APB), which accounts for schedule slips resulting from delays in awarding the program’s initial contracts and a subsequent bid protest. The USCG expects to start construction of the first OPC in fiscal year 2019 and procure a total of 25 ships.
The USCG plans to initially fund one OPC per year and eventually two OPCs per year until all 25 OPCs are delivered. USCG officials have stated that additional OPC delays will decrease the USCG’s operational capacity because the MECs will likely require increased downtime for maintenance and other issues, reducing their availability. In January 2016, DHS leadership directed the USCG to revise the OPC life-cycle cost estimate (LCCE) and submit it for approval within 6 months of awarding the detailed design and construction contract for the ships—which the USCG subsequently awarded in September 2016. In June 2017, the program submitted an updated LCCE to inform the budget process that—while not approved by DHS leadership—accounts for the contract award and the program’s schedule slips. As of December 2017, the program’s revised LCCE still had not been approved. It is unclear whether it will address other issues, such as an increase in the estimated weight of each ship. The OPC’s initial LCCE was based in large part on the estimated weight of each ship. However, in November 2017, USCG officials said the ship is expected to weigh up to 35 percent more than originally estimated. Nevertheless, USCG officials expect to procure all 25 OPCs for the program’s APB objective cost of $10.5 billion because the contractor identified cost efficiencies to compensate for the increased weight. GAO previously raised questions about the OPC’s affordability and its effect on other USCG acquisition programs, such as the Heavy Polar Icebreaker. Specifically, GAO noted that the OPC procurement will consume about two-thirds of the USCG’s planned acquisition budget between fiscal years 2018 and 2032 based on recent funding history. The program’s affordability gap from fiscal years 2020 to 2022 may be overstated because—as we found in April 2015—DHS’s funding plan to Congress does not report operations and maintenance (O&M) funding for USCG programs. 
USCG officials anticipate receiving $103 million in O&M funding over this period. The USCG plans to conduct initial operational test and evaluation (OT&E) on the first OPC in fiscal year 2023. However, the test results from initial OT&E will not be available to inform key decisions. For example, the results will not be available to inform the decision to build 2 OPCs per year—which USCG officials said is scheduled to begin in fiscal year 2021. Without test results to inform these key decisions, the USCG must make substantial commitments prior to knowing how well the ship will meet its requirements. The USCG is in the process of completing the design of the OPC before starting construction, which is in line with GAO shipbuilding best practices. In addition, USCG officials stated that the program is using state-of-the-market technology that has been proven on other ships as opposed to state-of-the-art technology, which lowers the risk of the program. The USCG used a two-phased down-select strategy to select a contractor to deliver the OPC. For phase 1, the USCG conducted a full and open competition and selected three contractors to perform preliminary design work. For phase 2, the USCG selected one of the phase 1 contractors—Eastern Shipbuilding—to develop a detailed design of the OPC and construct no more than the first 11 ships. The contract—worth approximately $110 million—includes separate options for each ship. The options for ships 10 and 11 were unpriced and included in the solicitation as an incentive to convert the contract type from fixed price incentive to firm fixed price. These options will be included in a repricing proposal submitted by the contractor for ships 6-9 after delivery of the first ship. USCG officials have stated the USCG will decide whether to exercise the option for ships 10 and 11 based on the contractor’s repricing proposal for ships 6-9.
The USCG plans to re-compete the contract for the remaining 14-16 ships. The OPC program continued to increase its required staffing level, and the USCG reported that adjustments to staffing will continue as the program matures. The program faces staffing shortages, including engineers, a logistics manager, and a technical director, but USCG officials said they are hiring staff to address these gaps. USCG officials stated that the OPC program is fully funded, executable, and on track to award construction for the first OPC in September 2018. These officials said design efforts are on track and the contractor is meeting the milestones to deliver the first OPC in 2021. USCG officials noted that they are continuing to increase staff at the contractor’s facility to prepare for the start of construction for the first OPC. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. UNITED STATES CITIZENSHIP AND IMMIGRATION SERVICES (USCIS) The Transformation program was established in 2006 to transition USCIS from a fragmented, paper-based filing environment to a consolidated, paperless environment for processing immigration and citizenship applications. The program developed a new system architecture and delivers capability through releases that correspond to new product lines within four lines of business: Citizenship, Immigrant, Non-Immigrant, and Humanitarian. Revision of key performance parameters and test and evaluation master plan in progress. Program is reorganizing to leverage expertise within USCIS and revise its approach. GAO last reported on this program in April 2017 and July 2016 (GAO-17-346SP, GAO-16-467). The program remains in breach of its current acquisition program baseline (APB). In September 2016, the Transformation program experienced a schedule breach when it failed to complete deployment of all the product lines associated with the Citizenship line of business.
The deployment was delayed because of challenges processing new product lines on the new system architecture and other technical issues with the case management system. Prior to the breach, the program deployed six product lines, which supported approximately 24 percent of the total workload processed by USCIS in fiscal year 2016. Department of Homeland Security (DHS) leadership previously re-baselined the program in April 2015 after USCIS determined that it could not use any of the architecture delivered under its initial strategy, despite having invested more than $475 million in its development. In December 2016, DHS leadership directed USCIS to stop planning and development for new product lines, develop a breach remediation plan, and update its acquisition documentation. In February 2017, DHS leadership approved the program’s remediation plan, and the program has since made progress in implementing this plan. However, DHS leadership elected to continue with the program’s pause in new development following program reviews in March 2017, July 2017, and October 2017. USCIS officials said they are revising the program’s acquisition documents—including its APB and life-cycle cost estimate (LCCE)—and plan to re-baseline by March 2018. The program updated the total costs in its LCCE to inform the budget process, but these costs do not reflect the program’s re-baselining plans. As a result, the status of the program against its cost and schedule goals is unclear. However, the program is more than 3 years past its original full operational capability (FOC) date. The affordability gap from fiscal years 2018 to 2022 may be overstated because DHS’s funding plan to Congress no longer contains operations and maintenance funding for individual programs. USCIS uses revenue from premium processing fees to fund the Transformation program and routinely collects more fees than the program’s estimated costs.
In September 2017, USCIS officials told GAO that the program is updating its key performance parameters (KPP) and test and evaluation master plan as part of its re-baselining efforts because the program continues to struggle to meet its requirements. DHS leadership previously approved a revised set of eight KPPs for the program in April 2015. However, USCIS could not fully demonstrate these KPPs until it achieved FOC. In the interim, the program’s operational test agent (OTA) conducted operational assessments (OA) of new product lines as capability was deployed. The OTA has completed two OAs since the program updated its KPPs, but DHS’s Director, Office of Test and Evaluation (DOT&E) did not verify all of the results. DOT&E reviewed the results of the first OA and concluded that the system met 6 of the 7 tested KPPs, but noted that the capability assessed was a minor subset of the system’s FOC. The OTA subsequently initiated an OA intended to inform DHS leadership’s acceptance of the Citizenship line of business. However, in December 2017, USCIS officials reported that the assessment had not yet been completed. USCIS officials told GAO that the program office underwent a reorganization in January 2017 to help address the program’s recent challenges. This effort included dismantling the program office and repositioning Transformation under the USCIS Office of Information Technology so the program could leverage expertise in areas such as engineering within USCIS. USCIS officials reported that the program no longer plans to deliver capability by product lines because this strategy focused too narrowly on the automation of forms associated with the lines of business. Going forward, USCIS officials said the program plans to develop capabilities that will address broader objectives, such as reducing the time it takes to process applications and decisions. The program previously made significant changes after it experienced a 5-month delay with its first release, which was deployed in May 2012.
DHS attributed this delay to weak contractor performance and pursuing an unnecessarily complex system, among other things. To address these issues, the Office of Management and Budget, DHS, and USCIS determined the program should implement a new acquisition strategy, which allowed for an agile software development methodology and increased competition for development work. This strategy was reflected in the program’s April 2015 re-baseline. USCIS officials told GAO that they plan to address the Transformation program’s staffing gap now that the reorganization is complete. USCIS officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. Appendix II: Key Portfolio Management Practices To help determine the extent to which the Department of Homeland Security (DHS) has taken actions to enhance its policies and processes to better reflect key portfolio management practices, we assessed the department’s requirements, acquisition management, and resource allocation policies using key practices we established in September 2012. These key practices are based on our past work, in which we examined the practices that private sector entities use to achieve a balanced mix of new projects and found that successful commercial companies use a disciplined and integrated approach to prioritize needs and allocate resources. As a result, these organizations can avoid pursuing more projects than their resources can support and better optimize the return on their investments. This approach, known as portfolio management, requires companies to view each of their investments as contributing to a collective whole, rather than as independent and unrelated. Appendix III: Objectives, Scope, and Methodology The objectives of this audit were designed to provide congressional committees insight into the Department of Homeland Security’s (DHS) major acquisition programs. 
We assessed the extent to which (1) DHS's major acquisition programs are on track to meet their schedule and cost goals and (2) DHS has taken actions to enhance its policies and processes to better reflect key portfolio management practices. To answer these questions, we reviewed 28 of DHS's 79 major acquisition programs. We reviewed all 16 of DHS's Level 1 acquisition programs—those with life-cycle cost estimates (LCCE) of $1 billion or more—that had at least one project, increment, or segment in the Obtain phase—the stage in the acquisition life cycle when programs develop, test, and evaluate systems—at the initiation of our audit. Additionally, we reviewed 12 other major acquisition programs—including 8 Level 1 programs that either had not yet entered or were beyond the Obtain phase, and 4 Level 2 programs with LCCEs of at least $300 million but less than $1 billion—that we identified as at risk of not meeting their cost estimates, schedules, or capability requirements based on our past work and discussions with DHS officials. Specifically, we met with representatives from DHS's Office of Program Accountability and Risk Management (PARM)—DHS's main body for acquisition oversight—as a part of our scoping effort to determine which programs (if any) were facing difficulties in meeting their cost estimates, schedules, or capability requirements. The 28 selected programs were sponsored by eight different components, and they are identified in table 7, along with our rationale for selecting them. To determine the extent to which DHS's major acquisition programs are on track to meet their schedule and cost goals, we collected key acquisition documentation for each of the 28 programs, such as all LCCEs and acquisition program baselines (APB) approved at the department level since DHS's current acquisition management policy went into effect in November 2008.
DHS policy establishes that all major acquisition programs should have a department-approved APB, which establishes a program's critical cost, schedule, and performance parameters, before they initiate efforts to obtain new capabilities. Twenty-four of the 28 programs had one or more department-approved LCCEs and APBs between November 2008 and December 31, 2017. We used these APBs to establish the initial and current cost and schedule goals for the programs. We then developed a data collection instrument to help validate the information from the APBs and collect similar information from programs without department-approved APBs. Specifically, for each program, we pre-populated a data collection instrument to the extent possible with the schedule and cost information we had collected from the APBs and our 2017 assessment (if applicable) to identify schedule and cost goal changes, if any, since (a) the program's initial baseline was approved and (b) January 2017—the data cut-off date of the report we issued in April 2017. We shared our data collection instruments with officials from the program offices to confirm or correct our initial analysis and to collect additional information to enhance the timeliness and comprehensiveness of our data sets. We then met with program officials to identify causes and effects associated with any identified schedule and cost goal changes. Subsequently, we drafted preliminary assessments for each of the 28 programs, shared them with program and component officials, and gave these officials an opportunity to submit comments to help us correct any inaccuracies, which we accounted for as appropriate (such as when new information was available). Additionally, in July 2017, we collected copies of the detailed data on affordability that programs submitted to inform the fiscal year 2019 resource allocation process. We also collected copies of any annual LCCE updates programs submitted in fiscal year 2017.
For each of the 24 programs with a department-approved APB, we compared (a) the most recent cost data we collected (i.e., a department-approved LCCE, the detailed LCCE information submitted during the resource allocation process, a fiscal year 2017 annual LCCE update, or an update provided by the program office) to (b) DHS's funding plan presented in the Future Years Homeland Security Program (FYHSP) report to Congress for fiscal years 2018–2022, which presents 5-year funding plans for DHS's major acquisition programs, to assess the extent to which a program was projected to have an acquisition funding gap in fiscal year 2018. Through this process, we determined that our data elements were sufficiently reliable for the purpose of this engagement. The FYHSP reports information by the department's new common appropriation structure, which created standard appropriation fund types including (1) procurement, construction, and improvements and (2) operations and support. We refer to these types of funding as (1) acquisition and (2) operations and maintenance throughout this report. To determine the extent to which DHS has taken actions to enhance its policies and processes, we assessed the department's requirements, acquisition management, and resource allocation policies against the key portfolio management practices—which are listed in appendix II—and identified any significant shortfalls. Specifically, we assessed the joint requirements directives and instruction manual; DHS's Acquisition Management Directive 102-01, Acquisition Management Instruction 102-01-001, and other related guidance; and DHS's resource allocation directive, instruction, and handbook. First, we assessed each group of policies against the key practices using the following ratings:

Met—the documents fully reflected the key practice.
Partially met—the documents reflected some, but not all parts of the key practice.
Not met—the documents did not reflect the key practice.
We shared our preliminary analysis for each group of policies with the DHS officials responsible for implementing them—specifically, the Joint Requirements Council (JRC), PARM, and the Office of Program Analysis and Evaluation (PA&E)—to discuss our findings, identify relevant sections of the documents we had not yet accounted for, and solicit their thoughts on those key practices that were not reflected in the policies. Second, we used the scores for each group of policies to develop a department-wide rating for each key practice. When applicable, we weighted the department-wide rating based on the intent of the key practice. For example, the department-wide rating for the key practice related to resource allocation across the portfolio was based more heavily on the rating for the resource allocation policies, rather than the ratings for the requirements or acquisition management policies. Third, we rolled up the ratings for all the key practices in a particular area—as identified in appendix II—to establish a department-wide overall rating for each key practice area. We concluded that a key practice area was met if all ratings for the individual key practices in that area were met; partially met if the ratings for the individual key practices in that area were all partially met or a mix of met and not met; or not met if the ratings for the individual key practices in that area were all not met. In addition, we reviewed documentation that resulted from DHS's requirements, acquisition management, and resource allocation processes since January 2016 to get a sense of how the department has implemented its current policies.
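The roll-up rule described above is mechanical: an area is met only if every individual rating is met, not met only if every rating is not met, and partially met in every other case (all partially met, or any mix). A minimal sketch restating that rule—illustrative only, not GAO's actual tooling:

```python
# Sketch of the roll-up rule described above: individual key-practice
# ratings ("met", "partially met", "not met") combine into an overall
# rating for a key practice area. Illustrative restatement only.

def area_rating(ratings):
    """Overall rating for a key practice area from its individual ratings."""
    if all(r == "met" for r in ratings):
        return "met"
    if all(r == "not met" for r in ratings):
        return "not met"
    # All partially met, or any mix (e.g., met plus not met)
    return "partially met"

overall = area_rating(["met", "partially met", "met"])  # one area's ratings
```

The same predicate-based structure would apply however many key practices fall in an area, which matches the report's description of rolling up ratings area by area.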
For example, we reviewed JRC-validated requirements documents; acquisition decision memorandums; Acquisition Program Health Assessment reports; and documentation related to the development of DHS's fiscal year 2018 budget request and the fiscal year 2018–2022 FYHSP report, including resource allocation guidance, presentations to DHS leadership, and preliminary decisions. We also interviewed officials from the JRC, PARM, PA&E, and the Deputy's Management Action Group to identify any current and planned initiatives to improve management of the department's portfolio of major acquisition programs. We then compared our assessment of DHS's current policies, practices, and planned initiatives to our previous findings and the Standards for Internal Control in the Federal Government. We conducted this performance audit from March 2017 through May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix IV: Comments from the Department of Homeland Security

Appendix V: GAO Contact and Staff Acknowledgments

In addition to the contact listed above, Rick Cederholm (Assistant Director), Aryn Ehlow (Analyst-in-Charge), Pete Anderson, Lorraine Ettaro, Helena Johnson, TyAnn Lee, Alexis Olson, Sylvia Schatz, Roxanna Sun, and Lindsay Taylor made key contributions to this report. Other contributors included Mathew Bader, Carissa Bryant, Andrew Burton, Erin Butkowski, Lisa Canini, Jenny Chow, John Crawford, Lindsey Cross, Laurier R.
Fish, Betsy Gregory-Hosler, Claire Li, Sarah Martin, Marycella Mierez, Erin O'Brien, Katherine Pfeiffer, John Rastler, Ashley Rawson, Andrew Redd, Jill Schofield, Charlie Shivers III, and Jeanne Sung.

Related GAO Products

DHS Program Costs: Reporting Program-Level Operations and Support Costs to Congress Would Improve Oversight. GAO-18-344. Washington, D.C.: April 25, 2018.
Homeland Security Acquisitions: Identifying All Non-Major Acquisitions Would Advance Ongoing Efforts to Improve Management. GAO-17-396. Washington, D.C.: April 13, 2017.
Homeland Security Acquisitions: Earlier Requirements Definition and Clear Documentation of Key Decisions Could Facilitate Ongoing Progress. GAO-17-346SP. Washington, D.C.: April 6, 2017.
Coast Guard Cutters: Depot Maintenance Is Affecting Operational Availability and Cost Estimates Should Reflect Actual Expenditures. GAO-17-218. Washington, D.C.: March 2, 2017.
Homeland Security Acquisitions: Joint Requirements Council's Initial Approach Is Generally Sound and It Is Developing a Process to Inform Investment Priorities. GAO-17-171. Washington, D.C.: October 24, 2016.
Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: March 31, 2016.
National Security Cutter: Enhanced Oversight Needed to Ensure Problems Discovered during Testing and Operations Are Addressed. GAO-16-148. Washington, D.C.: January 12, 2016.
TSA Acquisitions: Further Actions Needed to Improve Efficiency of Screening Technology Test and Evaluation. GAO-16-117. Washington, D.C.: December 17, 2015.
Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: April 22, 2015.
Coast Guard Aircraft: Transfer of Fixed-Wing C-27J Aircraft Is Complex and Further Fleet Purchases Should Coincide with Study Results. GAO-15-325. Washington, D.C.: March 26, 2015.
Homeland Security Acquisitions: DHS Should Better Define Oversight Roles and Improve Program Reporting to Congress. GAO-15-292. Washington, D.C.: March 12, 2015.
Coast Guard Acquisitions: Better Information on Performance and Funding Needed to Address Shortfalls. GAO-14-450. Washington, D.C.: June 5, 2014.
Homeland Security Acquisitions: DHS Could Better Manage Its Portfolio to Address Funding Gaps and Improve Communications with Congress. GAO-14-332. Washington, D.C.: April 17, 2014.
Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs. GAO-12-833. Washington, D.C.: September 18, 2012.
Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.
Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: November 18, 2008.
Why GAO Did This Study

Each year, DHS invests billions of dollars in a diverse portfolio of major acquisition programs to help execute its many critical missions. DHS's acquisition activities are on GAO's High Risk List, in part, because of management and funding issues. The Explanatory Statement accompanying the DHS Appropriations Act, 2015 included a provision for GAO to review DHS's major acquisitions. This report, GAO's fourth annual review, assesses the extent to which: (1) DHS's major acquisition programs are on track to meet their schedule and cost goals, and (2) DHS has taken actions to enhance its policies and processes to better reflect key practices for effectively managing a portfolio of investments. GAO reviewed 28 acquisition programs, including DHS's largest programs that were in the process of obtaining new capabilities as of April 2017, and programs GAO or DHS identified as at risk of poor outcomes. GAO assessed cost and schedule progress against baselines, assessed DHS's policies and processes against GAO's key portfolio management practices, and met with relevant DHS officials.

What GAO Found

During 2017, 10 of the Department of Homeland Security (DHS) programs GAO assessed that had approved schedule and cost goals were on track to meet those goals. GAO reviewed 28 programs in total, 4 of which were new programs that GAO did not assess because they did not establish cost and schedule goals before the end of calendar year 2017 as planned. The table shows the status of the 24 programs GAO assessed. Reasons for schedule delays or cost increases included technical challenges, changes in requirements, and external factors. Recent enhancements to DHS's acquisition management, resource allocation, and requirements policies largely reflect key portfolio management practices (see table). However, DHS is in the early stages of implementing these policies.
GAO identified two areas where DHS could strengthen its portfolio management policies and implementation efforts:

DHS's policies do not reflect the key practice to reassess a program that breaches—or exceeds—its cost, schedule, or performance goals in the context of the portfolio to ensure it is still relevant or affordable. Acquisition management officials said that, in practice, they do so based on a certification of funds memorandum—a tool GAO has found to be effective for DHS leadership to assess program affordability—submitted by the component when one of its programs re-baselines in response to a breach. Documenting this practice in policy would help ensure DHS makes strategic investment decisions within its limited budget.

DHS is not leveraging information gathered from reviews once programs complete implementation to manage its portfolio of active acquisition programs. DHS's acquisition policy requires programs to conduct post-implementation reviews after initial capabilities are deployed, which is in line with GAO's key practices. Acquisition management officials said they do not consider the results of these reviews in managing DHS's portfolio because the reviews are typically conducted after oversight for a program shifts to the components. Leveraging these results across DHS could enable DHS to address potential issues that may contribute to poor outcomes, such as schedule slips and cost growth, for other programs in its acquisition portfolio.

What GAO Recommends

GAO recommends DHS update its acquisition policy to require certification of funds memorandums when programs re-baseline as a result of a breach and assess programs' post-implementation reviews to improve performance across the acquisition portfolio. DHS concurred with GAO's recommendations.
gao_GAO-18-262
Background

Whistleblower Disclosure: A disclosure to certain bodies and individuals made by an employee who believes he or she has witnessed certain wrongdoing, such as gross mismanagement.

Reprisal Complaint: Following a disclosure, a complaint that an employee has experienced reprisal as a result of the disclosure, such as demotion or discharge.

For contractor and grantee employees at NASA, whistleblower protections have changed over time. For example, by statute, in 2007, NASA contractor employees were protected against reprisal if they disclosed information relating to a substantial violation of law related to a contract. However, in 2008, amendments to the whistleblower statute provided protections only to those contractor employees at NASA who reported "a substantial and specific danger to public health or safety." In 2013, the statute was amended again to include disclosures of gross mismanagement of a NASA contract or grant, a gross waste of Administration funds, an abuse of authority relating to a NASA contract or grant, or a violation of law, rule, or regulation related to a NASA contract or grant. In 2014, the statute was further amended, with the only significant change being to protect grantee and subgrantee employees. See table 1 for a detailed description of the 2008, 2013, and 2014 amendments to the statute.

Requirements under the Current Statute

Under the current statute, the NASA Office of Inspector General and Administrator have different responsibilities. Since the 2014 amendments, contractor, subcontractor, grantee, and subgrantee employees are protected from reprisal if they disclose to certain persons or bodies information they reasonably believe is evidence of gross mismanagement of a federal contract or grant, a gross waste of federal funds, an abuse of authority relating to a federal contract or grant, a substantial and specific danger to public health or safety, or a violation of law, rule, or regulation related to a federal contract or grant.
Additionally, contractor employees may make whistleblower disclosures to several entities, including a management official at the contractor. Figure 1 depicts the disclosure process and the complaint process. NASA OIG Role: Upon receiving a reprisal complaint, the OIG must evaluate whether the complaint is covered under the statute. In addition to the steps described in figure 1 for investigating complaints, there are instances when the OIG does not investigate. The OIG might not investigate for a variety of reasons, such as in cases where the complaint is already under investigation by another authority, such as another OIG, or otherwise does not allege a violation of the law, such as when the whistleblower disclosure does not constitute gross fraud, waste, abuse, or mismanagement. If the OIG determines the case is not covered under the statute, it may then notify the complainant that no further action will be taken on the reprisal complaint. Administrator Role: Upon receipt of the NASA OIG investigation report, the NASA Administrator (the head of the agency) has 30 days to determine whether the contractor made a prohibited reprisal and issue an order denying or granting relief. According to NASA officials, during the 30-day period after the agency head receives the OIG report, the agency practice has been to ask the OIG for any additional investigative work and also afford the complainant and the contractor an opportunity to submit a written response to the OIG report. Any person adversely affected or aggrieved by the Administrator's order may, within 60 days of issuance, obtain a limited review by a U.S. circuit court of appeals. Agency Procurement Official Role: Under the NFS regulations, NASA contracting officers are also responsible for inserting an NFS whistleblower clause into applicable contracts that requires contractors to communicate to their employees their rights under the statute.
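Whether a contract is "applicable" for the clause turns on two facts the report lays out: the award date (on or after July 29, 2014) and the contract value (over the simplified acquisition threshold). A hedged sketch of that predicate follows; the $150,000 threshold figure is an illustrative assumption (it was the generally applicable threshold during the period reviewed), not a value stated in this report:

```python
from datetime import date

# Sketch of the clause-applicability rule: contracts over the simplified
# acquisition threshold awarded on or after July 29, 2014, require the
# NFS whistleblower clause. The threshold dollar value below is an
# illustrative assumption, not a figure from the report.

SIMPLIFIED_ACQUISITION_THRESHOLD = 150_000  # assumed illustrative value
CLAUSE_EFFECTIVE_DATE = date(2014, 7, 29)

def requires_whistleblower_clause(award_date, contract_value):
    """True if the contract must include the NFS whistleblower clause."""
    return (award_date >= CLAUSE_EFFECTIVE_DATE
            and contract_value > SIMPLIFIED_ACQUISITION_THRESHOLD)
```

A check like this is what an automated contract writing system can enforce mechanically; the report later notes that NASA's former template-based system left this insertion to a manual step.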
The NFS whistleblower clause lays out the responsibility of contractors to communicate to their employees their rights under the statute, in writing and in their predominant native language. All contracts over the simplified acquisition threshold awarded on or after July 29, 2014, require a whistleblower clause. The statute also requires NASA to make best efforts to include a clause providing for the applicability of the 2013 amendments in contracts awarded before July 1, 2013—the effective date of the 2013 amendments—that have undergone major contract modifications. The terms "best efforts" and "major modifications" are not defined in the statute. Unlike provisions affecting contractors, the statute does not require NASA to ensure that grantees or subgrantees notify employees in writing of their rights under the statute.

NASA OIG Investigated Reprisal Complaints within Required Time Frames and Recently Updated Incomplete Guidance

From 2008 to June 2017, NASA OIG addressed whistleblower reprisal complaints within required time frames, according to OIG officials. At the time we initiated this review, the OIG's guidance for handling reprisal complaints had been updated to reflect most statutory changes; however, it did not include guidance regarding subgrantees. During the course of our review, the OIG updated the investigation guidance in October 2017 to include subgrantee employees.

NASA OIG Handled Reprisal Complaint Investigations within Required Time Frames

NASA OIG completed 6 reprisal investigations within required time frames. The OIG received 277 whistleblower disclosures leading to 48 reprisal complaints from 2008 through June 2017, and handled those complaints within required time frames, according to OIG officials. For the 6 reprisal complaints that were investigated, the OIG used extensions.
OIG officials said that extensions may be necessary for a number of reasons, including that the complaint may be highly technical in nature, requiring the OIG to find subject matter expertise to better understand the nature of the whistleblower complaint and whether it constitutes gross fraud, waste, abuse, or mismanagement. When the OIG receives a reprisal complaint, complainants are asked to fill out a whistleblower complaint form and an investigation is initiated. See figure 2 below for the process by which the OIG conducts its investigations. In addition, there were 5 complaints currently under investigation and 37 complaints during this time frame that the NASA OIG did not investigate because the OIG deemed them to be frivolous, determined they were not covered under the statute, or the complaint was handled in another forum, such as the court system or by another OIG. Complaints were deemed frivolous for several reasons, including if the complainant did not want to disclose his or her identity and proceed with the claim, or the whistleblower disclosure happened after the reprisal. OIG officials told us that when cases are disposed of without an investigation, the OIG notifies the complainant of the decision in writing. Figure 3 shows the disposition of the 48 reprisal complaints received from 2008 through June 2017.

The OIG Updated Investigation Guidance

The OIG has developed guidance for conducting investigations, which includes a chapter on contractor and grantee whistleblower reprisal complaints. Although most changes to the statute (such as to whom reprisal may be reported) had been incorporated into the investigation guidance, the initial guidance provided to us by the OIG did not include the 2014 statutory requirement to extend protections to subgrantees. During the course of our review, in October 2017, the OIG updated its guidance for investigating reprisal complaints to include subgrantee employees.
Because subgrantees are now protected by statute, including them in the investigation policy will help ensure they are consistently extended protections through OIG investigations. In addition to its guidance, OIG officials said they have developed training specific to whistleblower investigations for new investigators and have conducted internal training for investigators and external training for contractor employees. Additionally, the OIG Investigators' Central Field Office conducts periodic training for investigators that includes any updates to whistleblower protections. With regard to external training, OIG officials said that investigators at some of the NASA centers—those with the largest contract activity—have conducted on-site training for some contractors. This training is conducted as part of general fraud awareness training.

NASA Administrator Did Not Meet the Required Time Frames for Reprisal Complaint Review

The Administrator failed to meet the required review time frame and issue an order of final determination of reprisal for 5 completed investigations received from the OIG from 2008 through June 2017. In all 5 cases, the Administrator took longer than 30 days to issue an order. For one of those 5 complaints, an official from the Office of General Counsel was unable to provide us with an issued order, said he did not believe one had been completed, and could not explain why. For the 5 reprisal complaints, figure 4 shows the number of days from when the Administrator received an OIG report of findings to the time when an order of final determination was documented. In addition to the 5 complaints mentioned above, there was another OIG investigation of a reprisal complaint that did not require a response from the Administrator within 30 days, but was finalized within our review time frame, for a total of 6 completed OIG investigations of reprisal complaints.
For 3 of the 6 complaints, the OIG found that reprisal had occurred and reported those findings to the Administrator for final determination of reprisal. The Administrator determined that none of these 3 complaints qualified for protection under the law. For 2 of these complaints, the Administrator found that they did not qualify for protections because they fell under the 2008 version of the statute and failed to allege a violation specific to public health and safety. In 2017, a court affirmed the Administrator's position. For the third complaint, the Administrator determined reprisal could not be substantiated because the complainant did not meet the standards of evidence under the statute. Further, we found that NASA does not have a standard process in place for the Administrator to review cases that qualify for protections under the statute and issue an order of final determination. According to an official from the Office of General Counsel, the agency has no standard process to ensure the contractors are afforded due process, among other things. The official from the Office of General Counsel said the Administrator may need to conduct an additional investigation in some cases. He said that each case is different and would have to be handled on a case-by-case basis. In addition, the official said the Administrator may need to conduct hearings, independent of the OIG. Moreover, the official from the Office of General Counsel highlighted concern that the Administrator's office does not have the resources to conduct additional investigative work, which he said is a key contributor to the office's inability to meet the 30-day timeline to issue an order of final determination. Despite acknowledging these challenges, the Administrator does not have a formal process or criteria to monitor and evaluate the way the office handles issuing an order of final determination of reprisal to ensure that it meets the statutory time requirements.
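The 30-day statutory clock discussed in this section lends itself to a simple mechanical check: count the days between receipt of the OIG report and issuance of the order, and compare against the limit. A minimal sketch follows; the dates are hypothetical, not drawn from the complaints in figure 4:

```python
from datetime import date

# Sketch of the 30-day statutory deadline check described above: the
# Administrator has 30 days from receipt of the OIG report to issue an
# order of final determination. Dates below are hypothetical.

def days_to_order(received, issued):
    """Calendar days from receipt of the OIG report to issuance of the order."""
    return (issued - received).days

def met_deadline(received, issued, limit=30):
    """True if the order was issued within the statutory limit."""
    return days_to_order(received, issued) <= limit

elapsed = days_to_order(date(2016, 3, 1), date(2016, 5, 10))   # 70 days
on_time = met_deadline(date(2016, 3, 1), date(2016, 5, 10))    # False
```

Tracking this metric for every complaint is the kind of monitoring activity that the internal control standards cited below contemplate.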
Because the Administrator took longer than 30 days to respond to all reprisal complaints, including one where the Office of General Counsel failed to provide evidence that the Administrator responded at all, future whistleblowers who fear their complaints will not be handled in a timely manner may be discouraged from making disclosures. Internal control standards require that management establish and operate monitoring activities to monitor the internal control system, evaluate the results, and take appropriate action to address issues on a timely basis. Without monitoring, evaluating, and taking appropriate corrective action based on the way the Administrator or his or her designee makes a final determination of reprisal, there is no assurance that whistleblower reprisal complaints will be handled within required time frames in the future.

NASA Almost Always Communicates Whistleblower Protections to Contractors, but Internal Guidance Is Unclear

In almost all of the contracts we reviewed, NASA had met its obligation to ensure its contractors are communicating whistleblower protections to their employees through a whistleblower contract clause. We also found that NASA has put in place guidance to its contracting workforce on the protections, and guidance on when to include the whistleblower clause in contracts. However, we found that some NASA officials have interpreted this guidance differently. Further, NASA's guidance does not reflect an agency-wide policy on when to include the whistleblower clause when modifying a contract.

Whistleblower Clause Included in Almost All Contracts, but a Few Were Missing the Required Clause

In most cases, NASA included a clause regarding whistleblower reprisal protections in applicable contracts to ensure contractors communicate rights to their employees. But we found that the clause was not included in all relevant contracts in our review.
Based on our review of a generalizable sample of contracts, we estimate that 98 percent of applicable contracts awarded in 2016 included a whistleblower clause at the time of award, and 2 percent did not. Within our sample, 4 contracts did not have a whistleblower reprisal clause. After we shared our contract file review findings with NASA officials, they modified 3 of the 4 contracts to include the missing required whistleblower clause. For the remaining contract, the contractor's performance was complete, the contract had been closed, and no further action will be taken. According to NASA procurement officials, human error, combined with its former contract writing system, could explain why the contracts in our sample did not have the required clause. They explained that the former contract writing system relied on templates and did not automatically include the NFS clause in all contracts. Under this system, contracting officers used templates that included a list of all potential or applicable NFS and FAR clauses, which are incorporated through a manual process. Officials said that if a clause were included in the templates, it is unlikely that it would be removed because doing so would require supervisory approval. NASA procurement officials told us that the agency launched a new contract writing system in June 2017. They said that under the new contract writing system, contracting officers use a logic system that prepopulates each contract with required clauses. They said that the new automated system will likely lead to fewer human errors because inserting the clause will not be a manual process. Because the new system is still being implemented, we were unable to evaluate whether the risk of human error has been reduced or eliminated to ensure applicable contract awards have the clause.
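The 98 percent figure above is a point estimate from a sample: the share of sampled contracts that contained the clause, projected to the population. A hedged sketch of the underlying arithmetic follows, with a simple normal-approximation margin of error; the sample size used here is a hypothetical illustration, since the report does not state it:

```python
import math

# Sketch of the point-estimate arithmetic behind a generalizable sample:
# the share of sampled contracts containing the clause, with a simple
# normal-approximation margin of error. The sample size (200) is a
# hypothetical illustration; the report does not state it.

def clause_rate(with_clause, sample_size):
    """Sample proportion of contracts containing the clause."""
    return with_clause / sample_size

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = clause_rate(196, 200)   # 4 of 200 hypothetical contracts lacked it
moe = margin_of_error(p_hat, 200)
```

GAO's actual estimation for a stratified generalizable sample would be more involved than this one-line proportion, so the sketch only illustrates why 4 missing contracts can correspond to a 2 percent population estimate.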
Under the previous and current systems, NASA contracts are to undergo various levels of review prior to award—including attorney review—at the centers or headquarters based on risk level and dollar thresholds. For example, contracts awarded by the Johnson Space Center (JSC) valued at over $50 million are to be reviewed by headquarters. NASA procurement officials stated that they conduct procurement management reviews, and centers conduct annual self-assessments; however, at one center, officials pointed out that these reviews have not previously included whether a whistleblower clause is included in new contracts or major modifications. They said this is because reviews typically focus on known issues or program risk, and inclusion of the whistleblower clause had not been previously identified as an issue or risk. Contractors we spoke with were generally aware of their responsibilities to communicate reprisal protections to their employees because their contracts with NASA included the required NFS whistleblower clause. In response to our review, NASA procurement officials said they plan to include the NFS whistleblower clause as an area of emphasis in future compliance reviews and will instruct centers to check whether the clause is included in applicable contracts as part of the centers' self-assessment process.

NASA's Guidance Contributes to Different Understandings of Reprisal Protections

Three elements of NASA's whistleblower reprisal protection guidance—its procurement notice, NFS clause, and definition of major modification—contribute to potential confusion or inconsistent application of whistleblower reprisal protections. First, in July 2014, NASA notified its contracting officials of the changes to the NFS required by the 2013 amendments to section 2409 of title 10 of the U.S. Code through procurement notice 04-80, but this notice has been interpreted differently by officials in NASA Headquarters, a NASA center, and the OIG.
Procurement notices are drafted by NASA Headquarters and reviewed and approved by NASA's general counsel and NASA's Office of Procurement. The NASA centers, acting through their procurement offices and, as needed, their legal counsel, review and implement the notices. After the procurement notice was issued, some NASA officials interpreted it differently. For example, in one instance, an attorney in a NASA center Chief Counsel's office advised a center procurement official that reprisal protections found in the 2013 amendments extend to contractors' employees working on contracts awarded before the effective date of the amendments. This is true, he said, regardless of whether the contract contains any clause explicitly making the 2013 amendments applicable. However, both the OIG and the Administrator's counsel have expressed a different understanding of the statute conveyed in the notice, stating that a clause making the 2013 amendments applicable must be in a contract in order for the complainant to be protected under the statute. Later, the attorney from the center Chief Counsel's office revised his understanding of the statute and concluded the procurement notice was not accurate as written. Second, NASA personnel have different understandings about whether the NFS clause is sufficient for contractor employees to be covered by the statute. The NFS clause instructs contractors to inform their employees in writing of contractor whistleblower employee rights; but, unlike the FAR clause that is used to implement similar legislation for other agencies, the NFS clause does not state that employees working on the contract are subject to the whistleblower rights and remedies. The attorney from a NASA center said that the NFS clause is enough to ensure contractor employees are given rights under the statute. 
However, OIG officials have said that without including that element of the clause, employees working under NASA contracts awarded prior to the effective date of the 2013 amendments may not be covered by those amendments. See table 2 for a description of the clauses and their differences. Third, the lack of agency-wide guidance for when to include the clause in major modifications leads to different implementation of the requirement. The 2013 amendments require that NASA make best efforts to include a whistleblower clause in contracts undergoing a major modification. NASA's July 2014 procurement notice also encourages contracting officers to include the NFS whistleblower clause when issuing major modifications to contracts awarded before July 29, 2014. However, it does not specify what a major modification is under this statute, and the statute itself does not define "major." According to NASA procurement officials at headquarters and at two NASA centers, it is at the discretion of the NASA centers' offices of procurement and contracting officers to decide whether a clause is inserted into modifications and whether those modifications are considered major. Procurement officials and the contracting officers we spoke with told us that there is no definition of major modification in the law, in regulation, or in NASA Headquarters or center policies or guidance. NASA procurement officials said this is because it could be different for each contract, and the contracting officer makes the determination based upon the facts related to the situation. Nevertheless, without communicating the factors to consider when determining whether a modification is major and whether that contract should or should not include the clause, NASA and the centers' procurement officials are at risk of incorporating the clause inconsistently among applicable contracts. 
One attorney in NASA's General Counsel's Office said there may be costs associated with asking a contractor to insert new clauses—such as the whistleblower clause—into an existing contract during a major modification because doing so would require a bilateral negotiation between the contractor and the agency. However, one contractor we spoke with said that there would be no cost to adding the clause and that doing so would not be an issue because the whistleblower clause is consistent with the internal whistleblower policies and practices of the institution. Further, he said that the institution he represents would be hesitant to argue against inclusion of the NFS clause in its contracts with NASA. Internal control standards require that an entity internally communicate the necessary quality information to meet the requirements of its mission. These three areas of potential confusion in NASA's current guidance could result in inconsistent application of the law unless they are clarified. NASA Has Not Established a Mechanism to Communicate Whistleblower Protections to Grantees Although whistleblower protections are now extended to grantee employees by statute, NASA does not have a mechanism in place to communicate the protections to grantee or subgrantee employees. Unlike the requirement for NASA to ensure contractors communicate whistleblower reprisal rights to their employees in writing and in the employees' predominant language, the statute does not prescribe a similar requirement for NASA to ensure that grantees communicate whistleblower reprisal rights to their employees. During the grant application process, NASA requires grantees to attest that they will not require grantee employees to sign confidentiality agreements that would prohibit them from reporting fraud, waste, and abuse. 
NASA officials said that grant awards do not include a mechanism, such as a term or condition, to encourage NASA grantees to notify their employees of their whistleblower reprisal protections. In the 10 NASA grants from fiscal year 2016 that we reviewed, no requirement was included for grantees to communicate these protections to employees. However, we found that all 10 grants included a statement that each award was subject to all applicable laws and regulations of the United States in effect on the date of award, including the Uniform Guidance. For federal grants in general, the Uniform Guidance provides a government-wide framework for grants management. Within this guidance, there is a reference to the whistleblower protections in the statute; however, it does not explicitly describe the statute's requirements. The grant advocacy group and representatives of three NASA grantees we spoke with were aware that some protections exist and noted that many grantees have their own whistleblower policies, but they were not aware of the specific protections provided by the statute, which indicates that opportunities exist for improving communication between NASA and its grantees about these protections. Further, representatives of the grant advocacy group noted that the whistleblower protections for grantee employees were not specifically mentioned at recent annual meetings where grantees and federal officials discuss issues that affect grantees. Internal control standards require that management externally communicate the necessary quality information to achieve the entity's objectives. Without additional communication about the protections provided by the statute, grantees may not fully understand or appreciate the significance of the rights afforded to their employees, and grantee employees may not be aware of their whistleblower reprisal protections, which could hinder their willingness to report instances of fraud, waste, and abuse. 
Conclusions Because contractor and grantee employee whistleblowers risk reprisal after disclosing potential fraud, waste, abuse, and mismanagement, ensuring they are protected from retaliation or adverse consequences is critical. Without monitoring and evaluating the timeliness of reviewing and responding to reprisal complaints, the Administrator may not be prepared to determine reprisal on future cases within the statutorily required 30 days. Additionally, although NASA has developed guidance related to contractor protections, this guidance has led to inconsistent interpretation of the law and could potentially lead to inconsistent application of how protections for contractor employees are conveyed. Clearer guidance would help contracting officers determine when to incorporate the NFS clause into major modifications and ensure consistency throughout the agency. Finally, because grants, unlike contracts, include no similar clause, NASA is in a position to help ensure grantees know their employees' rights against reprisal if they observe and disclose fraud, waste, abuse, or mismanagement. However, NASA has not effectively communicated information about these provisions to grantees, and as a result grantees and their employees may not be fully aware of these protections. Consequently, if they witness fraud, waste, abuse, or mismanagement, they may not be willing to disclose it for fear of reprisal. Recommendations for Executive Action: We are making three recommendations to NASA: The Administrator should monitor, evaluate, and make appropriate corrective actions, such as a documented process, to ensure NASA reviews reprisal complaints in a timely manner. (Recommendation 1) The Administrator should review NASA's guidance or develop other guidance, including defining major modification, to clarify when whistleblower protections are conveyed. 
(Recommendation 2) The Administrator should communicate whistleblower protections to grantees and subgrantees and their employees. (Recommendation 3) Agency Comments And Our Evaluation We provided a draft of this product to NASA for review and comment. NASA provided written comments on a draft of this report. In its written comments, reprinted in appendix II, NASA concurred with all three recommendations. In its response to our recommendations NASA agreed to develop and document a process to ensure it reviews reprisal complaints in a timely manner to ensure all parties’ due process rights are protected, review existing procurement policy and clarify guidance as appropriate, and update NASA grant guidance to communicate whistleblower protections to grantees, sub-grantees and their employees. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and members. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology To assess the extent to which National Aeronautics and Space Administration’s (NASA) Office of Inspector General (OIG) has investigated contractor and grantee whistleblower reprisal complaints and developed guidance for the investigations, we reviewed data provided by the NASA OIG on the total number of whistleblower allegations of fraud, waste, abuse, misconduct, or mismanagement and reprisal claims. 
We also reviewed the number of contractor and grantee employee whistleblower allegations and reprisal complaints provided by the OIG and the outcomes or decisions reached by the OIG of a reprisal complaint from fiscal years 2008 through 2017. We assessed the reliability of these data by asking the NASA OIG to describe the source(s) of information used and steps taken to identify the numbers provided, and limitations and caveats that would affect GAO’s use of the data—such as the data being self-reported by the OIG and Office of General Counsel. Based on these steps, we determined the data to be sufficient for our purposes of determining how the complaints were addressed. Additionally, we reviewed relevant documentation to assess the extent to which the NASA OIG was conducting investigations and communicating findings to the NASA Administrator within required time frames. To determine the extent NASA OIG developed guidance, we interviewed or obtained written answers from OIG officials about their processes and practices for investigating whistleblower reprisal complaints. We reviewed the guidance and training and other materials that NASA OIG uses to implement whistleblower protection investigations. We also visited Johnson Space Center (JSC)—selected because it had the highest number of reprisal cases from 2008 through 2017—to discuss policies and procedures specific to that center with OIG investigators and the OIG program manager for whistleblower protections. Because they are also a part of the Investigators’ Central Field Office, we also spoke with investigators at Marshall Space Flight Center and Kennedy Space Center. To assess the extent to which NASA’s Administrator meets the statutory timeliness requirements to review reprisal complaints, we reviewed the timeliness of the Administrator’s final determination to ensure that NASA was meeting statutory requirements. 
Specifically, we reviewed relevant documentation to assess the extent to which the Administrator was making final determination of reprisal in 30 days—the required review period specified by statute. We reviewed the Administrator’s documentation on the final disposition of reprisal investigations and compared the date of the Administrator’s final decision to the date of receipt of the reprisal investigations from the NASA OIG. We also conducted interviews with the Office of General Counsel, who spoke on behalf of the Administrator. To assess the extent to which NASA communicated the applicable whistleblower reprisal protections externally with contractors, we reviewed a generalizable sample of NASA contracts to determine the extent a required whistleblower clause was included. We used the Federal Procurement Data System-Next Generation (FPDS-NG) to generate a sample of contract actions over $300,000 that were awarded by NASA in fiscal year 2016. We selected contracts that were not only over the simplified acquisition threshold (generally $150,000), but were over $300,000 to account for possible exceptions and to ensure that we were sampling contracts that would be required to include a whistleblower reprisal clause. From the 270 contracts identified, for purposes of examining the inclusion of NASA Federal Acquisition Regulation Supplement (NFS) clause 1852.203-71 (or other potentially applicable clauses) in NASA contracts, a legal requirement, we selected a generalizable random sample of 100 contracts. The sample is projectable to NASA fiscal year 2016 contracts; however, we did not make a case by case legal determination for contracts not in our sample. We randomly selected 10 contracts from each center that awarded new contracts in 2016, and for those centers that did not have 10 contracts, we selected all contracts. The remaining contracts were then pulled from the NASA Shared Services Center (NSSC) because that center does a majority of NASA’s contracting. 
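The stratified selection described above (up to 10 contracts from each center, with the remainder drawn from NSSC) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not GAO's actual selection code; the center names and contract IDs in the usage example are hypothetical.

```python
import random

def select_contract_sample(contracts_by_center, total_sample=100,
                           per_center=10, fallback_center="NSSC"):
    """Stratified random selection: take up to `per_center` contracts from
    each center (all of them if a center awarded fewer), then fill the
    remainder of the sample from the fallback center.

    `contracts_by_center` maps a center name to a list of contract IDs.
    """
    rng = random.Random(2016)  # fixed seed so the illustration is repeatable
    sample = []
    for center, contracts in contracts_by_center.items():
        if center == fallback_center:
            continue
        sample.extend(rng.sample(contracts, min(per_center, len(contracts))))
    remainder = total_sample - len(sample)
    sample.extend(rng.sample(contracts_by_center[fallback_center], remainder))
    return sample
```

For example, a center with only 6 eligible contracts contributes all 6, and the shortfall is made up from the fallback center's larger pool.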
We asked for contracts awarded in fiscal year 2016 to ensure we were sampling contracts that are required to have the clause and would be reasonably accessible by NASA. We excluded interagency contracts and task or delivery orders awarded using blanket purchase agreements to ensure we sampled base contracts awarded by NASA, not other agencies. We estimated the percentage of NASA contracts expected to include whistleblower clause(s) as the weighted average of results from the 10 contracting centers. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that could have been drawn. Because each sample could have provided different estimates, we express the uncertainty associated with any particular estimate as a 95 percent confidence interval. This is the interval that, with repeated sampling, would be expected to contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that this confidence interval contains the true percentage of contracts expected to include whistleblower clause(s); however, to assess legal compliance we would have to make a case by case determination, which we did not do. We conducted data reliability checks on the FPDS-NG dataset by comparing it to contract documentation obtained from contract files and determined it was sufficiently reliable for our purposes. Additionally, we conducted interviews with NASA procurement officials and contracting officers at multiple locations including NASA Headquarters, NSSC and JSC to discuss any additional measures NASA takes to communicate whistleblower protections to its contractors and their contractor employees. To further assess internal communication, we reviewed relevant documentation, including guidance, and conducted interviews with procurement officials, NASA’s Office of General Counsel, and Chief Counsels at JSC, NSSC, and Marshall Space Flight Center. 
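One way to compute such a stratified estimate and its 95 percent confidence interval is sketched below. The formula is a standard stratified-sampling proportion estimator with a finite population correction; it is our illustration of the general technique, not necessarily the exact variance estimator GAO used, and the stratum counts in the test case are hypothetical.

```python
import math

def stratified_proportion_ci(strata, z=1.96):
    """Estimate a population proportion from a stratified sample, with a
    normal-approximation confidence interval.

    `strata` is a list of (N_h, n_h, x_h) tuples: the stratum's population
    size, its sample size, and the count of sampled units that have the
    attribute of interest (here, inclusion of the whistleblower clause).
    """
    N = sum(N_h for N_h, _, _ in strata)
    p_hat, var = 0.0, 0.0
    for N_h, n_h, x_h in strata:
        w = N_h / N          # stratum weight
        p_h = x_h / n_h      # stratum sample proportion
        p_hat += w * p_h
        if n_h > 1:
            # finite-population-corrected variance contribution
            var += w**2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1)
    half = z * math.sqrt(var)
    return p_hat, (p_hat - half, p_hat + half)
```

The point estimate is the weighted average of the stratum proportions, and the interval width shrinks as strata are sampled more completely, reaching zero when a stratum is a census.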
To assess the extent to which NASA communicated the applicable whistleblower reprisal protections with grantees, we reviewed a non-generalizable sample of grants awarded by NASA in fiscal year 2016 to determine whether NASA grants included a mechanism notifying grantees of their employees' whistleblower rights and reprisal protections. We used FPDS-NG to identify a non-generalizable random sample of 10 grants awarded by NASA in fiscal year 2016 for review to determine whether any of the selected grants included a mechanism to communicate whistleblower reprisal protections to grantee employees. We conducted data reliability checks on the FPDS-NG data by comparing it to grant documentation obtained from grant awards and determined it was sufficiently reliable for our purposes. Additionally, we conducted interviews with NASA grant-making officials to discuss any additional measures NASA takes to communicate whistleblower reprisal protections to its grantees and their grantee employees. Finally, in order to learn about contractor and grantee experiences during NASA's implementation of enhanced whistleblower protections, we conducted interviews with or received written answers to questions from a selected group of NASA contractors and grantees. Using FPDS-NG data, we selected institutions with the highest and lowest contract (including small business contract) and grant obligations in 2016. Using these selection criteria, we selected three contractors and three grantees to meet with based on the amount of funds obligated in 2016. We ultimately interviewed or obtained written answers from all selected contractors and grantees. Additionally, we spoke with two advocacy groups, one about grants and one about contracts. We conducted this performance audit from March 2017 to March 2018 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the National Aeronautics and Space Administration Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Penny Berrier (Assistant Director), Mary Diop, Lorraine Ettaro, Alexandra Gebhard, Kurt Gurka, Stephanie Gustafson, Julia Kennon, Jordan Kudrna, Kate Lenane, Roxanna Sun, and Khristi Wilkins made key contributions to this report.
Why GAO Did This Study NASA obligated over $18 billion in contracts and more than $1 billion in grants in fiscal year 2016, and it relies on a significant number of contractor and grantee employees to accomplish its work. These employees are legally protected from reprisal—such as demotion or firing—for whistleblowing. GAO was asked to review NASA's whistleblower reprisal protections for contractor and grantee employees. This report addresses, among other things, the extent to which (1) NASA's Inspector General investigated contractor and grantee whistleblower reprisal complaints; (2) NASA's Administrator reviewed reprisal complaints in a timely manner; and (3) NASA communicated the applicable whistleblower reprisal protections to contractors. GAO reviewed NASA and its Inspector General's policies and guidance; reviewed a generalizable sample of 100 contracts from all NASA centers with contracts in fiscal year 2016; and interviewed relevant officials and contractors, grantees, and advocacy groups. What GAO Found From 2008 through June 2017, National Aeronautics and Space Administration (NASA) contractor and grantee employees submitted 48 reprisal complaints such as alleged firing or demotion for reporting fraud, waste, or abuse within the government. NASA's Inspector General addressed all 48 complaints, completed investigations for 6 of those complaints, and forwarded investigation reports to the NASA Administrator, who is responsible for making a final determination of whether reprisal occurred. The Administrator determined that none of the complaints qualified for protection under the law. Further, in 5 of the 6 cases forwarded by the OIG, the Administrator was required by statute to make a final determination of reprisal within 30 days. GAO found that the Administrator did not meet this required time frame for all 5 cases and had no documented response for one of them (see figure for all 5 cases). 
According to officials from NASA's Office of General Counsel, each case must be handled on a case by case basis to ensure due process and 30 days is insufficient time to issue an order of final determination of reprisal. However, in order to ensure that whistleblower reprisal complaints are handled within required timeframes, NASA would have to monitor and evaluate its processes for making final determinations of reprisal, but it has not yet taken this step. Consequently, NASA does not know what changes may be needed to ensure that it is meeting the statutory 30-day requirement. NASA communicates whistleblower protections to contractors through inclusion of a required contract clause. For example, GAO found that almost all—98 percent—of contracts would be expected to include a whistleblower clause as required by statute. However, certain elements of NASA whistleblower protection guidance have contributed to a different understanding of reprisal protections among officials at headquarters, a NASA center, and the Inspector General. For example, a July 2014 procurement notice and contract clause language resulted in different interpretations about when the protections apply. Federal internal control standards require that an entity should communicate necessary quality information internally to meet the objectives of its mission. Without additional clarity in its guidance on when the protections apply, NASA centers and procurement officials will be at risk of inconsistent implementation of the law. What GAO Recommends GAO is making three recommendations to NASA, including evaluating the process for reviewing reprisal complaints to ensure it is meeting the required timeframe and clarifying guidance on when protections apply to contractor employees. NASA agreed with the recommendations and plans to develop a documented process to ensure it reviews reprisal complaints in a timely manner and clarify guidance as appropriate, among other things.
Background PTC systems are required by law to prevent certain types of accidents or incidents. In particular, a PTC system must be designed to prevent train- to-train collisions, derailments due to excessive speed, incursions into work zone limits, and the movement of a train through a switch left in the wrong position. While railroads may implement any PTC system that meets these requirements, the majority of the railroads are implementing one of four types of systems. PTC’s intended safety benefits can be fully achieved nationwide when all required railroads have successfully installed PTC components, tested that these components work together and the systems function as designed, and are interoperable with other host and tenant railroads’ PTC systems that share track. Interoperability means the locomotives of any host railroad and tenant railroad operating over the same track segment will communicate with and respond to the PTC system, allowing uninterrupted movements over property boundaries. Interoperability is critical to PTC functioning properly given the complexity of the rail network in the United States. In much of the country, Class I railroads function as hosts for Amtrak and commuter railroads. For example, one of the seven major Class I railroads reports that 24 tenant railroads operate over its PTC-equipped tracks, including freight, Amtrak, and commuter railroads. A notable exception to this is the Northeast Corridor, which runs from Washington, D.C., to Boston, Massachusetts, which Amtrak predominantly owns and over which 6 freight and 7 commuter railroads operate as tenants. PTC implementation involves multiple stages to achieve full implementation, including planning and system development, equipment installation and testing, system certification, and full deployment, including interoperability. 
Each railroad must develop an FRA-approved PTC implementation plan that includes project schedules and milestones for certain activities, such as equipment installation. The equipment installation stage involves many components, including communication systems; hardware on locomotives and along the side of the track (called "wayside equipment"); and software in centralized office locations as well as onboard the train and along the track. Railroads are required to report quarterly and annually to FRA on the railroad's PTC implementation status relative to the implementation plan. A railroad can also revise its implementation plan to reflect changes to the project, which then must be reviewed and approved by FRA. In addition, railroads must demonstrate that the PTC system is deployed safely and meets functional requirements through multiple stages of testing. Before initiating testing on the general rail system, railroads must submit a formal test request for FRA approval that includes, among other things, the specific test procedures, dates and locations for testing, and the effect the tests will have on current operations. The multiple stages of PTC testing include:

- Laboratory testing: locomotive and wayside equipment testing in a lab environment to verify that individual components function as designed.
- Field testing: includes several different tests of individual components and the overall system, such as testing of each locomotive to verify that it meets functional requirements and field integration testing—a key implementation milestone to verify that each PTC component is integrated and functioning safely as designed.
- Revenue service demonstration (RSD): an advanced form of field testing in which the railroad operates PTC-equipped trains in regular service under specific conditions. RSD is intended to validate the performance of the PTC system as a whole and to test the system under normal, real-world operations.
- Interoperability testing: host and tenant railroads that operate on the same track must work together to test interoperability to ensure each railroad can operate seamlessly across property boundaries.

Almost all of the 40 railroads currently required to implement PTC must demonstrate interoperability with at least one other railroad's PTC system. Using results from field and RSD testing, combined with other information, host railroads must then submit a safety plan to FRA for approval. We have previously reported that these safety plans are about 5,000 pages in length. Once FRA approves a safety plan, the railroad receives PTC system certification, which is required for full implementation, and is then authorized to operate the PTC system in revenue service. According to FRA officials, FRA may impose conditions on the PTC safety plan approval as necessary to ensure safety, resulting in a conditional certification. Railroads may receive a maximum 2-year extension from FRA past the December 31, 2018, deadline if they meet six criteria set forth in statute. Specifically, railroads must demonstrate, to the satisfaction of FRA, that they have: (1) installed all PTC system hardware consistent with the total amounts identified in the railroad's implementation plan; (2) acquired all necessary spectrum consistent with the implementation plan; (3) completed required employee training; (4) included in a revised implementation plan an alternative schedule and sequence for implementing the PTC system as soon as practicable but no later than December 31, 2020; (5) certified to FRA that they will be in full compliance with PTC statutory requirements by the date provided in the alternative schedule and sequence; and (6) for Class I railroads and Amtrak, initiated RSD or implemented a PTC system on more than 50 percent of the track they own or control that is required to have PTC. 
For commuter and Class II and III railroads, the sixth statutory criterion is to have either initiated RSD on at least one territory required to have operations governed by a PTC system or “met any other criteria established by the Secretary,” which FRA refers to as “substitute” criteria. FRA is responsible for overseeing railroads’ implementation of PTC, and the agency monitors progress and provides direct assistance to railroads implementing PTC. For example, FRA officials provide technical assistance to railroads, address questions, and review railroad-submitted documentation. FRA has a national PTC director, designated PTC specialists in the eight FRA regions, and a few additional engineers and test monitors responsible for overseeing technical and engineering aspects of implementation and reviewing railroad submissions and requests. In anticipation of the upcoming implementation deadline, in May 2017, FRA began to send notification letters to railroads it determined were at risk of both not meeting the December 31, 2018, implementation deadline and not completing the requirements necessary to qualify for an extension. FRA identified “at-risk” railroads by comparing a railroad’s hardware installation status to the total hardware required for PTC implementation, according to the railroad’s implementation plan. FRA has increased the “at-risk” threshold percentage over time as the deadline approaches. See table 1. FRA has additional oversight tools, which include use of its general civil penalty enforcement authority for failure to meet certain statutory PTC requirements. FRA has used this authority in 2017 and 2018 to assess civil penalties against railroads that failed to comply with the equipment installation milestones, the spectrum acquisition milestones, or both, that the railroads had established in their implementation plans for the end of 2016 and 2017. 
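The comparison FRA uses to identify at-risk railroads, reported hardware installation measured against the total required by each railroad's implementation plan, can be illustrated with a minimal sketch. The railroad names, counts, and thresholds below are hypothetical, and this is a simplification of FRA's actual process, which also considers extension-qualification requirements.

```python
def flag_at_risk(railroads, threshold):
    """Return railroads whose installed share of required PTC hardware
    falls below `threshold` (a fraction, e.g. 0.9 for 90 percent).

    `railroads` maps a railroad name to (installed, required) hardware
    counts taken from its implementation plan.
    """
    flagged = {}
    for name, (installed, required) in railroads.items():
        share = installed / required if required else 1.0
        if share < threshold:
            flagged[name] = round(share, 3)
    return flagged
```

Raising the threshold over time, as FRA did while the deadline approached, amounts to calling the same comparison with a larger fraction, which flags more railroads.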
As part of our body of work on PTC, we found that railroads face numerous PTC implementation challenges and made recommendations to FRA to improve its oversight of implementation. Specifically, in 2013 and 2015 we found that many railroads were struggling to make progress due to a number of complex and interrelated challenges, such as developing system components and identifying and correcting issues discovered during testing. Most recently, we found in March 2018 that FRA had not systematically communicated information or used a risk-based approach to help railroads prepare for the 2018 deadline or to qualify for an extension. We also found that many railroads were concerned about FRA's ability to review submitted documentation in a timely manner, particularly given the length of some required documentation such as safety plans and FRA's limited resources for document review. In March 2018, we recommended FRA identify and adopt a method for systematically communicating information to railroads and use a risk-based approach to prioritize its resources and workload. FRA agreed with our recommendations. Many Railroads Remain in Early Stages of PTC Implementation and FRA Has Clarified Extension Requirements Railroads Continue to Install and to Test PTC Systems, and Report Previously Identified Implementation Challenges As of June 30, 2018, many railroads reported that they remain in the equipment installation and field-testing stages, which are early stages of PTC implementation. However, since we last testified in March 2018, railroads have made progress on equipment installation. Based on our analysis of the 40 railroads' reported status as of June 30, 2018, about half of the railroads have completed equipment installation, and many others are nearing completion of this stage. Specifically, three-quarters of the 40 railroads reported being more than 90 percent complete with locomotive equipment installation. 
Similarly, nearly three-quarters of railroads that must install wayside equipment reported being more than 90 percent complete. The remaining one-quarter of railroads are among those designated by FRA as at-risk of both not meeting the end of 2018 implementation deadline and not completing the requirements necessary to qualify for an extension. Specifically, in August 2018, FRA identified 9 railroads—all commuter railroads—as at-risk, fewer than the 12 railroads FRA had previously designated as at risk in its June 2018 letters to railroads. Since we last testified, most commuter railroads reported slow progress with testing, especially with RSD, while Class I railroads and Amtrak have reached later stages of testing. Notably, all 7 Class I freight railroads and Amtrak reported having initiated field testing and entering RSD as of June 30, 2018. We reported in 2013 and 2015 that Class I railroads and Amtrak have been conducting PTC implementation activities for longer than commuter railroads, which has likely factored into their advanced progress. However, commuter railroads and Class II/III railroads have progressed more slowly. For example: Laboratory and initial field testing: 19 of 28 commuter railroads reported having initiated this testing as of June 30, 2018, 6 more commuter railroads than the 13 we previously reported as having initiated field testing as of September 30, 2017. Additionally, 2 of 4 Class II/III railroads reported having initiated testing as of June 30, 2018. RSD testing: 8 of 28 commuter railroads reported initiating RSD testing as of June 30, 2018, 2 more commuter railroads than the 6 we previously reported as having entered RSD testing as of September 30, 2017. No Class II/III railroads reported having initiated RSD. 
As noted earlier, unless a commuter or Class II/III railroad receives approval for using substitute criteria, the railroad must initiate RSD, a final stage of PTC testing, on at least one territory by December 31, 2018, to qualify for an extension. Railroad representatives reported that they continue to face many of the same challenges we have previously identified. For example, in response to our questionnaire to all 40 railroads implementing PTC, 14 reported challenges with PTC vendors and contractors, which we originally reported on in 2015. One railroad noted that, because its contractor manages PTC projects across the country with the same deadline and requirements, it can be difficult for all railroads to get the resources they need from their contractor. We previously reported that there are a limited number of vendors available to design PTC systems, provide software and hardware, and conduct testing. For example, we reported in 2015 that, according to railroad industry representatives, there were two vendors for the onboard train management computer and three vendors for the wayside equipment. Likewise, we previously reported that railroads face software challenges, and noted that railroads had concerns with the number of defects identified during software testing, since these take time to address. In response to our questionnaire, 11 railroads reported encountering challenges related to maturity of the PTC software systems, such as working through software bugs or defects during testing. FRA Has Recently Clarified Extension Requirements In June, July, and August 2018, FRA held three PTC symposiums that were attended by representatives from all 40 railroads and that focused on the extension process and substitute criteria, PTC testing, and safety plans, respectively. 
FRA’s June 2018 symposium covered information consistent with our March 2018 recommendation that the agency adopt a method for systematically communicating information related to the requirements and process for an extension to railroads. Specifically, FRA presented information on the procedures for requesting and obtaining FRA’s approval for an extension to implement PTC beyond the December 2018 deadline including FRA’s review process. FRA also clarified that for railroads eligible to use substitute criteria, initiating field testing was one approach that could potentially qualify as substitute criteria, rather than initiating RSD. Representatives we interviewed from the railroads that participated in the symposiums found them to be helpful and some railroads reported that the information presented led them to adjust their approach to meeting the December 2018 deadline. For example, one railroad representative we spoke to said that until the symposium, he was unaware that using field testing as substitute criteria was a potential option. Some railroads we met with also told us they are re-evaluating what activities and documentation need to be revised and submitted to FRA before the December 2018 deadline based on the information presented at the symposiums. For example, representatives from one railroad we met with said that FRA officials encouraged them to update their PTC implementation plan right away with current equipment installation totals, to ensure consistency across all required documentation by the end of 2018. A couple of railroads noted that the information presented at the symposiums clarified many questions and would have been beneficial to know a year or two earlier in the implementation process. In addition, in recent months FRA has continued to provide assistance to railroads and has taken a series of steps to better prepare railroads for the 2018 deadline. 
These steps include meeting regularly with individual railroads and developing approaches intended to help many railroads meet the requirements necessary for a deadline extension. For example, representatives from one commuter railroad said agency officials have been willing to share lessons learned, clarify requirements, and review draft documentation to provide informal feedback. Railroads and FRA Are Working toward Extensions, Leaving Substantial Work to Be Completed Beyond 2018 Most Railroads Anticipate Needing an Extension, and Many Plan to Start RSD Testing Beyond 2018 More than three-quarters of railroads (32 of 40) reported to us that they plan to apply for an extension. However, FRA officials noted that with the exception of possibly one or two railroads, they anticipate that all railroads will likely need an extension. As of September 2018, most railroads have not submitted their request for an extension. A railroad must demonstrate that it has met all of the criteria to qualify before it may formally request an extension, and as previously discussed, many railroads remain in the early stages of PTC implementation. Of the eight railroads that anticipate reaching full implementation by December 31, 2018, five have conditionally certified safety plans; one has submitted its safety plan for review; one plans to submit its safety plan to FRA in fall 2018 for certification; and one did not specify when it would submit its safety plan for certification. Of the 32 railroads that intend to apply for an extension, half reported that they plan to use substitute criteria to qualify, including 12 commuter and 4 Class II and III railroads. Moreover, three-quarters of the commuter and Class II and III railroads that plan to use substitute criteria (12 of 16) intend to apply to use their initiation of field testing or lab testing as substitute criteria. 
Figure 1 depicts the stage of PTC implementation that railroads expect, at a minimum, to reach by December 31, 2018, to be in compliance, based on railroads’ responses to our July-August 2018 questionnaire. Although FRA has recently made clear that it is authorized to grant extensions based on initiating field testing or other FRA-approved substitute criteria, this approach defers time-intensive RSD testing into 2019 and beyond. In March 2018, we testified that FRA officials told us that moving from the start of field testing to the start of RSD can take between 1 and 3 years, and has averaged about 2 years for those railroads that have completed that stage. We also testified that FRA officials believe that most railroads underestimate the amount of time needed for testing. FRA officials told us that they do not consider railroads that are approved for an extension under substitute criteria to necessarily be at higher risk of not completing PTC implementation by 2020. However, in light of these time estimates and the unknown challenges that railroads may face during testing, railroads that are in the early field-testing stage moving into 2019 could face challenges completing PTC implementation by the extended December 2020 deadline. Railroads further behind in PTC implementation may need to apply for an extension due to factors such as compressed implementation schedules, as well as the time needed for FRA approvals. For example, representatives from one commuter railroad said they hope to reach RSD before the December 31, 2018, deadline, but that it would be difficult to meet the extension requirements, apply for, and receive an extension given the volume of paperwork FRA will be receiving at the end of the year. Instead, the railroad plans to submit an extension request using substitute criteria consisting of field testing in order to be in compliance at the end of the year. 
Such an approach involves first applying for and receiving approval for substitute criteria and then formally requesting an extension and submitting supporting documentation to FRA before the end of the year. Entering RSD prior to the deadline could be difficult given that FRA officials told us they have advised railroads to allow at least a month for FRA’s review of test requests, which must be approved prior to initiating field testing and RSD. Additionally, for some railroads further along in PTC implementation, particularly Class I freight railroads, interoperability is a key remaining hurdle for full implementation by the end of 2018, and railroads expect this challenge to persist in the future. The two Class I railroads we interviewed noted that ensuring all tenant railroads are PTC-equipped, tested, and interoperable is a primary reason the railroads plan to request an extension. One of these host railroads also reported that it has little ability to influence its tenants’ progress with PTC implementation. Across all 40 railroads, 8 reported current or anticipated challenges working with tenant or host railroads, or both, to plan and conduct testing to ensure interoperability. Moreover, given that few railroads have reached the interoperability testing stage, the challenges railroads may face in this stage remain unclear. For example, some railroads we interviewed noted it is unknown how much time and effort will be required to work through interoperability issues during testing to ensure the system’s reliability. One railroad association stated that interoperability is, and will continue to be, a substantial challenge for metropolitan areas with dense and complex rail networks with several host-tenant relationships. For example, according to one commuter railroad, 14 different freight and commuter railroads will need to interoperate in the Chicago area. 
FRA’s Substantial Workload Remains a Concern FRA’s already substantial workload is expected to increase as railroads continue to submit documentation necessary for extensions and continue PTC implementation activities. FRA is focused on ensuring railroads are in compliance through the December 2018 deadline—whether via an extension or by completing implementation. While FRA officials report that they anticipate almost all railroads will likely request an extension, only one—a Class I railroad—had submitted an application for an extension as of early September 2018. FRA will need to review and approve all related documentation associated with each extension request and make a determination within 90 days, meaning if a railroad were to submit its extension request on December 31, 2018, FRA would have until the end of March 2019 to approve or deny the railroad’s extension request. In addition to extension requests and supporting documentation, many railroads will also be submitting to FRA: requests for substitute criteria, test requests to initiate field testing or RSD, revisions to PTC implementation plans, and PTC safety plans. To help manage the forthcoming influx of documentation, FRA officials have offered to review draft documentation, such as substitute criteria requests and test requests, and have advised railroads to take FRA’s review times into account prior to submitting required documentation. FRA officials told us that in trying to manage their workload, they initially told railroads they did not have time to review draft submittals. However, they found that taking the time to conduct draft reviews ultimately led to higher quality formal submittals and accelerated the overall review process. In addition, FRA officials said that their goal is to not delay any railroad that is ready to move into testing, and that they advised railroads to build 30–45 days for test request reviews into their project schedules. 
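The 90-day statutory review window described above reduces to simple date arithmetic; a brief sketch using Python's standard library (the December 31, 2018, submission date is the example from the text):

```python
from datetime import date, timedelta

# FRA must approve or deny an extension request within 90 days of receipt.
# A railroad submitting on the statutory deadline would therefore see
# FRA's decision window run to the end of March 2019.
submission = date(2018, 12, 31)
decision_due = submission + timedelta(days=90)
print(decision_due)  # 2019-03-31
```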
Despite these efforts, railroads remain concerned about the agency’s ability to manage the PTC workload in the coming months and beyond 2018. For example, 9 of the 40 railroads identified FRA’s resources and review times as a challenge leading up to the December 2018 deadline. Based on similar concerns, in March 2018, we recommended FRA develop an approach to prioritize the allocation of resources to address areas of greatest risk as railroads work to complete PTC implementation. FRA has acknowledged the railroads’ concern given the surge of submissions requiring FRA approval in 2018 and has reported the agency is reallocating existing expertise and expanding the PTC workforce through training, expanding contracts with existing support contractors, and initiating one additional contract to provide technical support. For example, FRA officials told us that they reallocated resources to shift PTC Specialists’ responsibilities to focus exclusively on testing-related activities because their involvement is critical for the testing stage. Although FRA has taken steps to provide key extension information to railroads and help ensure railroads’ compliance with PTC deadlines, uncertainty remains, particularly in regard to FRA’s enforcement strategy if railroads are noncompliant with the statute, such as if railroads were to fail to apply for an extension by the deadline. Representatives from all of the railroads implementing PTC that we met with told us that FRA’s planned enforcement approach for any railroad that fails to meet the requirements for an extension beyond 2018 is unclear. FRA officials told us they have shared the range of applicable civil penalties with railroads for years, but that how potential fines will be levied for noncompliant railroads is a policy decision that has not yet been made. 
In addition, it is also unclear how the agency would approach enforcement for railroads that have a host or tenant operating on their tracks that has not completed implementation or met the requirements necessary for an extension. FRA officials said that the goal of enforcement is to help bring all railroads into compliance and that they would have to look at the specific circumstances for any host-tenant issues before assessing a fine. In conclusion, almost all railroads will likely request an extension beyond 2018, which will require FRA approval and, for many railroads, substitute criteria requests that may result in approximately a third of railroads remaining in the early stages of PTC implementation at the start of 2019. However, given that almost no railroads have submitted extension requests, it is unlikely we will know how many railroads will be granted an extension by the December 31, 2018 deadline. Although FRA has reported taking some actions in response to our March 2018 recommendation that they better prioritize resources, FRA resources and review times remain a significant concern. These issues, combined with the ongoing implementation, testing, and interoperability challenges that a number of railroads reported to us, raise questions as to the extent FRA and the railroad industry are poised for full PTC implementation by December 31, 2020. Chairman Denham, Ranking Member Capuano, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Susan Fleming, Director, Physical Infrastructure at (202) 512- 2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
GAO staff who made key contributions to this testimony are Susan Zimmerman (Assistant Director); Katherine Blair; Greg Hanna; Delwen Jones; Emily Larson; Joanie Lofgren; SaraAnn Moessbauer; Maria Wallace; and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Forty railroads, including Amtrak, commuter, and freight railroads, are currently required by statute to implement PTC, a communications-based system designed to slow or stop a train that is not being operated safely. PTC must be interoperable, meaning trains can operate seamlessly on the same PTC-equipped track, including “tenants” that operate on track owned by another “host” railroad. Although the deadline for PTC implementation is December 31, 2018, railroads may receive a maximum 2-year extension to December 31, 2020, if they meet certain statutory criteria. GAO was asked to review railroads' PTC implementation progress. This statement discusses (1) railroads' implementation progress and FRA's steps to assist them and (2) how railroads and FRA plan to approach the 2018 and 2020 deadlines. GAO analyzed railroads' most recent quarterly reports covering activities through June 30, 2018; sent a brief questionnaire to all 40 railroads; and interviewed officials from FRA and 16 railroads, selected in part based on whether FRA had identified them as at risk. What GAO Found As of June 30, 2018, many railroads remained in the early stages of positive train control (PTC) implementation—including equipment installation and early field testing. About half of the 40 railroads implementing PTC reported that they are still installing equipment, though many are nearing completion. However, with the exception of the largest freight railroads—known as Class I—and Amtrak, most railroads reported less progress in later implementation stages, especially revenue service demonstration (RSD), an advanced form of field testing that is required to fully implement PTC. Of the 28 commuter railroads required to implement PTC, 19 reported initiating field testing, but only 8 reported initiating RSD. 
The Federal Railroad Administration (FRA) recently clarified the criteria railroads must meet to qualify for a 2-year extension past the December 31, 2018, PTC implementation deadline. To receive an extension, railroads must meet 6 statutory criteria. For the sixth criterion, commuter and smaller freight railroads are authorized to either initiate RSD on at least one track segment or use FRA-approved substitute criteria. FRA clarified these and other requirements at three PTC symposiums hosted for railroads in summer 2018. For example, FRA officials said that for railroads eligible to use substitute criteria, initiating field testing instead of RSD was one approach that could potentially receive FRA's approval. FRA's actions are consistent with GAO's March 2018 recommendation that the agency communicate to the railroads the requirements and process for an extension. Most railroads anticipate needing an extension, leaving substantial work for both railroads and FRA to complete before the end of 2020. Thirty-two of 40 railroads reported to GAO that they, or the railroad which owns the track on which they operate, will apply for an extension. Sixteen commuter and smaller freight railroads reported planning to apply for an extension using substitute criteria, and of these, 12 intend to apply for substitute criteria based on early testing such as field testing. Though substitute criteria are authorized in law, this approach defers time-intensive RSD testing into 2019 and beyond. In addition, railroads expressed concerns with the time and effort involved with interoperability testing—a key remaining hurdle for railroads such as Class I railroads that are further along with implementation. Further, railroads expressed concern that FRA's workload will markedly increase as railroads submit requests for extension approvals. 
FRA has acknowledged concerns about the pending surge of submissions and has taken recent steps to help manage the forthcoming influx of documentation, such as reallocating resources. Nonetheless, given that as of early September 2018, only 1 railroad—a Class I railroad—had applied for an extension, it remains unclear how many extension requests FRA will receive or what FRA's enforcement strategy will be for noncompliance with the statute, such as for railroads that fail to apply for an extension by the deadline. In addition, challenges related to PTC implementation and FRA's resources raise questions as to the extent FRA and the railroad industry are poised for full PTC implementation by December 31, 2020. What GAO Recommends In March 2018, GAO recommended FRA take steps to systematically communicate extension information to railroads and to use a risk-based approach to prioritize agency resources and workload. FRA has taken some steps to address these recommendations, such as recently communicating and clarifying extension requirements to all railroads during three symposiums, and GAO will continue to monitor FRA's progress.
Background FDA Medical Device Review Process FDA classifies each medical device type intended for human use into one of three classes based on the level of risk it poses to the patient or the user and the controls necessary to reasonably ensure its safety and effectiveness. Examples of types of devices in each class include the following: Class I: tongue depressors, elastic bandages, and reading glasses; Class II: electrocardiographs, powered bone drills, and mercury thermometers; and Class III: pacemakers and replacement heart valves. Before medical devices may be legally marketed in the United States, they are generally subject to one of two types of FDA premarket review processes. Premarket approval (PMA) process: Class III device types are typically required to obtain FDA approval through the PMA process. Under this process, the medical device sponsor must submit an application that includes—among other things—full reports of investigations, typically including clinical data, providing reasonable assurance that the new device is safe and effective. The PMA process is the most stringent type of premarket review. A successful application results in FDA’s approval to market the device. From 2001 through 2016, medical device sponsors submitted 651 PMA applications, and FDA approved for marketing 506 of those submissions. (See fig. 1.) Premarket notification, or 510(k), process: Most medical devices requiring premarket review are subject to FDA’s premarket notification or 510(k) process. This includes class I and II device types that are not specifically exempted from the 510(k) notification requirement. Under this process, the medical device sponsor must notify FDA at least 90 days before it intends to market a new device and demonstrate to FDA that the new device is substantially equivalent to a predicate device, and therefore does not require a PMA. 
For most 510(k) notifications, clinical data are not required and substantial equivalence will normally be determined based on comparative descriptions of intended device uses and technological characteristics, and may include performance data. A successful 510(k) submission results in FDA’s clearance to market the device. From 2001 through 2016, medical device sponsors submitted 61,439 premarket notifications and FDA cleared 51,028 devices for market. (See fig. 2.) During premarket review under both the PMA and 510(k) processes, FDA and the medical device sponsor may engage in an interactive process. To start, there may be a pre-submission meeting between FDA and the sponsor, during which the parties discuss the upcoming review and try to resolve potential obstacles for approval or clearance. Then, FDA receives the premarket submission, makes a determination to accept or not accept the submission, and assigns a reviewer. In making its assessment whether to approve, or clear, a submission, FDA relies on the sponsor to provide supporting data as part of the submission. However, the agency can request additional information in the course of the review in order to make a determination of reasonable assurance of safety and effectiveness, or of substantial equivalence. This additional information can be obtained through informal interactions, such as a phone call or email. Alternatively, for more significant issues, FDA may make a more formal request for additional information, known as a deficiency letter in the case of a PMA application and additional information (AI) letter for a 510(k) notification. FDA will issue such requests if the submission lacks significant information necessary for FDA to complete its review, and the agency will request the sponsor amend the submission to provide the necessary information regarding the device. 
If a sponsor disagrees with an FDA regulatory decision concerning a medical device submission, including a CDRH employee’s decision to request additional information or a significant decision regarding approval or clearance of a medical device, it can take multiple actions. Specifically, a sponsor can, among other things, (1) contact the CDRH Ombudsman for assistance, (2) file an internal appeal of an FDA decision, or (3) request that the disagreement be resolved through CDRH’s Medical Device Dispute Resolution Panel, as described below. Ombudsman: According to FDA’s guidance, prior to the agency reaching a regulatory decision, the most effective means of resolving a dispute between CDRH and an external stakeholder is through discussion and agreement. The CDRH Ombudsman is available to assist in clarifying issues, mediate meetings and teleconferences, and conduct discussions with the parties in an effort to resolve disagreements short of a formal review or internal appeal. Internal Appeal: Once FDA makes a regulatory decision, a sponsor can request a supervisory review of that decision, which we refer to as an internal appeal. For this process, the supervisor of an FDA employee will, at the request of a medical device sponsor, review a decision or action of the employee and issue a decision. The decision rendered by the supervisor, acting as the review authority, customarily takes one of the following forms: overturning the decision of the employee; upholding the employee decision; or, in some circumstances, referring the matter back to the employee for reconsideration under defined conditions. Medical Device Dispute Resolution Panel: If the dispute remains unresolved, the sponsor may request that FDA convene the Medical Device Dispute Resolution Panel. The panel is intended to provide a means for independent review of a scientific controversy or dispute between a sponsor and FDA, and make a recommendation to the Center director. 
According to FDA’s guidance, the panel is primarily intended to address scientific controversies rather than other issues such as regulatory, legal, or statutory authority disputes. As part of its commitments associated with the Medical Device User Fee Amendments of 2012 (MDUFA III), FDA agreed to participate in an independent, comprehensive assessment of the medical device submission review process. Acting on recommendations from the contractor that conducted the assessment, FDA established working groups for each submission type, including PMAs and 510(k)s, which studied existing review processes and made recommendations. In August 2017, the Medical Device User Fee Amendments of 2017 (MDUFA IV) reauthorized FDA’s medical device user fee program, and FDA committed to another independent assessment. FDA has committed to hiring a contractor to conduct this assessment by the end of December 2017 with a second phase to begin in 2020. FDA Least Burdensome Requirements In 1997, FDAMA added a requirement that the agency use the least burdensome approach during certain parts of PMA and 510(k) reviews. These requirements were intended to reduce unnecessary burdens associated with the premarket approval and clearance processes; however, they did not lower the statutory criteria for demonstrating a reasonable assurance of safety and effectiveness or substantial equivalence. While the language in FDAMA differs slightly for the PMA and 510(k) processes, in both instances FDA was directed to consider the “least burdensome” means of requesting information needed for its review. Specifically, FDAMA requires that when the agency specifies data that must be submitted as part of a PMA application, the agency must consider the least burdensome appropriate means of evaluating device effectiveness that would have a reasonable likelihood of resulting in approval. 
The agency must similarly consider the least burdensome appropriate means of demonstrating substantial equivalence when requesting information under the 510(k) notification process. In both cases, FDA is statutorily required to request only information that is necessary to support the determination that there is reasonable assurance of effectiveness or substantial equivalence, respectively. Subsequent laws have clarified the least burdensome requirements. In 2012, the Food and Drug Administration Safety and Innovation Act clarified that the term “necessary” means the minimum required information that would support either a determination that a PMA application provides reasonable assurance of the effectiveness of the device or a determination, for a 510(k) notification, of substantial equivalence between a new device and a predicate device. In 2016, the 21st Century Cures Act added a provision applying the least burdensome concept to FDA’s requests for additional information in the PMA process. The law also applied the least burdensome concept to significant decisions, such as denials of PMA applications, requiring such decisions to include a brief statement regarding how least burdensome requirements were considered and applied. Additionally, the law mandated each FDA employee involved in premarket submission reviews, including supervisors, to receive training on the least burdensome provisions, and required the agency to conduct an audit of the training, among other things, no later than June 2018. Although FDA officials have noted that the least burdensome principles are broad and could apply to all activities within the PMA and 510(k) premarket review process, they noted that the requests for additional information represent a key juncture for the application of least burdensome requirements. 
According to agency officials and industry representatives, the requests for additional information—deficiency letters in the case of PMAs and AI letters for its 510(k) reviews—are when FDA and the sponsor could disagree on whether the requested information is necessary for the agency to reach a final decision on the medical device under review. FDA Implementation of the Least Burdensome Requirements Following the enactment of FDAMA in 1997, FDA went through a process in collaboration with the medical device industry to define the least burdensome concept and develop an approach to implement the provisions. Based on this, FDA released multiple guidance documents related to least burdensome requirements from 2000 through 2002. In November 2000 guidance, FDA outlined a four-part approach— referred to as “four-part-harmony” by FDA staff—for communicating deficiencies to medical device sponsors in accordance with the least burdensome requirements. The guidance helps reviewers describe deficiencies identified in submissions in ways that are direct, concise, and complete, thus ensuring a more effective use of reviewers’ and sponsors’ time, effort, and resources. It also provides a suggested format for sponsors to respond to FDA. FDA updated this guidance in September 2017. In 2002 guidance, FDA described its principles for implementing the least burdensome requirements and its activities to assess implementation. The guidance outlines FDA’s interpretation of the least burdensome concept as described in FDAMA, and explains its application to activities associated with PMA and 510(k) reviews. The guidance also states that FDA was in the process of developing tools to be used by both agency staff and its stakeholders to periodically assess the implementation of the least burdensome principles. 
It noted that some measurement tools had already been developed and that additional tools were needed to assess the impact of the least burdensome approach on expediting the development of new medical technologies. In addition, FDA has included language about those requirements in other guidance documents. For example, in 2014, FDA issued guidance on the 510(k) program that describes how the least burdensome principles may affect the type of information necessary to demonstrate substantial equivalence at different decision points in the review of a 510(k).

FDA Frequently Requested Additional Information to Support Medical Device Reviews, and Sponsor Disagreements Often Related to Least Burdensome Requirements

FDA Issued Deficiency and Additional Information Letters for a Significant Proportion of PMAs and 510(k)s

FDA requested that sponsors provide additional information for a majority of the PMAs and 510(k)s it reviewed. For the period 2001 through 2016, FDA issued a large number of deficiency and AI letters relative to the number of submissions, although there was variation annually. For PMAs, the number of deficiency letters as a percentage of new PMA applications submitted ranged from about 54 percent to 113 percent annually, or 82 percent on average, from 2001 through 2016. For the years 2006 through 2010, this percentage, as well as the total number of letters, was higher, and FDA issued more deficiency letters than there were PMA applications submitted. Similarly, AI letters as a percentage of total 510(k) notifications received ranged from about 58 percent to more than 174 percent annually, or about 106 percent on average, from 2001 through 2016. While the number of 510(k) notifications remained similar across the time period we examined, from 2009 through 2012 the number of AI letters issued each year was, on average, nearly double the number in other years. During this period, FDA issued more AI letters than there were 510(k) notifications submitted.
Since 2014, these percentages have been lower for both PMAs and 510(k)s. FDA officials acknowledged the historical increase in the number of deficiency and AI letters and noted the more recent decrease. The officials attributed this decrease to a number of changes the agency agreed to in MDUFA III. For example, FDA implemented a policy to review submissions for administrative completeness prior to accepting the submission. They said this allowed the agency to limit deficiency and AI letters to issues related to the quality of the data provided and the studies conducted in support of the submission rather than to administrative issues. Also as a result of MDUFA III, the agency implemented an interactive review process to increase informal interaction between FDA and applicants and to minimize the number of review questions communicated through deficiency and AI letters. (See table 1.) We identified changes in how the deficiency letters and AI letters referenced the least burdensome requirements. Based on our sample of 73 letters from 1997 through 2016, FDA included an explicit acknowledgment of the least burdensome requirements in the letters issued from 2001 through 2009. However, based on our review, this practice ended in 2010, and later letters did not include this standard language. Representatives from the medical device industry told us that including the least burdensome language in the deficiency letters was a good practice because it raised awareness of the least burdensome principles. In September 2017, FDA released updated deficiencies guidance that, according to FDA officials, instructs staff how to better articulate the reason that the information is needed in accordance with the least burdensome requirements. 
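The yearly figures above are simple ratios of letters issued to submissions received. As a minimal sketch of that calculation (the counts below are hypothetical, not FDA's actual data):

```python
# Illustrative sketch only: the yearly counts below are hypothetical,
# not FDA's actual figures. Percentages can exceed 100 because letters
# issued in a year may respond to submissions filed in earlier years.
letters_by_year = {2014: 45, 2015: 38, 2016: 30}      # deficiency letters issued
submissions_by_year = {2014: 50, 2015: 55, 2016: 52}  # new applications received

percentages = {
    year: 100 * letters_by_year[year] / submissions_by_year[year]
    for year in letters_by_year
}
average = sum(percentages.values()) / len(percentages)

for year in sorted(percentages):
    print(f"{year}: letters were {percentages[year]:.0f}% of submissions")
print(f"Average across years: {average:.0f}%")
```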
This guidance does not set forth boilerplate language regarding the least burdensome requirements for use in deficiency letters, but does include examples of well-constructed deficiencies, definitions for major and minor deficiencies, and a statement that FDA will attempt to resolve minor deficiencies interactively.

Though Data are Limited, Least Burdensome Requirements were a Significant Contributing Factor in Disagreements Raised by Medical Device Sponsors

The least burdensome requirements were often a significant contributing factor in disagreements raised by medical device sponsors, according to FDA officials and available FDA data. According to FDA, the most effective means of resolving disagreements is through discussion and mediation, and to that end, the Ombudsman's office is routinely involved in discussions between firms and medical device reviewers during the review process. For example, in 2016, the CDRH Ombudsman was involved with PMA and 510(k) medical device reviews 360 times out of 3,444 submissions. Although the agency was unable to identify which of these interactions were related to least burdensome requirements, agency officials told us that a substantial number likely resulted from a difference of opinion between the applicant and FDA on the appropriate level of scientific evidence, a portion of which likely have a least burdensome component. The least burdensome provisions were also frequently related to issues that applicants raised during internal agency appeals of FDA decisions of PMA and 510(k) reviews. Although FDA did not have readily available data on appeals that occurred prior to 2013, the agency was able to provide information about the 63 appeals of significant decisions that occurred from 2013 through 2016. Of these 63 appeals, FDA identified 33 appeals—2 related to PMAs and 31 related to 510(k)s—in which the issue identified by the sponsor was related to least burdensome principles.
According to medical device industry representatives, sponsors may not always pursue an appeal, so the number of official appeals may not represent the extent of least burdensome-related issues that sponsors experience. They said the sponsor may determine it is best to avoid conflict that could complicate future device submissions and comply with the request for additional information, even if it disagrees. Of these 33 appeals, FDA agreed, or partially agreed with the sponsor for 11 appeals, which resulted in FDA overturning the decision or reopening the file and continuing the review. For the remaining 22 appeals, the agency upheld the initial reviewer decision. The following presents examples of appeals where the issue identified by the sponsor was related to the least burdensome requirements. In one appeal related to a 510(k) review, the sponsor objected to the reviewer’s finding that the device was not substantially equivalent to a device already on the market. The sponsor stated that it had provided sufficient data for a substantial equivalence determination, and the FDA reviewer’s request for additional risk mitigation measures and supplemental testing was unwarranted and inappropriate. The review authority determined that, while the information provided in the 510(k) premarket submission was not sufficient to establish substantial equivalence, some of FDA’s requests were unwarranted. As a result of the appeal, FDA reopened the file and provided the sponsor an opportunity to respond to a new set of requests for additional information. In an appeal related to a PMA review, the sponsor contended that FDA’s not approvable decision reflected an inconsistent and erroneous interpretation of the clinical data supporting the safety and effectiveness of the subject device, and that the data it had provided was sufficient for FDA to reach an approved decision. The sponsor further contended that the review staff failed to utilize the principles outlined in FDA guidance. 
The review authority upheld FDA's initial decision and determined there was not sufficient valid scientific evidence to demonstrate a reasonable assurance that the subject device was safe and effective under the proposed conditions of use. The Medical Device Dispute Resolution Panel, which provides another avenue to resolve disagreements between sponsors and the agency, has also addressed issues related to the least burdensome requirements. Since the panel was created following FDAMA in 1997, medical device sponsors have requested that FDA resolve three disagreements through this avenue, each related to PMAs. Although not tracked by FDA, at our request, officials reviewed the records and found that one of the three disputes was related to the least burdensome requirements. Specifically, for a September 2001 dispute, FDA officials said the sponsor requested the panel after FDA initially found that the data from the clinical study submitted by the sponsor did not sufficiently support effectiveness. After reviewing evidence from the applicant and from FDA, the dispute resolution panel determined that the sponsor had provided sufficient evidence to prove effectiveness, and the device was ultimately approved.

FDA Offered Some Training on the Least Burdensome Requirements, and Evaluates its Training for Effectiveness

FDA Offered Some Early Least Burdensome Training at Limited Times, and Has Incorporated Related Information in Broader Training

FDA officials indicated that training specific to the least burdensome requirements was held in the years following the enactment of FDAMA in 1997. FDA was unable to provide records of that training, including its content. However, officials told us that the training was specific to the least burdensome requirements and offered from 1997 through 1999. FDA officials said the agency offered other presentations in subsequent years that they said covered similar least burdensome topics.
For example, the agency provided slides from a presentation created in 2000 that gave an overview of FDA's implementation of the requirements. Although FDA officials told us this least burdensome specific training was not offered after 1999, they identified various other trainings that they said incorporated the least burdensome concept. For example, a 2005 presentation on clinical trial design included multiple slides on least burdensome requirements, and specifically stated that a course objective is to "understand how least burdensome principles apply." Least burdensome requirements are also mentioned in other training materials where they may not be the focus—for example, one slide of a presentation on biomarkers included a mention of least burdensome requirements. Officials also identified the training program for new reviewers that FDA implemented in 2011 as a source of training on least burdensome principles. Specifically, the Reviewer Certification Program is a training curriculum that FDA has required most new device reviewers to complete since 2011. The training curriculum covers a wide variety of courses on topics related to a reviewer's responsibilities. While none of these courses is specific to the least burdensome requirements, there are courses covering related topics. For example, there is one course on technical writing that includes FDA's guidance on developing deficiencies with least burdensome principles. Five other courses on different topics mention either the least burdensome requirements or related principles, such as a course on FDA's legislative history that included a slide identifying the least burdensome statutory provisions as an element of FDAMA, though the slide did not explain the least burdensome requirements or provide additional context.
Of the 490 staff assigned to review PMAs and 510(k)s, FDA indicated that as of the end of calendar year 2016, 335 had completed the Reviewer Certification Program, 150 started working on premarket submissions prior to the beginning of 2011, and the remaining 5 individuals did not complete the training for varying reasons. In response to the 21st Century Cures Act, enacted in December 2016, FDA is providing mandatory online training specific to the least burdensome requirements. FDA indicated that the training focuses on key behaviors that reflect the least burdensome approaches as documented in updated guidance that FDA issued in September 2017. FDA officials told us that, as of October 31, 2017, 91 percent of CDRH staff had received the new least burdensome specific training. In addition to the online training, FDA plans other activities, such as follow-up office-level briefings to address questions or concerns and an introductory podcast from the CDRH director. In addition to providing this training to current employees, FDA plans to incorporate least burdensome requirement training into new employee orientation and the Reviewer Certification Program, and plans to include ongoing support and promotion of least burdensome principles through a center working group on the least burdensome requirements. In addition to course-based training, FDA officials told us that least burdensome concepts are conveyed to reviewers through mentoring. Officials explained that much of the training on the least burdensome requirements occurs through mentoring and conversations with supervisors, and that those encounters are not documented. 
FDA Is Implementing Evaluations of All Training Courses for Medical Device Review Staff, including Courses that Address the Least Burdensome Requirements

While FDA has not had processes in place to evaluate its medical device training, it is implementing such processes for all training, including courses related to the least burdensome requirements. In its June 2014 report, the contractor performing the independent evaluation noted that CDRH did not have mechanisms in place to measure the quality and effectiveness of its training programs. The report noted that FDA should identify metrics and incorporate methods to better assess review process training satisfaction, learning, and staff behavior changes. FDA officials explained that while they had used customer reaction evaluations for training for at least 24 years, they began evaluating training participant learning with the Reviewer Certification Program in 2010. FDA is in the process of implementing a training evaluation model, which includes various levels of evaluation, from assessing participant response to the training to evaluating its impact on the agency. As of 2017, FDA reported it was evaluating training programs to determine participant learning and preparing to evaluate whether that learning changed participant behavior. Officials told us they anticipate beginning to conduct evaluations that assess agency impact in fiscal year 2018, and they plan to have the model completely implemented for all training by fiscal year 2020. FDA currently evaluates its Reviewer Certification Program to determine participant learning, and though the least burdensome requirements are not specifically addressed in the Reviewer Certification Program evaluation materials FDA provided to us, the materials did include questions on topics related to least burdensome requirements.
In addition to its current training evaluation plan, FDA is also required by the 21st Century Cures Act to conduct an audit of the training and its effectiveness in implementing the least burdensome requirements. Specifically, the training audit is to be conducted by the ombudsman responsible for premarket reviews, identified by FDA as the CDRH Ombudsman. According to a draft plan, FDA plans to conduct training evaluations, perform a process review of 510(k) and PMA documentation to assess reviewer compliance with FDA procedures, and seek feedback from industry on its experience with the premarket review process and how the least burdensome requirements are applied. Officials indicated that criteria are still under development and that they hoped to have them further developed in the first quarter of 2018, with the authorizing legislation requiring completion of the audit by June 2018, 18 months after enactment of the law.

FDA is Taking Steps that May Improve Its Requests for Additional Information Overall, but Has Not Fully Evaluated Its Implementation of the Least Burdensome Requirements

FDA Is Implementing Processes to Improve the Consistency and Clarity of Its Requests for Additional Information during Medical Device Reviews

Some stakeholders and others have raised concerns about the consistency and clarity of FDA's requests for additional information during medical device reviews. For the past 17 years, FDA has required reviewers to only request information that is necessary to make a PMA determination of "reasonable assurance of safety and effectiveness" or a 510(k) determination of "substantial equivalence" in their review of a submission.
Representatives of one medical device industry organization noted the high percentages of medical device submissions that involve a letter, and some of their member companies have said that FDA reviewers may request additional information as a result of intellectual curiosity rather than a "need to know." In addition, the independent assessment's 2014 report, funded by FDA as part of MDUFA III, found inconsistent decision-making among FDA review staff throughout various stages of the review process, including additional information requests. While the 2014 report did not address least burdensome requirements explicitly, it examined related processes. For example, the report found a lack of clarity regarding FDA reviewer thresholds for triggering deficiency letters. The report recommended that FDA develop criteria and establish mechanisms to improve consistency in decision-making throughout the review process. To address problems identified during the independent assessment, FDA is implementing several initiatives to improve center processes. FDA officials told us that, in anticipation of MDUFA IV, they recognized a need for a dedicated quality management infrastructure. In 2014, FDA established a Quality Management Unit to improve center processes, which they said would include those related to the least burdensome requirements. The unit completed a framework that outlined its vision and mission and established organizational objectives, such as developing a document control system, providing training, and conducting quality assessments, audits, and management reviews. In addition, FDA officials told us that starting in October 2017, FDA planned to fulfill its MDUFA IV commitments to improve the clarity and consistency of its deficiency letters and AI letters after releasing updated guidance.
In September 2017, FDA published guidance reflecting the commitments under MDUFA IV that all deficiency letters and AI letters include a statement indicating the specific basis for any cited deficiencies. According to FDA officials, this new approach will help ensure that the letters more consistently ground requests for information in the specific reason that FDA is requesting the information from the sponsor. For example, FDA may cite a law, final rule, or specific scientific issue as the basis for its request, rather than providing a more general statement of the request’s relevance. According to industry representatives, in the past, FDA reviewers have, at times, asked for additional information without including justification, and may have requested additional information as a result of intellectual curiosity rather than a “need to know.” The representatives stated that this new policy may better ensure the reviewers apply the least burdensome approach to their review. The updated guidance also explains that all deficiency letters and AI letters will undergo supervisory review prior to issuance to ensure that the information requested is relevant to a marketing authorization decision, all four elements of the deficiency are included, deficiencies are prioritized from most to least significant, and each deficiency is appropriate to include in light of the totality of all deficiencies. Officials told us that while supervisory concurrence was previously needed, under the new guidance, supervisors are now expected to review for certain criteria. For example, in the past, supervisors may have considered whether four-part harmony was addressed in each deficiency letter, but under the updated guidance this is now an expected practice. Officials said this will increase the extent to which deficiency letters are consistently constructed. 
In the MDUFA IV commitment letter, FDA agreed to base all deficiency letters and AI letters on a complete review of the submission and include all deficiencies. Therefore, FDA officials told us that any deficiencies identified following that letter would generally be limited to issues raised as a result of new information. For example, if FDA asked for information on biocompatibility testing, FDA will first review that information, and based on that review may ask for new information. In that instance, the information responding to the initial deficiency is new information. FDA officials said that past letters should also have included all deficiencies, but this may have been done inconsistently. To further standardize its process for reviewing medical device submissions and developing requests for additional information, FDA is developing and implementing smart templates. FDA officials told us that these templates guide device reviewers through a standardized process for each submission. For example, they help reviewers identify the types of information necessary and include prewritten deficiency letters that have been approved by internal experts. FDA has had a smart template in place for the 510(k) process since 2013, according to FDA reports. FDA indicated that the template is already required for certain offices and divisions within CDRH, and plans for full adoption in the future. FDA officials told us that the agency also developed templates for de novo premarket submissions, which are currently available for voluntary use and will likely be mandatory in fiscal year 2018. Officials told us they plan to hire a person to develop a template to guide PMA reviews, which will likely take most of 2018. They told us the use of the smart template for PMAs will likely become mandatory for all reviewers in 2019.
In addition to improving the consistency of deficiency letters, FDA officials said the information generated from the templates could be used to track deficiencies and requests for additional information, as well as provide information on the number and type of deficiencies in the letters. FDA officials told us that the plans for database and back-end analytical capabilities using information from the smart templates were less certain and dependent on available resources, and they pointed out that the information technology infrastructure can present unforeseen challenges.

FDA Has Not Developed Metrics to Evaluate Implementation of the Least Burdensome Requirements, and While a New Audit Process Could Aid Oversight, Its Scope Is Still Unclear

FDA has not established performance metrics that would allow it to evaluate its implementation of the least burdensome provisions. FDA officials told us that the agency does not track concerns related to the least burdensome requirements, such as by examining dispute data to identify those that may be related. According to FDA's 2002 guidance, the agency was in the process of developing tools to be used by both agency staff and its stakeholders to periodically assess the implementation of the least burdensome requirements. The FDA guidance identified a need for additional tools to accurately assess the agency's incorporation of the least burdensome principles into its various regulatory activities and to assess the impact of the least burdensome approach on expediting the development of new medical technologies. Agency officials told us FDA had not developed these tools, but was now in the process of making other tools available. For example, they cited the development of the smart templates that will guide reviewers as they evaluate medical device submissions and generate deficiency letters.
Officials noted that, given the scientific nature of the inquiry, and because least burdensome is a general principle, developing a metric specific to the least burdensome requirements is a challenge. Nonetheless, FDA officials have noted that they are attempting to identify surrogate measures that can provide an indication that the reviewer considered the least burdensome requirements when making a request. According to federal standards for internal control, performance metrics are important for management to have relevant, reliable, and timely information available for management decision-making and external reporting purposes. Without such a metric, FDA may be asking medical device sponsors to provide information unnecessarily or in less efficient ways that are not in compliance with the requirement to use the least burdensome approach to medical device reviews. FDA is in the process of developing an audit program that could provide it with information on its implementation of the least burdensome requirements. FDA has committed to conducting annual quality audits, which will be led by CDRH's Quality Management Unit. Accordingly, FDA plans to identify, with industry input, areas to audit at least once per year. Initially, the agency has agreed to complete an audit of deficiency letters and pre-submissions by the end of fiscal year 2020. As of August 2017, FDA was still planning the deficiency letters audit, and developing its methodology and identifying audit outcomes. FDA officials told us the agency plans to finalize a deficiency letters audit plan by the spring of 2018 and begin data collection by early summer of 2018. Officials explained that the audit will focus on processes—for example, the audit will not examine the scientific content of deficiency letters but will instead focus on whether CDRH has followed existing policies and procedures surrounding deficiency letters.
In addition, the Quality Management Unit was still in the process of hiring most of its staff. As of August 2017, FDA officials told us the unit had 6 staff reporting to an Associate Director, and CDRH plans to gradually hire 20 more staff by 2020, starting once MDUFA IV funds are available beginning in October 2017. In addition to these more specific efforts, FDA also plans to continue its overall evaluation of the medical device review process. The 2016 independent assessment resulting from MDUFA III broadly evaluated FDA's device review process, and although it mentioned least burdensome requirements only briefly, it addressed a number of related elements, including the quality of the review process and staff training. Under MDUFA IV, FDA committed to another independent assessment in two phases: (1) an evaluation of FDA's implementation of the corrective action plan FDA developed in response to the MDUFA III assessment and (2) an evaluation of FDA's premarket device review program to identify efficiencies that should be realized as a result of the process improvements and investments under MDUFA III and IV, among other things. As with the prior assessment, the new assessment will likely examine processes related to the least burdensome requirements, though the extent to which it will address the requirements is not yet known. Agency officials told us that FDA has committed to hiring a contractor by the end of December 2017.

Conclusions

FDA must balance the need to obtain sufficient data to determine the safety and effectiveness of medical devices under review, with the potential for undue burden and approval delays if unnecessary data is requested. Assuring that the agency uses the least burdensome method to complete its review helps to ensure it is able to make decisions about medical device approval in a timely way.
While FDA implemented guidance and training related to the least burdensome requirements following the passage of FDAMA in 1997, it has taken few steps to develop performance metrics to evaluate the extent to which reviewers are using a least burdensome approach when reviewing medical device submissions. Recently, FDA implemented several changes that have the potential to improve its oversight of the least burdensome requirements and the clarity with which reviewers communicate the need for additional information. While planned audits of FDA's medical device review process have the potential to provide the agency with evaluation tools through which to assess performance, these audits are still early in their development and the extent to which they will allow FDA to assess implementation of the least burdensome requirements is unclear. A complete and thorough assessment will be important for the agency to assure itself and external stakeholders that its reviews adhere to the least burdensome principles and requirements and thus are appropriately balanced.

Recommendation for Executive Action

We are making the following recommendation to FDA: The Commissioner of FDA should develop performance metrics and use them to evaluate the implementation of the least burdensome requirements, such as during its planned audits of medical device deficiency letters. (Recommendation 1)

Agency Comments

We provided a draft of this report to HHS. HHS concurred with our recommendation and provided written comments, which are reprinted in appendix I. In its written comments, HHS agreed that appropriate implementation of the least burdensome requirements is essential to FDA's evaluation of its PMA and 510(k) medical device submissions, and agreed that it is important for FDA to evaluate how successfully it is implementing the requirements.
HHS also reiterated FDA’s commitment to the least burdensome principles and provided an overview of its related efforts, several of which were noted in our draft report. HHS noted its concern that our draft report did not sufficiently capture all of FDA’s efforts. While HHS cited FDA’s efforts related to improving the science underlying its regulatory decisions, which could reduce burden on medical device sponsors, our review focused on the steps involved in FDA’s review process. In this regard, HHS concurred with our recommendation that it develop performance metrics and use them to evaluate the implementation of the least burdensome requirements, such as during its planned audits of medical device deficiency letters. In response to this recommendation, HHS indicated that FDA intends to assess how it follows least burdensome requirements as part of these audits. We continue to encourage FDA to develop the evaluation tools necessary to ensure it conducts a complete and thorough assessment of its implementation of the least burdensome requirements. In addition to these general comments, HHS provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
Appendix I: Comments from the Department of Health and Human Services

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, William Hadley (Assistant Director), Matthew Byer (Analyst-in-Charge), Luke Baron, and William Garrard made key contributions to this report. Also contributing were Sam Amrhein and Jennifer Rudisill.
Why GAO Did This Study

Determining that a new medical device is safe and effective is a substantial investment of time and resources for the sponsor and FDA, the agency that regulates medical devices. FDA relies on the device sponsor to provide supporting data at the time of its original submission, and the agency can request additional information during the review. The Federal Food, Drug, and Cosmetic Act, as amended, requires that when FDA requests additional information from sponsors, the agency consider the least burdensome means of evaluating a medical device. GAO was asked to provide information on FDA's implementation of the least burdensome requirements in its medical device review process. This report (1) describes FDA's requests for additional information and sponsor disagreements, (2) describes its least burdensome training efforts, and (3) describes FDA actions to improve its requests for additional information and examines the extent to which it has evaluated its implementation of the least burdensome requirements. GAO reviewed FDA documents and guidance and interviewed agency officials. GAO also interviewed officials from four relevant medical device manufacturing associations.

What GAO Found

Since 1997, the Food and Drug Administration (FDA) has been required to consider the least burdensome means of evaluating certain types of medical devices for marketing, including when requesting that sponsors—generally manufacturers—seeking to market their medical devices provide information in addition to what was provided in their submissions. GAO found that, from 2001 through 2016, FDA issued letters asking sponsors to provide such information for a majority of the more than 62,000 medical device submissions that it reviewed. Sponsors may formally disagree with the request on the grounds that it is not the least burdensome method needed for FDA to review the submission.
For example, sponsors appealed FDA decisions internally to agency management 63 times from 2013 through 2016, and of these, FDA identified 33 such appeals in which the sponsor raised an issue related to least burdensome requirements. FDA agreed or partially agreed with the sponsors in 11 of these appeals. Medical device industry representatives noted that these appeals may not fully represent the number of such disagreements, because applicants are generally concerned that an appeal would damage their relationship with FDA and potentially negatively affect future device applications. FDA provided staff training that was specifically dedicated to addressing the least burdensome requirements from 1997 through 1999. Since 1999, FDA has not offered a course dedicated to the least burdensome requirements, but has incorporated related concepts into other training programs, such as in a training mandatory for most new reviewers. In response to the 21st Century Cures Act, enacted in 2016, FDA is providing new least burdensome training to all relevant employees, and said that 80 percent had received the training as of October 2, 2017. Although FDA did not specifically evaluate the effectiveness of past training on least burdensome requirements, it is implementing an evaluation of all device-related training, including the new least burdensome training. It also plans to complete a required audit of training on least burdensome requirements by June 2018. FDA has not specifically evaluated implementation of the least burdensome requirements. However, in response to broader evaluations, such as an independent assessment of its medical device review process, the agency is in the early stages of developing processes that may improve its requests for additional information. For example, FDA plans to conduct an audit of letters requesting additional information. FDA is developing the audit's methodology and expects it will assess whether the agency's process was followed. 
However, due to their early stage, the extent to which these efforts will allow FDA to assess implementation of the least burdensome requirements is unclear. In 2002, FDA stated that it planned to periodically assess the implementation of the least burdensome principles, and federal internal control standards identify the importance of performance metrics for such assessments. However, the agency has yet to develop performance metrics to do so. Until such measures are developed and used, FDA will not be able to evaluate whether it effectively and consistently applies a least burdensome approach in its medical device reviews.

What GAO Recommends

GAO is making one recommendation that FDA develop and use performance metrics to evaluate the implementation of the least burdensome requirements. The Department of Health and Human Services agreed with GAO's recommendation.
Background

Colombia is the world's largest producer of cocaine and also continues to be a source of heroin and marijuana. After declining most years since 2000, coca cultivation and cocaine production increased again in Colombia beginning in 2013, hitting record highs in 2017 (see fig. 1). Much of the cocaine produced in Colombia is consumed in the United States. According to the Drug Enforcement Administration's (DEA) Cocaine Signature Program, over 90 percent of cocaine found in the continental United States is of Colombian origin. In 2017, the DEA reported that cocaine use in the United States was increasing concurrent with production increases in Colombia. Although the United States continues to be the primary market for Colombian cocaine, Colombian drug traffickers are also expanding into other markets around the world, according to DEA and Office of National Drug Control Policy (ONDCP) reporting. U.S., Colombian, and UN officials, as well as third-party researchers, have cited a variety of reasons for the increases in coca cultivation and cocaine production in Colombia, including: the Colombian government's decision to end aerial eradication of coca crops in October 2015; coca growers' movement, prior to the end of aerial spraying, to areas off limits to spraying, along with other countermeasures employed by growers; the Colombian government's desire to avoid social protests in coca-growing regions controlled by the FARC during peace negotiations; the FARC's drive to induce farmers to plant additional coca in areas under their control in anticipation that the Colombian government would provide subsidies for farmers to switch from coca to licit crops after the conclusion of the peace agreement; declining Colombian and U.S.
funding for counternarcotics efforts; decreases in the price of gold, which diminished criminal organizations' revenues from illegal gold mining and led to a redirection of resources back to cocaine production to make up losses; and increased demand for cocaine in the United States and other parts of the world.

Armed Conflicts and Drug Trafficking in Colombia over Time

Colombia has historically been one of Latin America's more enduring democracies and successful economies. However, Colombia has also faced more than 50 years of internal conflict and has long been a leading drug producing and trafficking nation. See figure 2 for a map showing Colombia's geographic location relative to the United States. For several decades, Colombia has struggled with a multi-sided conflict involving both left-wing guerilla groups and right-wing paramilitary groups (see sidebar for background information on Colombia). Since its start, the conflict has resulted in at least 220,000 deaths and the displacement of more than 5 million Colombians, according to the Congressional Research Service. The FARC, a Marxist insurgent organization formed in 1964, was the largest of the left-wing groups. At its peak, the FARC had an estimated 16,000 to 20,000 fighters, according to the Congressional Research Service. In an effort to unseat the Colombian government, the FARC, along with the second largest left-wing guerilla group in Colombia, the National Liberation Army (known by its Spanish acronym ELN), undertook a widespread campaign of murder, kidnapping, extortion, and other human rights violations, according to various sources. Over time, the two groups also became increasingly involved in drug trafficking to fund their operations. In response to the violence caused by the FARC and the ELN, a number of wealthy Colombians, including drug traffickers, began to hire armed paramilitary groups for protection during the 1980s.
According to DOD officials, initially these groups were formed legally as self-defense groups; however, they turned to crime and drug trafficking over time. Many of these groups subsequently united under an umbrella organization called the United Self-Defense Forces of Colombia (known by the Spanish acronym AUC). According to reporting from various U.S. government and third-party sources, the AUC murdered individuals suspected of supporting the FARC and ELN and engaged in direct combat with these groups. From 2003 through 2006, the AUC formally dissolved after negotiating a peace agreement with the administration of former Colombian President Álvaro Uribe. However, some former AUC members did not demobilize and instead joined criminal groups (known as criminal bands, or Bacrim) that continue to be involved in drug trafficking today, according to reporting from various U.S. government and third-party sources. Throughout the 1980s and early 1990s, Peru and Bolivia were the leading global producers of cocaine, but enforcement efforts in those two countries increasingly pushed cocaine production into Colombia. By the late 1990s, Colombia had emerged as the leading source of cocaine in the world. Over time, the landscape of drug trafficking in Colombia has changed. In the 1980s and early 1990s, major drug trafficking organizations such as the Medellín and Cali cartels controlled cocaine trafficking in Colombia. These cartels were vertically integrated organizations with a clearly defined leadership that controlled all aspects of cocaine production and distribution in their respective geographic areas. By the late 1990s, however, Colombian authorities, with the support of the United States, had largely succeeded in dismantling these two cartels. Over time, drug trafficking in Colombia fragmented and is now generally characterized by more loosely organized networks that are less integrated and have less well-defined leadership structures.
Major organizations currently involved in drug trafficking include the Clan del Golfo, the largest of the Bacrim; FARC dissident groups that have not accepted the peace agreement; and the ELN.

Peace Agreement with the FARC

In August 2016, the Colombian government and the FARC reached a peace agreement ending more than five decades of conflict. The peace agreement was the culmination of four years of formal negotiations. In October 2016, however, Colombian voters narrowly defeated a referendum on whether to accept the peace agreement. After the voters rejected the agreement, the Colombian government and the FARC worked to make certain revisions and signed a second accord. The Colombian Congress then approved the revised agreement in November 2016. The Colombian government has estimated that it will cost $43 billion to implement the peace agreement over 15 years, but State has estimated that the cost will be between $80 billion and $100 billion. The peace agreement included agreements on six major topics: land and rural development, the FARC's political participation after disarmament, illicit crops and drug trafficking, victims' reparations and transitional justice, the demobilization and disarmament of the FARC and a bilateral cease-fire, and verification to enact the programs outlined in the final accord. The agreement on illicit crops and drug trafficking addresses a range of issues related to coca eradication and crop substitution, public health and drug consumption, and drug production and trafficking. As part of the agreement, the FARC committed to work to help resolve the problem of illegal drugs in the country and to end any involvement in the illegal drug business. Among other things, the Colombian government pledged to prioritize voluntary drug-crop substitution programs over forced eradication, and where forced eradication was necessary, to prioritize manual removal over aerial spraying.
Other portions of the peace agreement also relate to counternarcotics efforts. For example, the section on land and rural development discusses benefits for farmers who undertake substitution of illicit crops. Colombian authorities and the FARC have completed several actions called for under the peace agreement but progress on implementation has been uneven. Since the finalization of the peace agreement in November 2016, over 7,000 FARC members have disarmed and surrendered almost 9,000 weapons, about 1.7 million rounds of ammunition, and about 42 tons of explosive material, according to State reporting. The Colombian Congress has also passed implementing legislation, including a bill establishing the Special Jurisdiction for Peace to support transitional justice efforts. However, a significant number of FARC members have refused to demobilize and key FARC leaders have been accused of violating the peace agreement through continued involvement in the drug trade and other illegal activities. According to State reporting, the FARC has also failed to offer information on drug trafficking routes, contacts, and financing, as it had committed to do under the accord. The peace agreement continues to be controversial in Colombia with many Colombians believing that it does not do enough to hold the FARC accountable for the violence and crimes that it committed. Colombian President Iván Duque, who assumed control of the government in August 2018, has stated his intention to revise some elements of the agreement. Currently, the Colombian government is also engaged in peace negotiations with the ELN that were formally launched in February 2017. Although the talks continue, the negotiations have experienced several setbacks. 
For example, the two parties had agreed to a temporary ceasefire that lasted from September 4, 2017, to January 9, 2018, but they did not reach an agreement to extend the ceasefire and the ELN launched a number of attacks shortly thereafter, including a police station bombing in the city of Barranquilla that killed 7 police officers and injured more than 40.

Plan Colombia and U.S. Counternarcotics Efforts in Colombia

Colombia and the United States have a longstanding partnership on counternarcotics efforts. Since the early 1970s, the U.S. government has provided assistance to the Colombian government to support its efforts to combat illicit drug production and trafficking activities. However, by the late 1990s, Colombia had become the world's leading producer of cocaine and a major source of heroin used in the United States. In response, the Colombian government, with U.S. support, launched Plan Colombia in 1999 with the goals of (1) reducing the production of illicit drugs and (2) improving security in the country by reclaiming areas of the country held by illegal groups. U.S. assistance to Colombia over the years has focused on three key approaches for reducing the supply of illegal drugs produced in the country and trafficked to the United States: eradication, interdiction, and alternative development. Eradication. Eradication seeks to reduce coca cultivation by destroying coca plants through either the aerial spraying of herbicides on the crops, or the manual spraying of herbicides or uprooting of the plants by personnel on the ground. Interdiction. Interdiction seeks to disrupt or dismantle drug trafficking organizations by investigating the operations of drug traffickers; seizing drugs and their precursors, cash, and other assets; destroying processing facilities; blocking air, sea, and land drug trafficking routes; and arresting and prosecuting drug traffickers. Alternative development.
Alternative development seeks to discourage involvement in the drug trade by providing people with viable, legal livelihoods through training, technical assistance, and other support, as well as by working with the private sector, civil society, and the Colombian authorities to create the necessary conditions in communities for legal economies to develop. Under the general guidance of the White House's ONDCP and the leadership of State at the country level, a number of U.S. agencies have a role in supporting counternarcotics efforts in these three key areas. ONDCP is, among other things, responsible for developing the National Drug Control Strategy and coordinating the implementation of this strategy. It does not implement any counternarcotics programs in Colombia. State is the lead agency responsible for setting U.S. counternarcotics policy in Colombia, consistent with the overall direction provided by the National Drug Control Strategy. The ambassador at Embassy Bogotá has ultimate authority over all U.S. agencies operating in the country. State is the agency primarily responsible for supporting eradication efforts in Colombia. A number of agencies are responsible for supporting various aspects of interdiction efforts in Colombia, including: State; DOD; DOJ's Criminal Division, DEA, and Federal Bureau of Investigation (FBI); and DHS's Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and U.S. Coast Guard. USAID is the agency primarily responsible for supporting alternative development efforts in Colombia. The U.S. government provided about $5 billion in foreign assistance for Colombia in fiscal years 2008 through 2017. State and USAID provide foreign assistance to Colombia for a range of programs and activities that extend beyond counternarcotics efforts. State and USAID provide this assistance to Colombia through several accounts.
State funds the largest share of its programs in Colombia through the International Narcotics Control and Law Enforcement account. It also provides funding to Colombia through the Foreign Military Financing; International Military Education and Training; and Nonproliferation, Anti-terrorism, Demining, and Related Programs accounts. USAID implements its programs in Colombia using funding from the Economic Support Fund account. DOD provides counternarcotics funding to Colombia through its Central Transfer Account. Figures 3 and 4 show U.S. assistance to Colombia in fiscal years 2008 through 2017. The U.S. government's efforts in Colombia are part of its broader efforts to combat drug trafficking throughout the Western Hemisphere, including in other partner countries and in the "transit zone," which is the area from South America through the Caribbean Sea and the eastern Pacific Ocean used to transport illicit drugs to the United States. In addition, the U.S. government combats the illegal drug problem through a range of domestic law enforcement efforts and programs designed to reduce illicit drug use. These various efforts are not addressed in this report.

Recent Developments in U.S.-Colombia Efforts on Counternarcotics

The Obama administration supported the peace process in Colombia and announced a new initiative in February 2016, known as Peace Colombia. Peace Colombia was designed to establish a new framework for cooperation between the two countries and refocus U.S. assistance to support peace agreement implementation. The administration called for an initial $450 million in funding for Peace Colombia in fiscal year 2017. Under Peace Colombia, U.S.
assistance was to be focused in three areas: consolidating and expanding progress on security and counternarcotics while reintegrating the FARC into society; expanding the Colombian state's presence and institutions to strengthen the rule of law and rural economies, especially in former conflict areas; and promoting justice and other essential services for conflict victims. More recently, the Trump administration has raised questions about Colombia's commitment to meeting its counternarcotics obligations. As required by law, the Trump administration in September 2017 issued a memorandum documenting the annual presidential determination on countries that are major drug transit or illicit drug producing countries. As in years past, the memorandum identified Colombia as one of these countries. The memorandum also stated that the administration had seriously considered designating Colombia as a country that had demonstrably failed to adhere to its obligations under international counternarcotics agreements due to the extraordinary growth of coca cultivation and cocaine production over the past three years. According to the memorandum, the administration ultimately decided not to take this step because of the close partnership between the U.S. government and the Colombian National Police and Armed Forces. However, the memorandum underscored that the administration would keep the designation as an option and expected Colombia to make significant progress in reducing coca cultivation and cocaine production. As part of the U.S.-Colombia High Level Dialogue in March 2018, the U.S. and Colombian governments pledged to expand counternarcotics cooperation over the next 5 years with the goal of reducing Colombia's estimated coca cultivation and cocaine production by 50 percent by the end of 2023.

U.S. Agencies Conducted Performance Monitoring of Counternarcotics Activities in Colombia, but Have Not Evaluated Key Efforts and State Has Not Undertaken a Comprehensive Review of the Overall Approach

U.S. Agencies Conducted Performance Monitoring of Counternarcotics Activities in Colombia, but Have Not Evaluated the Effectiveness of Eradication and Interdiction Efforts

U.S. agencies have conducted ongoing performance monitoring of various counternarcotics activities in Colombia, but State, DOD, DHS, and DOJ have not conducted evaluations of U.S. eradication and interdiction programs. Performance monitoring is the ongoing review and reporting of program accomplishments, particularly progress toward pre-established goals. It is typically conducted by program or agency management. Performance monitoring focuses on whether a program has achieved its objectives, expressed as measurable performance standards. In contrast, program evaluations are individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working. They are often conducted by experts, either from inside or outside the agency, who are not working on the program. Program evaluations typically examine a broader range of information on program performance and its context than is feasible to monitor on an ongoing basis.

Performance Monitoring and Reporting

U.S. agencies have conducted a range of performance monitoring efforts to assess their counternarcotics activities in Colombia. While some monitoring is performed through interagency mechanisms, most monitoring is done at the individual agency level. Interagency monitoring mechanisms include ONDCP reports, such as its annual Budget and Performance Summary and its annual National Drug Control Strategy Performance Reporting System Report, and Embassy Bogotá's annual Performance Plan and Reports.
ONDCP’s Budget and Performance Summaries and Performance Reporting System Reports are not Colombia-specific and discuss a range of domestic and international counternarcotics efforts. These reports, however, generally provide some limited performance information related to Colombia. For example, ONDCP’s Budget and Performance Summaries include information, by agency, on their counternarcotics budget requests as well as some selected performance reporting. As part of these documents, State and USAID have reported data on certain performance metrics specific to Colombia, such as the number of hectares of drug crops eradicated in U.S. government-assisted areas of Colombia and the number of rural households benefitting from U.S. government interventions in Colombia. In addition, the reports contain narrative related to the results of counternarcotics activities in Colombia. At the country level, Embassy Bogotá’s annual Performance Plan and Report provides information on the embassy’s progress in meeting its goals and objectives, including those related to counternarcotics. As part of these reports, the embassy provides data on results for the fiscal year, relative to established targets, for a range of counternarcotics performance metrics. These Performance Plan and Reports primarily focus on State and USAID activities, rather than describing the results of all U.S. agencies’ activities in Colombia. At the agency level, State, USAID, DOD, DOJ, and DHS and their components have, to varying degrees, conducted performance monitoring of their counternarcotics activities in Colombia. Examples of key performance monitoring activities, by agency, are described below: State: State, with input from other U.S. agencies involved in counternarcotics efforts, produces its annual International Narcotics Control Strategy Report, which is global in scope, but includes specific country reports, including on Colombia. 
These reports describe key steps that Colombia has taken over the year to combat drug trafficking and how U.S. assistance has supported these efforts. In addition, State's Bureau of International Narcotics and Law Enforcement Affairs (INL) has developed a Colombia country plan for 2017 through 2021 that presents results data for a number of counternarcotics-related indicators, such as the percent of coca hectares eradicated against Colombia's national goals and the number of hours flown by the Colombian National Police in support of counternarcotics and other related missions. The INL country plan also establishes performance targets for future years. State/INL implementing partners are also responsible for producing periodic reports that describe their progress in meeting pre-established performance targets for their projects. USAID: USAID has developed a Colombia-specific information system, the Monitoring and Evaluation Clearinghouse (Monitor), that provides the agency with information about the status and progress of all USAID alternative development projects in Colombia. For example, Monitor tracks metrics such as the number of hectares of licit crops supported by USAID, the number of beneficiaries from improved infrastructure services, and the number of households who have obtained documented property rights as a result of USAID assistance. USAID implementing partners are also responsible for producing periodic reports that describe their progress in meeting pre-established performance targets for their projects. DOD: U.S. Southern Command (SOUTHCOM) completes annual Program Objective Memorandums (POM) related to each of its program areas as part of the DOD budget process. Each POM is tied to a particular project code. For example, SOUTHCOM has a project code for counternarcotics support in South America and a project code for the Regional Helicopter Training Center in Colombia.
As part of each POM, SOUTHCOM reports on the activities supported under the project code and reports on results relative to pre-established performance targets. Examples of metrics tracked in the POMs include the rate of operational readiness of Colombian maritime patrol aircraft and the hours a day the Colombian Air Force was able to provide video surveillance to support operations. DOJ: DEA has developed its annual Threat Enforcement Planning Process, which guides the agency’s operational strategy and serves as a means of monitoring performance. Under this three-stage process, DEA offices, including the one in Colombia, first identify threats within their area of responsibility that link to agency-wide threats that DEA has established. The offices then develop mitigation/enforcement plans for each identified threat, and, subsequently, produce impact statements that summarize the outcomes and results related to each mitigation/enforcement plan. For example, the impact statements describe key arrests that have been made and major seizure operations. In addition, the FBI office in Colombia produces an annual summary of statistics to monitor the accomplishments of the Colombian vetted unit that it supports, including the number of arrests, the amount of drugs seized, and the commercial value of assets seized. DHS: ICE and CBP stated that they do not conduct performance monitoring activities specific to Colombia. Coast Guard officials stated that the Coast Guard compiles information that it provides to its Colombian counterparts on a recurring basis, including data on the number of Colombian-flagged ship interdictions it has completed and the number of Colombian nationals apprehended. All three agencies contribute to DHS annual performance reports. These annual reports include some performance information related to DHS counternarcotics efforts more broadly, such as ICE’s work combatting transnational criminal organizations that may operate in Colombia. 
State, USAID, DOD, DOJ, and DHS use a range of metrics to assist them in both formally and informally monitoring the performance of eradication, interdiction, and alternative development efforts in Colombia. These agencies produce some of these data, while in other cases they use data from other sources including implementing partners, the Colombian government, and the UN. Examples of key metrics include: Eradication: hectares of coca cultivated, hectares of coca eradicated, and coca replanting rates. Interdiction: amounts of cocaine seized, the number of cocaine processing laboratories destroyed, the number of drug trafficking organizations disrupted or dismantled, and the number of drug trafficking suspects extradited to the United States. Alternative Development: the number of households involved in coca cultivation, increases in the value of sales of legal products in areas involved in narcotics production, the number of households receiving land titles as a result of U.S. assistance, and the value of agricultural and rural loans generated through U.S. assistance. State, USAID, DEA, and DOD have undertaken efforts to further strengthen their performance monitoring efforts in recent years. For example, in September 2017, State/INL signed a new monitoring and evaluation contract for the Western Hemisphere which is designed to strengthen its existing performance measures and identify new metrics to better assess performance. According to a State official, the contractor is currently working with both State officials in Washington D.C. and at embassies in the Western Hemisphere to, among other things, develop a list of performance measures that link to INL’s goals for the region and that involve data that can be feasibly and consistently collected across the countries in the region. 
USAID officials noted that USAID has recently been collecting data on contextual indicators and developing baseline studies to help inform new alternative development programs it is implementing in Colombia. According to USAID officials, these baseline studies have collected information related to productivity, exports, income, multidimensional poverty, citizen security, social capital, and trust in institutions. In addition, as noted above, DEA established its new Threat Enforcement Planning Process in fiscal year 2017. According to DEA, this process is designed to, among other things, allow the agency to move beyond basic output measures and better assess how its offices, including the office in Colombia, are doing in combatting priority threats within their area of responsibility. Finally, according to a DOD official, DOD's Office of Counternarcotics and Global Threats is developing guidance for assessing the counternarcotics programs it supports around the world to help the office's leadership make better informed decisions about how to best use DOD's limited counternarcotics resources. Although performance metrics are useful for monitoring progress and can help inform evaluations of effectiveness, they are generally not intended to assess effectiveness directly. For example, U.S. agencies track data on the amount of cocaine seized in Colombia, but a number of U.S. officials noted that it is unclear to what extent increases in cocaine seizures in recent years are due to the increased effectiveness of interdiction efforts or more cocaine being present in Colombia to seize. As another example, some agencies track data on the number of Colombian officials receiving counternarcotics training through their programs, but these data are not designed to capture what, if any, improvements in counternarcotics outcomes are achieved as a result of that training.

Evaluations

USAID has completed independent evaluations of several of its alternative development programs.
However, other agencies have not formally evaluated the long-term effectiveness of their eradication or interdiction activities.

Alternative Development: Since 2008, USAID has conducted a number of formal, independent evaluations of its alternative development programs in Colombia. Some of these evaluations have examined USAID’s alternative development efforts more broadly, while others have focused on the effectiveness of specific programs such as USAID’s Consolidation and Enhanced Livelihood Initiative, More Investment in Sustainable Alternative Development, and Areas for Municipal-Level Alternative Development programs. Many of these evaluations were done through a 5-year monitoring and evaluation contract that USAID awarded to Management Systems International in May 2013.

Eradication and Interdiction: State, DOD, DEA, FBI, ICE, CBP, and the U.S. Coast Guard all reported that they had not conducted any formal, systematic evaluations to assess the effectiveness of U.S.-supported eradication and interdiction efforts in Colombia since 2008. State documents indicate that State was considering an evaluation of its counternarcotics activities in Colombia as early as 2015; however, State officials noted that these plans were delayed due to competing priorities. State reported that it now plans to award a contract in 2019 for an evaluation of its counternarcotics activities. According to State officials, a scope of work for the evaluation has not been completed, so the details of the planned evaluation have not yet been decided, including whether the evaluation would assess activities in the long term and which activities it would include. State’s November 2017 evaluation policy highlights the importance of evaluations in achieving U.S. foreign policy outcomes and ensuring accountability.
The policy establishes a requirement that all large programs, such as State’s counternarcotics program in Colombia, be evaluated at least once in the program’s lifetime, or once every 5 years for ongoing programs. According to State officials, evaluations can be challenging to design and potentially entail significant investments of resources and time; however, State’s evaluation policy reaffirms the importance and feasibility of conducting evaluations, including impact evaluations. Without evaluations of U.S.-supported eradication and interdiction efforts in Colombia, U.S. agencies do not have complete information regarding the long-term effectiveness of these efforts in reducing coca cultivation and the cocaine supply. As the lead agency responsible for setting U.S. counternarcotics policy in Colombia, State is best positioned to lead an evaluation of U.S.-supported eradication and interdiction efforts in the country. However, such an evaluation would benefit from the involvement and expertise of other U.S. agencies engaged in counternarcotics activities in Colombia. State’s evaluation policy encourages such evaluations that are undertaken collaboratively with other U.S. agencies.

State Has Not Conducted a Comprehensive Review of the Overall U.S. Counternarcotics Approach in Colombia to Determine the Most Effective Combination of Activities

The U.S. counternarcotics approach in Colombia has historically entailed a combination of eradication, interdiction, and alternative development programs. Although the U.S. government implements a wide range of counternarcotics efforts in Colombia and can point to various results for these activities, State and other U.S. agencies have no systematic way to determine whether the current combination of activities is the most effective approach to achieve U.S. goals.
According to DEA officials, measuring the effectiveness of overall U.S.-counternarcotics efforts in Colombia has been particularly challenging in recent years due to historical, transformational events which have taken place in that country. Various U.S. officials acknowledged that the substantial increases in coca cultivation and cocaine production as well as the other significant changes that have occurred in Colombia in recent years, including the end of aerial eradication, the conclusion of the peace agreement with the FARC, and decreases in Colombian and U.S. counternarcotics budgets, necessitate that the U.S. government review its approach to counternarcotics efforts and consider adjustments to reflect these developments. In addition, the U.S. government’s approach is affected by Colombia’s counternarcotics priorities and key initiatives, which continue to evolve. For example, in September 2015, Colombia announced a new counternarcotics strategy which specified three priority areas: rural development programs to reduce drug cultivation; law enforcement efforts to dismantle drug trafficking organizations; and public health approaches to reduce domestic drug consumption. Colombia has also launched an initiative to establish Strategic Operational Centers (known by the Spanish acronym CEO) in key regions of the country. These CEOs are designed to bring together the Colombian military, police, and civilian agencies to focus on a whole-of-government approach to improving security, establishing a state presence, and fighting drug trafficking in these areas. The Colombian government has now launched CEOs in three areas—Tumaco, San José del Guaviare, and Caucasia—and plans to open a fourth, in Cúcuta, later in 2018 (see fig. 5). It is also considering adding a fifth CEO in the Caquetá/Putumayo region. In addition, the Colombian government, with support from the U.S. embassy, launched the Antioquia Free from Coca initiative in December 2017. 
The initiative seeks to bring together the Colombian national government, local governments in Antioquia, the armed forces, the private sector, and the U.S. government to create a new model for development and counternarcotics in the Antioquia region. State has reported that the U.S. government plans to shift substantial resources to the initiative. Various U.S. officials stated that finding an appropriate combination of eradication, interdiction, and alternative development assistance is critical to achieve the U.S. objective of reducing cocaine production and trafficking in Colombia in this new context. To find this combination, U.S. officials stated that there are a range of considerations to weigh. For example, U.S. officials stated that they must consider to what extent to prioritize pursuing short-term reductions in coca cultivation and cocaine supplies versus longer-term efforts to address the underlying causes of the drug problem in Colombia, such as the widespread lack of legal economic opportunities in rural areas of the country. In addition, U.S. officials and documents from various agencies noted that counternarcotics efforts must be properly sequenced and coordinated to be effective. DEA analysis, for example, found that farmers are unlikely to permanently abandon coca farming without sustained and concurrent eradication and alternative development. Although U.S. officials noted the importance of finding an appropriate combination of eradication, interdiction, and alternative development assistance, they acknowledged that they have not undertaken a comprehensive review of their counternarcotics approach in Colombia that considers the benefits and limitations of these efforts to determine whether the U.S. government’s current combination of activities is the most effective approach to achieve U.S. counternarcotics goals.
Officials from State and other agencies noted that such reviews are challenging to do systematically and noted that they must generally rely on imperfect metrics, such as the amount of coca being cultivated, to determine if their counternarcotics approach is working. In addition, most U.S. efforts at measuring performance and evaluating results are focused at the individual agency level, rather than designed to determine what combination of U.S. counternarcotics activities will best achieve U.S. objectives of reducing the cocaine supply. Federal internal control standards state that agency management should use quality information to achieve the entity’s objectives. Among other things, the standards note agency management should use quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. Without a comprehensive review of the U.S. counternarcotics approach in Colombia that considers the combination of eradication, interdiction, and alternative development efforts, the U.S. government lacks important information on how to most effectively combat drug trafficking in a changing environment in Colombia. To undertake such a review, the U.S. government might determine the need to collect additional information and conduct further evaluations of its counternarcotics programs, but it could also potentially use a range of existing information on what is known about the effectiveness of eradication, interdiction, and alternative development programs. State, as the lead agency at the embassy in Colombia, would be best positioned to guide an interagency effort to undertake such a review. 
Available Evidence Indicates that U.S.-Supported Eradication Efforts in Colombia May Not Be an Effective Long-Term Supply-Reduction Approach

State’s INL has supported Colombian aerial and manual eradication efforts over time, but these efforts have declined after the Government of Colombia’s decision to end aerial eradication and several years of limited or no funding for manual eradication driven by decreased Colombian government demand for this assistance, according to State officials. Despite these declines, officials from several U.S. agencies reported eradication should be a vital component of U.S. counternarcotics efforts in Colombia. Nevertheless, U.S. officials and the studies and experts in our review identified a number of factors which may reduce the effectiveness of eradication as a supply reduction approach, including the strategies coca growers use to mitigate the effects of eradication and potential adverse effects it may have on Colombian citizens. Additionally, third-party research suggests that eradication efforts do not substantially affect the long-term supply of cocaine and are potentially costly.

Since 2008 U.S.-Supported Eradication Efforts Have Declined after Changes in Colombian Counternarcotics Policy; However, U.S. Officials Believe Eradication Is an Important Component of an Overall Counternarcotics Approach

INL has provided financial assistance and operational support for Colombian eradication efforts in three key areas: aerial eradication, manual eradication, and aviation support. Overall eradication efforts, however, have declined over time, and the Colombian government stopped aerial eradication altogether in 2015.

Aerial Eradication: Until 2015, INL directed the largest portion of its eradication assistance toward the Colombian National Police aerial eradication program. The program’s goal was to reduce coca cultivation and harvests by spraying coca fields with glyphosate.
INL helped fund, plan, and operate the aerial eradication program. It provided the pilots, planning, aircraft, logistics, maintenance, and fuel to operate the program’s two spray bases. Funding for the aerial eradication program declined over time, from $66.2 million in fiscal year 2008 to $12.7 million in fiscal year 2014. From October 2013 to October 2014, aerial eradication was temporarily suspended by the U.S. Embassy in Bogotá after two pilots were shot down during eradication operations. In May 2015, the Colombian government stopped the aerial eradication program amid concerns that glyphosate had a negative impact on public health. Cessation of aerial spraying took effect in October 2015.

Manual Eradication: According to State officials, U.S. assistance shifted from aerial to manual eradication after the 2015 ban on aerial spraying. Manual eradication involves using mobile eradication teams, which are transported into coca fields to manually remove and destroy coca plants (see fig. 6). These teams are made up of Colombian police and military personnel, as well as civilian contractors, according to INL officials. Initially, manual eradication was used in concert with aerial spraying in an effort to combat replanting in areas already subjected to aerial spraying, but with the ban on aerial spraying, manual eradication became a stand-alone approach. INL provides a variety of support for manual eradication teams, including operational support and equipment, such as demining equipment and brush cutters. Additionally, INL helps identify and fund the development of new technologies that might improve the effectiveness of manual eradication, such as armored ground spraying vehicles, which protect manual eradicators from the danger of improvised explosive devices and landmines. INL funding for manual eradication varied during fiscal years 2008 through 2016; INL provided no funding in four of those fiscal years and a high of $9.5 million in fiscal year 2014.
INL funding for manual eradication increased substantially in fiscal year 2017, to $26 million. According to State, decreases in the budget for manual eradication were driven by reduced Colombian government demand for this assistance.

INL Aviation Support: INL has also provided aviation support to the Colombian National Police and the Colombian Army to assist counternarcotics efforts. According to INL, these aviation programs provide critical assistance for a number of counternarcotics efforts, such as eradication, but also for interdiction and security operations. Because Colombia is a vast country with rugged terrain, many rivers, and poor roads, State officials indicated air mobility is critical for effective counternarcotics operations.

Colombian National Police (CNP): INL provides logistical, operational, maintenance, safety, and training assistance to the CNP’s aviation brigade in support of its counternarcotics operations. The CNP aviation program accounts for roughly one-third of INL’s Colombia budget, averaging about $55 million annually in fiscal years 2008 through 2017. Under this program, INL helped the CNP procure its air fleet. Currently, the INL aviation program supports a total of 56 CNP aircraft, of which 52 are owned by the U.S. government (see fig. 7). Additionally, INL’s aviation program provides assistance for the CNP to build maintenance facilities, develop training plans, implement safety programs, and procure equipment, such as flight recorders and communications gear. As of 2018, INL also plans to provide $21 million over 4 years for the CNP’s aerial imagery collection and data analysis system, which Colombian authorities use to map coca fields and plan eradication missions.

Colombian Army: INL provided aviation support for the Colombian Army prior to Colombia’s takeover of the army aviation program in 2012—a process known as nationalization. INL provided the Colombian Army’s aviation program nearly $150 million from fiscal years 2008 through 2011.
According to INL, this support contributed significantly to the Colombian Army’s aerial eradication efforts as well as efforts to dismantle armed drug trafficking organizations, such as the FARC and ELN. In 2008, the Colombian government began to nationalize 62 aircraft from INL and, in 2012, assumed full responsibility for their maintenance and operations.

Multiple Factors May Limit the Effectiveness of Eradication Efforts or Undermine Their Viability as a Long-Term Supply-Reduction Strategy

U.S. and UN officials as well as third-party studies we reviewed identified a number of factors that reduced the effectiveness of eradication efforts at an operational level. We previously reported that U.S.-funded counternarcotics efforts, which focused on aerial spraying, did not achieve Plan Colombia’s overarching goal to reduce the cultivation, production, and distribution of cocaine by 50 percent, in part because coca farmers responded with a series of effective countermeasures. Separately, State also indicated that aerial eradication was becoming less effective prior to the end of the spraying program in 2015. Similarly, U.S. and UN officials noted factors that had a negative impact on the effectiveness of manual eradication efforts.

Crop displacement: U.S. officials, UN reports, and third-party researchers have noted that eradication has caused coca cultivation to move, or be displaced, to smaller plots and areas “off-limits” to aerial spraying, such as national parks, territories near international borders, and protected indigenous and Afro-Colombian areas, thus diminishing its impact on supply reduction. According to INL, at the beginning of the 2000s, plots of 10 or more hectares were commonplace and easy to identify and spray, but by 2016 the average plot size was less than a hectare, making aerial spraying more difficult.
In addition, coca cultivation in areas off-limits to aerial spraying, such as national parks, border areas, and indigenous and Afro-Colombian areas, has increased substantially. According to one State cable, in 2014 over 70 percent of the nationwide increases in cultivation occurred in these areas. The Congressional Research Service reported that cultivation increased in these areas by 50 percent between 2014 and 2015. Likewise, a UN report noted that between 2015 and 2016, coca cultivation had increased by 32 percent in indigenous areas, by 45 percent in Afro-Colombian areas, and by 27 percent in national parks. According to the UN report, these areas account for only 0.04 percent of Colombia’s national territory but are the source of 32 percent of the nation’s coca cultivation. Four of the studies in our literature review also concluded that eradication led to crop displacement. One study indicated that the displacement of coca cultivation tends to disproportionately affect vulnerable populations by concentrating crime in the areas where these populations tend to live. The study concluded that coca cultivation has increased in some of the most socially and environmentally vulnerable areas of Colombia, including disadvantaged rural communities, and has tended to further marginalize those Afro-Colombian communities that experienced dramatic increases in coca cultivation.

Countermeasures: Coca growers and drug traffickers can employ countermeasures, such as using mines and improvised explosive devices, which create serious risks for manual eradication teams. For example, 4 manual eradicators were killed and 39 wounded during manual eradication operations in 2017, according to one State cable. Likewise, aerial spraying operations were also targeted by attacks. For example, in 2013 two pilots were shot down while conducting aerial eradication operations. This attack led to a temporary halt in aerial spraying operations.
One State cable reported that from 1996 to October 2015 at least five spray aircraft were downed by hostile fire, resulting in the deaths of four pilots.

Replanting, pruning, and other mitigation efforts: Coca growers have developed techniques, including replanting and pruning, which can mitigate damage to coca plants and reduce the effectiveness of eradication efforts. According to a 2017 UN report, 80 percent of the coca fields detected in 2016 had previously been subjected to aerial or manual eradication efforts. One DEA report confirmed that 25 percent of coca growers in the region it studied in 2008 had replanted their crops after spraying. Colombian government data showed that from 2014 through 2016 areas subjected to manual eradication were replanted between 25 and 37 percent of the time. In addition, coca growers can prune bushes immediately after spraying to help counter the effects of glyphosate and allow the plants to yield fresh leaves that may be harvested. According to data provided by State, from 2006 through 2012 areas subjected to aerial spraying were reconstituted—replanted or pruned—on average about 56 percent of the time. Growers may also intersperse coca plants alongside licit agricultural crops because aerial eradication efforts tend to be focused on large coca fields and attempt to avoid licit crops.

Coca growers’ economic incentives: According to a DEA study, in 2007, nearly 60 percent of coca growers were ready to abandon coca farming. Likewise, a 2009 DEA study stated that sustained aerial eradication efforts, lasting 5 to 8 years, would force coca growers to give up coca farming. DEA noted that the Putumayo region, which it used as a model in the study, was “nearing a tipping point” in which coca cultivation would be abandoned after aerial eradication caused 60-80 percent losses in coca fields.
However, aerial eradication efforts were sustained at or above 100,000 hectares from 2002 to 2012 before decreasing and eventually ending altogether in 2015. By 2016, coca cultivation had increased substantially, and DEA data showed that only 5 percent of growers were ready to abandon coca. Similarly, a UN coca cultivation survey found that the number of households involved in the coca trade increased steadily from roughly 60,000 in 2008 to over 100,000 in 2016. DEA officials we interviewed agreed that it now appears that coca growers do not “abandon” coca farming during periods of sustained eradication, but rather they temporarily stop farming coca until it is economically advantageous to resume. State officials noted that they anticipate increases in eradication levels under President Duque and expect that increased eradication may alter coca farmers’ analysis of the benefits and risks of growing coca. One expert we interviewed was skeptical that eradication could ever raise the economic costs of growing coca high enough to dissuade farmers from growing coca because they find it easy to grow and are very responsive to price changes. The expert stated that the revenues from growing coca are often significantly higher than the costs of growing the plant. Given such high potential profits, there is typically an economic incentive to grow the crop.

A number of other factors may also undermine the viability of eradication as a supply reduction strategy more broadly:

Protests against eradication: According to a 2017 State cable, rural protesters use blockading tactics at eradication sites to disrupt manual eradication efforts. This cable reported that protesters blocked 428 manual eradication operations in 2016, and 152 operations in 2017. In addition, these protests against manual eradication efforts have led to violent confrontations between local populations and Colombian security forces.
One such confrontation in Nariño—Colombia’s top coca-producing region—led to the deaths of a number of civilian protesters.

Destruction of licit agriculture: Local civil society organizations in Colombia maintain that glyphosate spraying drifts with the wind and kills legal crops near eradicated areas, negatively affecting local populations. State maintains that its eradication programs had a minimal impact on licit crops; however, those whose licit crops had been harmed as a result of aerial spraying were eligible for compensation. According to State, from 2001 through the end of the aerial spraying program in October 2015, Colombians registered nearly 18,000 complaints of accidental spraying of licit crops. Of these complaints, State noted that only 3 percent were found to have merit and were therefore eligible for compensation.

Debate over adverse health effects: The debate over the purported negative health effects of glyphosate has made aerial spraying efforts in Colombia controversial. In March 2015, the World Health Organization’s International Agency for Research on Cancer classified glyphosate as “probably carcinogenic to humans.” However, two U.S. agencies dispute these findings. From 2002 through 2011, State formally certified to Congress that the glyphosate spraying program posed no unreasonable health risks to humans. The Environmental Protection Agency has also generally concluded that glyphosate exposure from aerial eradication in Colombia has not been linked to adverse health effects. Several other studies we reviewed discussed the potential health effects of glyphosate.

International disputes: In 2013, Ecuador and Colombia agreed to a settlement of a case Ecuador filed in 2008 before the International Court of Justice in The Hague seeking a prohibition of the use of herbicides in aerial eradication near the Colombia-Ecuador border as well as indemnification for claimed damages associated with Colombia’s eradication program.
Ecuador received $15 million in compensation from Colombia for alleged health and environmental harms, and Colombia agreed to a 10-kilometer exclusion zone on the border with Ecuador in which it would not conduct aerial spraying.

Third-Party Research Suggests that Eradication Efforts Do Not Have a Substantial Long-Term Effect on Reducing Coca Cultivation and Cocaine Supply and Are Potentially Costly

Third-party research we reviewed suggests that eradication efforts do not have a substantial long-term effect on coca cultivation and cocaine supply and are potentially costly. Eight studies in our literature review had key findings on the effectiveness of eradication efforts in Colombia. All eight studies raised questions regarding the effectiveness of eradication as a strategy to substantially reduce coca cultivation and the cocaine supply. Five studies also generally concluded it is a potentially costly supply reduction approach. Five studies found that eradication has only a small effect on reducing coca cultivation, but the estimates for reductions varied by study. For example, one study found that a 1 percent increase in the risk of eradication decreases coca cultivation by roughly 0.44 hectares. Another study estimated that a 1 percent increase in the risk of eradication would decrease the total area in Colombia under cultivation by 0.66 percent. Likewise, a third study found that as a result of displacement, the supply reduction effects of spraying were so small that an additional 33 hectares must be sprayed every year in order to reduce coca cultivation by 1 hectare. Three other studies concluded eradication efforts had no net effect on reducing the coca or cocaine supplies, or had led to increased coca cultivation. For example, one of these studies reported that a 1 percent increase in eradication actually increases the amount of land under coca cultivation by 1 percent as growers try to compensate for losses.
The author noted that municipal-level data on eradication and coca cultivation trends were broadly compatible with these findings. In addition, the author presented data from 2006 through 2012 which indicated a 38 percent decrease in eradication levels as well as a 38 percent decrease in coca cultivation. Another study concluded that the effects of eradication were nullified by coca growers’ ability to rapidly relocate their operations to other areas. Several of the studies we reviewed examined aspects of the costliness of eradication efforts, but relied on cost data that were limited or that we were unable to substantiate. Three studies generally concluded that eradication is costly in absolute terms, while two others suggested that eradication appears to be more costly than other alternative counternarcotics efforts. For example, one study suggested removing 1 kilogram of cocaine from retail markets through eradication would cost the United States roughly $940,000. Another study estimated that an additional $100,000 spent on eradication would reduce coca cultivation in Colombia by 1.5 percent.

U.S.-Supported Interdiction Efforts Seized a Substantial Amount of Cocaine and Disrupted Drug Trafficking Organizations in Colombia, but the Long-Term Effect of These Efforts Is Unclear

U.S. agencies have provided a variety of support for Colombian interdiction efforts, including capacity building and operational support. These efforts resulted in the seizure of a substantial amount of cocaine and precursor chemicals and disrupted drug trafficking organizations by arresting these organizations’ leadership and seizing valuable assets. However, the long-term effects of these efforts are unclear due to continued increases in cocaine production and the emergence of new drug traffickers. U.S. and Colombian officials identified a number of ways to improve the effectiveness of interdiction.
A limited number of third-party studies on interdiction suggest mixed findings but indicate interdiction may be more effective than eradication because it targets drug trafficking at a more costly point in the production and distribution process.

U.S. Agencies Have Provided Capacity Building and Operational Assistance to Support Colombian Interdiction Efforts

Building Partner Capacity: U.S. agencies provided a range of assistance that has improved Colombian authorities’ capacity to conduct interdiction efforts. U.S. and Colombian officials noted that because of these efforts, Colombian security services were able to provide counternarcotics training and support to other countries in the region. Key examples of U.S. efforts to build partner capacity included:

Counternarcotics forces: U.S. agencies provided a broad range of assistance to improve the effectiveness of Colombian counternarcotics forces. For example, INL funded the creation and training of the Colombian Army’s counternarcotics brigades—military units responsible for seizing cocaine, destroying cocaine processing labs, and securing eradication sites. In addition, DOD and INL provided training and expertise to the Colombian National Police’s Junglas unit, which is a highly trained special operations unit used to detect and destroy cocaine labs and capture high-value drug traffickers. INL funded the construction of the Colombian National Police training facility where security services from Colombia and neighboring countries receive counternarcotics-related training. Likewise, DOD provided a broad array of programs designed to improve the operational capabilities of Colombian security forces. For instance, the agency’s Regional Pilot Training School helps provide helicopters, training, and certification for up to 50 Colombian and 24 international pilots annually.
According to DOD, the goal of this program is to increase the Colombian capacity to rapidly deploy to remote areas of the country to conduct counternarcotics operations.

Equipment procurement and maintenance: U.S. agencies provided assistance to procure and maintain equipment for their Colombian counterparts. The largest such effort is INL’s Aviation Program, which procured and maintained a fleet of aircraft for the Colombian National Police. The aviation program allows the police to conduct interdiction operations in areas of the country which are difficult to access, according to INL officials. INL also procured and maintained other equipment, including communications equipment and night vision goggles. In addition, DOD provided equipment to vetted Colombian security forces with counternarcotics missions, including patrol boats; protective gear; and specialized navigation, communications, and surveillance equipment.

Judicial support: For over 20 years, DOJ’s Office of Overseas Prosecutorial Development Assistance and Training (OPDAT) has provided a range of assistance to help reform the Colombian judicial system and improve its ability to prosecute crimes. According to OPDAT officials, this assistance is critical for the successful prosecution of drug cases. The office assisted with prosecutor training, case-based mentoring, case efficiency, litigation skills, and plea bargaining. Likewise, DOJ’s International Criminal Investigative Training Assistance Program (ICITAP) provided training, including curriculum development, seminars, and on-the-job training, to improve the Colombian government’s ability to conduct criminal investigations and develop forensics capabilities, according to agency officials. ICITAP’s training efforts in Colombia focused, in part, on reforming Colombia’s legal framework as well as fostering cooperation and organizational development between the country’s judicial and law enforcement agencies.

Investigative support: A number of U.S.
agencies worked closely with Colombian vetted units to support these agencies' missions abroad. For example, DEA provided funding, training, and vetting for Colombian Sensitive Investigative Units (SIUs). According to DEA officials, DEA conducted bilateral counternarcotics and money laundering investigations with these Colombian vetted units. Similarly, the FBI and ICE both work with Colombian vetted units and provide investigative support for counternarcotics investigations. For example, the FBI worked closely with its vetted unit in Colombia to investigate transnational criminal organizations. FBI officials told us that these cases were almost exclusively related to drug trafficking organizations in Colombia.

Operational Support: U.S. agencies also provided operational support for Colombian interdiction operations. Key examples of U.S. operational support include:

Targeting, extraditions, and prosecutions: A number of U.S. offices supported the targeting, extradition, and prosecution of Colombian drug traffickers. For example, DOJ's Organized Crime Drug Enforcement Task Forces (OCDETF) developed the Consolidated Priority Organization Target (CPOT) list in order to identify and target the leaders of major drug trafficking organizations. Likewise, the FBI targets drug trafficking leadership as well as facilitators—those who support drug traffickers financially or politically—by investigating money laundering and corruption cases, according to agency officials. In addition, DOJ officials partnered with the Colombian government to extradite drug traffickers to the U.S. for trial. According to DEA officials, extradition is one of the most effective investigative tools against drug trafficking in Colombia. DEA officials noted that the vast majority of persons charged and extradited to the United States from Colombia have been convicted.
Additionally, an FBI official stated that the extradition of high level drug traffickers has the potential to degrade the operational ability of their organizations because these extradited leaders may cooperate with U.S. courts to get reduced sentences. This cooperation can then create leads for new cases and provide new information and witnesses for active cases, further undermining the operations of criminal organizations.

Detection and monitoring: Several U.S. agencies supported Colombian interdiction efforts by assisting with the detection and monitoring of drug trafficking operations. For example:

According to DEA, during bilateral investigations the agency and its Colombian counterparts utilized a number of investigative tools to detect and monitor drug trafficking networks and money laundering organizations with the ultimate goal of prosecution in Colombia and the United States. DEA stated that information gleaned from these efforts is shared and used to coordinate maritime interdiction operations that can lead to additional evidence for prosecution. One DEA official stated that these detection and monitoring efforts yield more leads than U.S. and Colombian security forces have the resources to interdict.

Beginning in 2003, INL supported the CNP's Air Bridge Denial program. This program was developed to help improve the Colombian government's ability to detect and intercept airplanes smuggling drugs into and out of Colombia. In 2003, Colombia documented 60 to 70 flights per month transporting drugs into and out of the country. Today, Colombia reports detecting no more than two or three flights per year, according to State. The program, including all aircraft, hangars, equipment, and facilities, was nationalized in January 2010. Following nationalization, INL's Air Bridge Denial budget decreased from roughly $20 million in 2004 to $1 million in 2012 and, at present, INL no longer funds the program.
DOD also provided intelligence, surveillance, and reconnaissance (ISR) in support of interdiction operations. According to officials, the agency uses its ScanEagle unmanned aerial vehicles to help Colombian security forces track maritime vessels moving drugs on Colombia's Pacific coast. For example, DOD provided various task forces, which include Colombian police, army, navy, marines, and coast guard units, with ISR support via ScanEagle systems, including imaging and video to support interdiction efforts along the Pacific coast of Colombia, according to DOD officials.

Monitoring Data Show Interdiction Efforts Seized a Substantial Amount of Cocaine and Disrupted Drug Trafficking Organizations; However, the Long-Term Effects of These Efforts Are Unclear

U.S., UN, and Colombian monitoring data indicate that interdiction disrupts drug trafficking operations by seizing large amounts of cocaine, precursor chemicals, and other assets used by drug trafficking organizations. According to UN data, the amount of cocaine seized in Colombia increased from about 198 metric tons in 2008 to an estimated 435 metric tons in 2017 (see fig. 9). These totals accounted for an estimated 42 percent and 32 percent of the cocaine produced in those years, respectively. From 2008 through 2017, the total financial impact of cocaine seizures on drug trafficking organizations exceeded $4 billion. Several factors may explain these increases in the amount of cocaine seized. Several U.S. officials noted that increases in cocaine production mean there is more cocaine to be seized in transit, while another official stated that seizure increases without corresponding increases in resources indicate that interdiction efforts may be becoming increasingly effective over time. In addition, interdiction efforts have led to the destruction of numerous drug processing facilities. From 2008 through 2017, nearly 30,000 coca paste and cocaine processing laboratories were destroyed, according to Colombian data.
Since 2008, Colombian security forces have also seized over 30 million gallons of the liquid precursor chemicals necessary for the production of cocaine, as well as 8,087 vehicles, 1,083 boats, 18 airplanes, 65,778 firearms, over 13 million rounds of ammunition, and 34,800 pieces of communications equipment associated with drug trafficking operations, according to Colombian government data. In addition, since 2008, ICE estimates that Colombian authorities have seized over $35 million in bulk cash and hundreds of millions of dollars in drug-related contraband at Colombian ports. U.S.-supported interdiction efforts have contributed to the disruption and dismantling of a number of drug trafficking organizations and the arrest and extradition of high value drug trafficking suspects on the CPOT and priority target organization (PTO) lists (see table 1). For example, as part of “Operation Agamemnon II,” which sought to disrupt and dismantle the Clan del Golfo, Colombian forces killed the group's second-in-command, Roberto Vargas Gutierrez, in August 2017; captured its third-in-command, Luis Orlando Padierna Pena, in November 2017; and killed or captured many other senior and mid-level leaders. Likewise, in April 2017, Colombian forces arrested Edison Washington Prado Álava in Tumaco and seized $25 million in cash. Prado Álava, known as the “Pablo Escobar of Ecuador,” had issued death threats against police, prosecutors, and judges in both Ecuador and Colombia. In February 2018, with the cooperation of Colombian authorities, Prado Álava was extradited to the United States, where he is facing prosecution. From fiscal years 2008 through 2017, OCDETF reported that Colombian forces arrested 31 Colombians, disrupted 273 Colombian organizations, and dismantled 94 others linked to the CPOT list. From calendar years 2008 through 2017, DEA reported that U.S.
and Colombian authorities had also disrupted 83 PTOs and dismantled 201 others, with an estimated 5,444 PTO-related arrests. DEA officials stated that nearly all of these extraditions were for drug-related crimes and that these individuals were all “high value” targets. However, the long-term effect of these efforts is unclear. While seizures remove roughly 40 percent of the total cocaine supply each year on average, increases in cocaine production mean that the net supply of cocaine destined for the United States has increased despite the substantial amount of cocaine seized. U.S. officials also stated that while arrests and extraditions remove drug trafficking leaders, which may temporarily degrade the operational capabilities of drug trafficking organizations, the lucrative nature of the cocaine market ensures that others will replace these individuals. U.S. and Colombian sources identified several other challenges that may impact the effectiveness of interdiction efforts. One FBI official stated that as investigative efforts fragment drug trafficking organizations, it becomes more challenging to target organizations and dismantle their command and control structures. One of the studies we reviewed suggested that as these organizations are dismantled, local populations may be affected by pronounced cycles of violence as competing armed groups vie for control of drug trafficking operations in areas formerly under the control of a now-dismantled criminal organization. Sources also stated that extraditions may become less of a deterrent to drug traffickers over time as they and their legal counsels become more familiar with the U.S. judicial system and are able to effectively plead to lesser charges and get lighter sentences.

U.S. and Colombian Officials Identified Opportunities to Improve the Effectiveness of Interdiction Efforts

U.S.
and Colombian officials identified a number of ways to improve interdiction efforts and increase the effectiveness of these operations:

Maritime/riverine boat program: State and DOD have already provided assistance to strengthen Colombia's maritime and riverine interdiction capabilities, but INL officials noted that they were exploring options to provide further support for riverine interdiction efforts given the significance of Colombia's waterways in drug trafficking. A number of U.S. and Colombian officials, including officials from INL, the Colombian Navy, and the U.S. and Colombian Coast Guards, stated that an enhanced “boat program,” similar to INL's aviation program, would improve the country's ability to interdict cocaine shipments traveling along Colombian maritime routes. Officials noted that features of such a program should include the procurement, supply, and maintenance of boats capable of tracking down the “go fast” boats used by traffickers. These vessels cost $1 million each and provide a significant return on investment, according to Colombian authorities. One such boat, for example, was able to interdict 12 tons of cocaine (valued at $60 million) in 1 year in Tumaco, Colombian officials stated.

Port of entry/container interdiction operations: DHS officials from ICE and CBP have supported Colombian efforts to seize drugs and other contraband at air and sea ports of entry. However, one ICE official stated that container smuggling is the “Achilles' heel” of cocaine interdiction efforts in Colombia. According to this official, Colombian ports vary in their willingness to cooperate with U.S. agencies in order to combat drug smuggling. For example, the official stated that one port provides a lot of information to ICE and CBP officials because it participates in CBP's Container Security Initiative, while another port is known for corruption and smuggling.
This official believes that hundreds of tons of cocaine leave via containers carrying licit merchandise and reported that, for example, one interdiction operation targeting the port in Cartagena had resulted in the seizure of 35 tons of cocaine since 2015. According to ICE officials, assigning more personnel to Colombian air and seaports would greatly increase seizures of cocaine and contraband.

Drug trafficking organization funding/finance: A number of U.S. and Colombian sources suggested that interdiction efforts can be improved by targeting drug trafficking organizations' assets and revenues. Because money is at the top of the value chain, disrupting cash flow before it can return to drug traffickers would have a significant impact on their ability to profit from criminal activities and continue to fund their operations, according to several U.S. and Colombian sources. One expert we spoke to indicated that interdiction efforts could be improved by targeting money laundering, bulk cash shipments, and contraband smuggling. According to one FBI official, drug trafficking organizations cannot operate without financing, and as a result it is important to focus on money laundering cases. Similarly, one ICE official described bulk cash shipments and money laundering as the “fuel” that drives drug trafficking and believes it is critical to devote more resources to this area. DEA stated that in addition to its bilateral investigations with Colombia, the agency also conducts simultaneous money laundering investigations, often resulting in seizures of assets and bulk cash. However, INL officials stated that Colombian asset forfeiture laws have made it difficult for authorities to seize and liquidate the assets of drug traffickers.
In 2017, revisions to these laws were passed, making it easier for Colombian officials to liquidate these assets and use these resources to fund further counternarcotics efforts; however, State noted that the revised asset forfeiture process still faces several challenges, including the limited number of judges and the long periods of time needed to adjudicate these cases.

Regional maritime interdiction operations: U.S. and Colombian officials suggested that regional maritime interdiction operations among the U.S., Colombia, and other nations in the transit zone can significantly disrupt drug trafficking operations if sustained over the long term. For example, beginning in March 2017, the U.S. and Colombian navies—along with maritime authorities from Panama, Costa Rica, Mexico, Honduras, Ecuador, Guatemala, and Nicaragua—conducted Operation Orion, a series of coordinated maritime interdiction operations targeting different areas of the transit zone. One of these operations, conducted jointly by Colombia and Panama, seized 2.5 tons of cocaine in 1 month and led to 20 arrests. U.S. Coast Guard officials stated that Operation Orion was a successful, short-term example of how regionally coordinated operations can improve the effectiveness of maritime interdiction and believe that continuous operations of this type would dramatically improve the effectiveness of interdiction efforts overall. U.S. Coast Guard officials also noted that these types of coordination efforts among Colombia and other countries in the region are an important step toward self-sufficiency and away from a reliance on U.S. funding and law enforcement support for maritime operations. However, these officials noted that there are currently not enough resources devoted to interdiction to sustain these types of partnerships in the long term. Colombian Navy officials agreed that countries in the region need to devote more resources to sustain these types of regional efforts.
However, these officials also noted that Colombia has taken some steps, such as developing permanent information sharing agreements with regional partners, to develop these types of relationships.

A Limited Number of Third-Party Studies on Interdiction Have Mixed Findings, but Suggest Potential Effectiveness Relative to Eradication

Third-party research we reviewed had limited findings related to interdiction. While seven of the studies in our literature review discussed aspects of interdiction efforts, four studies had findings related to the effect of these efforts on the cocaine supply. These four studies had mixed findings about the overall effectiveness of interdiction efforts. One study we reviewed found that an increased emphasis on interdiction efforts in Colombia, beginning in 2006, had achieved a substantial reduction in the net supply of cocaine. Another study indicated that increases in the costs to produce cocaine were mainly due to the interdiction of precursor chemicals such as gasoline. However, two other studies concluded that increased cocaine seizures did not have a substantial impact on either the price or the overall supply of cocaine, which has steadily increased since 2013. Several of the seven studies we reviewed suggested that interdiction is more effective or more cost-effective than eradication efforts. Two studies indicated that interdiction policies had a greater impact on the cocaine supply than eradication policies. For example, one study showed that the destruction of cocaine processing labs has a greater impact on cocaine prices than aerial or manual eradication efforts. Two other studies concluded that interdiction was more cost effective than eradication efforts. For example, one study indicated that the cost of removing 1 kilogram of cocaine from retail markets in the United States was $175,000 if resources were devoted to interdiction and $940,000 if resources were devoted to eradication.
However, this study relied on cost estimates that were limited or that we were unable to substantiate. A number of the studies in our literature review and experts we interviewed stated that counternarcotics resources should primarily be devoted to interdiction efforts instead of eradication efforts because they target drug traffickers at the top of the “value chain.” According to these studies and experts, counternarcotics actions are more costly to drug traffickers at this stage of the drug trafficking process. For example, two studies indicated that the destruction of cocaine processing labs is the most effective counternarcotics effort. One study stated that the destruction of these labs is an effective interdiction strategy because these labs add significant value to the final product, cocaine lost at this stage is not easily replaced, and the destruction of labs reduces demand for coca leaves and coca cultivation. This study indicated that for every lab destroyed, coca cultivation decreases by 3 hectares as demand for the leaves falls. Another study indicated that the number of processing laboratories destroyed accounts for 75 percent of the price fluctuation of cocaine.

U.S.-Supported Alternative Development Programs in Colombia Have Achieved Some Positive Results, but Officials and Research Have Noted Some Implementation Challenges

U.S.-supported alternative development programs in Colombia have attained some positive outcomes. USAID evaluations and monitoring data show that alternative development programs have achieved a number of positive results in increasing opportunities to participate in the legal economy in Colombia, but have also faced issues that reduced their effectiveness. U.S. and Colombian officials stated that alternative development programs are important to a long-term counternarcotics strategy, but noted a number of implementation challenges.
Third-party research suggests that alternative development has the potential to reduce coca cultivation if properly implemented.

USAID Has Supported a Range of Alternative Development Programs Designed to Increase Licit Economic Opportunities in Colombia

USAID's alternative development programs in Colombia provide support in a number of key areas, including programs that are intended to: assist in the development of value chains for agricultural products, such as cacao and coffee, or the development of licit businesses; support land formalization efforts, including the issuance of land titles and the development of Colombia's national registry of land ownership (known as a cadaster); increase access to rural finance; strengthen producer associations (see fig. 10); leverage private sector investment to support rural development; provide needed infrastructure to strengthen communities and support legal economies, including roads, schools, electricity, and sanitation; and support civil society organizations and strengthen governance, including efforts to build social capital and increase the presence of the Colombian government in areas affected by conflict. According to USAID, over time, it has broadened the focus of its alternative development efforts to move beyond crop substitution programs and to instead work to transform underdeveloped regions within Colombia and address the underlying issues that drive the economics and culture of drug trafficking. USAID noted that it has also sought to prioritize particular geographic regions, rather than seeking to implement programs throughout the whole country. Table 2 lists examples of alternative development programs that USAID has funded in Colombia over the past 10 years.
USAID, State, and Colombian officials noted that this broader, more comprehensive focus for alternative development is necessary in order to create the conditions that would make legal alternatives to coca cultivation viable in many parts of Colombia. For example, Colombia faces substantial deficiencies in its road network. Without improvements in the road network, many Colombians in rural areas do not have a feasible way of transporting legal crops to markets or accessing basic services. Significant numbers of Colombian farmers also do not possess title to their land, which, among other things, limits their ability to access credit and reduces their incentives to make longer-term investments in legal crops such as cacao, which take years to mature.

USAID Evaluations and Monitoring Data Show that Alternative Development Programs Have Achieved Some Positive Results, but Have Also Faced Issues that Reduced Their Effectiveness

We reviewed seven independent evaluations that USAID has commissioned since 2008. These evaluations reported that USAID alternative development programs have achieved a range of positive results. For example, a 2016 midterm impact evaluation of USAID's Consolidation and Enhanced Livelihood Initiative found, among other things, that an increased number of program beneficiaries reported that their economic situation was good or very good compared to the baseline at the beginning of the project. In addition, the evaluation found that program beneficiaries' sales of supported products had increased significantly and had far exceeded USAID targets. A 2014 post-implementation evaluation of two USAID programs, (1) More Investment in Sustainable Alternative Development and (2) Areas for Municipal-Level Alternative Development, found positive outcomes for some beneficiaries, including success in helping producer associations get their products to market.
However, the evaluations also reported that USAID alternative development programs did not achieve all intended goals and faced certain implementation issues, including problems with project design, program funding not being sustained for adequate periods, and a lack of consistent support from the Colombian government, which was a partner in these programs. For example, an April 2009 evaluation of USAID alternative development efforts under Plan Colombia reported, among other things, that many marketable crops in Colombia, such as cacao or coffee, take several years to grow before they are ready to harvest and produce income for farmers. Thus, farmers need income support during this period as they transition from dependence on coca to legal crops, but, according to the evaluation, USAID and the Colombian government frequently did not provide sufficient income to cover food costs or other expenses, leaving farmers highly vulnerable to resuming coca cultivation. An April 2011 evaluation of USAID's Integrated Governance Response program reported that some funded projects were at a standstill due to delays by Colombian local and regional governments in fulfilling their commitments. USAID, for example, had funded the construction of a cold-storage facility to assist milk producers in one region, but the facility had not been provided with electricity because the municipal government had not sent a building inspector to approve its construction. A February 2017 review of alternative development in Colombia reported that a number of alternative development efforts may require longer time horizons than allowed by most USAID contracts or cooperative agreements. In addition to these evaluations, other USAID assessments have reported that alternative development programs have achieved some positive results.
For example, data from USAID's Monitor system show that USAID projects related to “Inclusive Rural Economic Growth” exceeded their targets for 23 of 44 performance indicators for which results were reported for fiscal year 2017. Similarly, for fiscal year 2017, USAID reported that it exceeded its targets for six of nine performance indicators related to inclusive rural growth that were tracked in Embassy Bogotá's Performance Plan and Report (see table 3). An internal USAID analysis also noted that the agency had been able to increase the ratio of legal crops grown relative to coca in areas where it had funded programs to increase opportunities for such crops. Specifically, USAID reported that in 14 departments where it had funded such programs, the ratio of illegal to USAID-supported legal crops under cultivation had decreased from 302:1 hectares to 13:1 hectares from 2011 to 2016. USAID noted different factors that resulted in three of the nine targets not being met. For example, USAID stated that the target for households with formalized land was not met because the Colombian government eliminated the agency previously responsible for land formalization in December 2015 and created two new agencies in its place. According to USAID, these new agencies did not begin operations until March 2017, which delayed USAID's work with the Colombian government on the project and created uncertainty about the Colombian government's land policy and administration. Data reported by UNODC also provide certain information related to the effectiveness of alternative development efforts in Colombia. UNODC, for example, collects and reports data on the number of households involved in coca production as part of its annual illicit crop cultivation surveys. These data show that the number of households in Colombia involved in coca cultivation increased from 59,328 to 106,900 between 2008 and 2016 (an increase of 80 percent).
Such data indicate that any gains achieved in encouraging Colombians to switch from illegal to legal livelihoods through alternative development programs have been outweighed by other factors driving increased involvement in coca cultivation.

U.S. and Colombian Officials View Alternative Development Programs as Important to a Long-Term Counternarcotics Strategy, but Noted a Number of Implementation Challenges

U.S. and Colombian officials stated that alternative development, and the creation of viable opportunities for Colombians to get access to public services and participate in the legal economy, is important to solving the drug problem in Colombia. However, these officials acknowledged that comprehensive alternative development is a long-term approach that requires significant investment. They also pointed out that large portions of rural Colombia have been marginalized for decades and that the Colombian government will need to make substantial, sustained investments in rural areas to establish the necessary conditions for legal economies to develop. According to USAID officials, USAID data indicate that the regions where USAID has intervened have fared better than the areas where it has not, but the scope and scale of its interventions have not been significant enough to counteract overall coca cultivation and cocaine production trends in the country. U.S. government analysis and officials noted that there are also powerful economic disincentives for farmers to shift from the cultivation of coca to legal crops such as coffee or cacao. According to State analysis, while prices per kilo of cacao and coffee are higher than those of coca, lower investment costs, more frequent harvests, higher yields per hectare, minimal field maintenance costs, and negligible transportation costs make growing coca the more profitable economic choice in most parts of Colombia.
For example, in the Nariño region, State found that growing coca can be up to 14 times more profitable per hectare than cacao, factoring in all costs. DEA analysis has found that the average annual profit accrued by Colombian farmers from a hectare of coca increased by more than 120 percent from 2012 to 2016. In addition, DEA analysis has found that as profitability has increased, the number of coca farmers wanting to stop growing coca has declined substantially. According to USAID documents and officials, a number of other factors have also affected USAID's ability to effectively support alternative development efforts in Colombia, including Colombian policy and legal restrictions, insecure and inaccessible locations, coordination challenges with the Colombian government, the diversity of needs within Colombian communities, and Colombia's current alternative development focus and U.S. legal restrictions.

Colombian policy and legal restrictions. USAID has been limited in its ability to implement alternative development programs in a number of coca cultivating areas due to policy and legal restrictions. For example, according to USAID evaluations and officials, under the Colombian government's previous “zero coca” policy, USAID was prohibited from providing any assistance in an area until it was proved that all coca in the area had been eradicated. As a result, USAID was unable to provide assistance for coca growers to switch to and remain in legal livelihoods. In addition, approximately 8 to 10 percent of coca is grown in national parks, where, according to USAID, under Colombian law, it may not implement any development projects.

Insecure and inaccessible locations. USAID has been limited in its ability to provide assistance in some key coca growing areas of the country due to security concerns and the remote nature of the locations.
According to USAID, the Colombian government has at times prohibited it from operating in “red zones” where there was active, armed conflict. USAID stated that it has also chosen not to fund programs in some regions because it is too dangerous for the agency's implementing partners to safely operate. In addition, USAID noted that some of the areas with the highest concentration of coca are largely inaccessible, making it challenging to implement assistance programs, since many of them have no roads and can only be reached by boat or by foot.

Coordination challenges with the Colombian government. According to USAID officials, USAID has also faced challenges because of the lack of consistent, coordinated support from the Colombian government and difficulties getting Colombian agencies to work together. For example, after the Colombian government announced the National Consolidation Plan in 2009, USAID focused its assistance in 40 of the 58 municipalities that the Colombian government had selected for consolidation. Despite evidence of progress being made in these areas, by 2013 the Colombian government had begun to reduce its support for the policy, according to USAID. USAID stated that impediments to the successful continuation of the plan included, among other things, a lack of political support, disorganization at the top levels of the Colombian government, changes to and the politicization of the Colombian government's administrative entity leading the effort, and challenges executing national budgets flexibly and efficiently at the local level. As a result, USAID stated that it was forced to adapt its efforts in the later years of the plan to focus on working with local partners rather than the national government.

Diversity of needs within Colombian communities.
USAID has faced challenges designing appropriate alternative development programs given the diversity of communities within Colombia that have differing needs in terms of alternative development support. For example, there is a wide range of microclimates throughout Colombia, which can make it challenging to replicate the same types of technical assistance for farming of legal crops in different parts of the country. USAID noted that it works to tailor its alternative development programming to specific regions. For example, USAID reported that it worked to tailor its assistance to meet the needs of an indigenous community in Northern Cauca. USAID was seeking to improve access to finance in the community; however, due to communal ownership of land, the community could not use land as collateral for loans, according to USAID. Thus, USAID stated that it tailored its assistance by setting up a revolving fund managed and administered by the community itself to expand financing for local businesses. U.S. and Colombian officials noted the need for additional information on various communities to know how to best design programs that would work in the different areas. Colombia’s current alternative development focus and U.S. legal restrictions. According to USAID, its efforts to support alternative development in Colombia have also been challenged by the Colombian government’s current program focus. According to USAID, State, and Colombian officials, a central part of the Colombian government’s counternarcotics strategy under the peace accord is to implement a voluntary eradication and crop substitution program. Under the program, in exchange for voluntarily eradicating their coca crops, farmers receive cash assistance and technical support to help them transition to the cultivation of legal crops. However, according to USAID, the Colombian government is implementing the program in conjunction with the FARC. As a result, USAID officials stated that the U.S.
government’s ability to support the program is restricted because the FARC is still designated as a Foreign Terrorist Organization. USAID and State officials also pointed out a range of implementation problems with the program and stated that the plan has had little to no impact on the current coca cultivation trends in Colombia. For example, USAID officials noted that the payment of stipends to farmers has begun before the eradication of their coca has been required or verified. As of April 2018, the Colombian government had signed up approximately 50,000 families for the program, according to State reporting. However, State reported that the Colombian government has publicly acknowledged that the program is lagging in achieving its intended results and was forced to reduce its targets under the program from 50,000 to 22,000 hectares in 2017. Third-Party Research Suggested that Alternative Development Has the Potential to Reduce Coca Cultivation if Properly Implemented, but Noted Limitations Independent research and non-governmental experts we spoke to generally suggested that alternative development programs have the potential to strengthen legal economic activity and encourage communities to shift away from coca cultivation, if properly implemented. Ten studies in our literature review discussed alternative development. Of these 10 studies, 3 included original research that found evidence regarding the potential effectiveness of alternative development programs in Colombia. One study we reviewed found that social investment in infrastructure and human capital could be an effective and complementary strategy for controlling illegal crops. The study found that $5.55 spent in social investment per inhabitant in a given municipality prevented the cultivation of a new hectare of coca.
A different study, looking at land titling efforts in Colombia, found that the formalization of one additional hectare of land for small landholders within a given municipality resulted in a decrease of approximately 1.4 hectares of land allocated to coca cultivation within that municipality. An additional study found that implementing community planning models that involved citizen participation could be effective in encouraging the adoption of alternative development projects and the substitution of legal crops in place of coca. Several other studies did not include original research on the effectiveness of alternative development programs, but made recommendations to increase the emphasis placed on such efforts based on the authors’ review of existing evidence. For example, one review of existing research recommended that policies aimed at reducing illicit crop cultivation should be centered upon alternative livelihood programs. The study noted that the Colombian government should consider expanding and improving a successful alternative development program it had previously implemented in the Macarena region of Colombia. Some studies and experts, however, raised issues about the implementation of alternative development programs and noted potential limitations in their effectiveness. For example, one study that assessed the effectiveness of alternative development found that because coca cultivation is unlikely to change as a result of increases in perceived risk and relative profit, alternative development was likely to have only small effects on coca cultivation levels. Another study noted that alternative development programs have tended to be located far from areas where coca crops have been grown. Thus, the study recommends pursuing more comprehensive counternarcotics efforts in areas affected by coca cultivation. 
An additional study cited the success of one regional alternative development program, but noted that many alternative development programs in Colombia have faced implementation problems. One expert we interviewed stated that alternative development can work in particular parts of Colombia, yet such efforts were likely not viable in some key coca growing regions, where there is little infrastructure to market legal crops. Thus, the expert stated it is crucial to target where alternative development programs are implemented. Conclusions Since the launch of Plan Colombia almost 20 years ago, the U.S. and Colombian governments have partnered closely to combat drug trafficking through a mix of eradication, interdiction, and alternative development efforts. Since then, violence in Colombia has decreased and the successful negotiation of a peace agreement with the FARC brought an end to that 50-year conflict. However, increasing cocaine production levels in the past 4 years and the continued existence of a range of violent criminal groups underscore the ongoing threat of narcotics trafficking for Colombia. As the U.S. government seeks to support Colombia in this new phase of its fight against drug trafficking, U.S. agencies should consider what combination of eradication, interdiction, and alternative development activities will help to best achieve their counternarcotics goals. There is a range of available information that can help provide U.S. agencies with insight into the effectiveness of their eradication, interdiction, and alternative development activities. However, to date, State and other U.S. agencies involved in eradication and interdiction activities in Colombia have not evaluated these efforts to determine their long-term effectiveness in reducing the cocaine supply. In addition, State has not undertaken a comprehensive review of the U.S. government’s counternarcotics approach in Colombia. 
Such a review would help State to systematically consider the relative benefits and limitations of the U.S. government’s eradication, interdiction, and alternative development activities. With this information, State would be well positioned to ensure that it and other U.S. agencies are prioritizing limited resources and pursuing the combination of counternarcotics activities with the greatest likelihood of achieving long-term success in the fight against drug trafficking in Colombia. Recommendations for Executive Action We are making two recommendations to State: The Secretary of State should, in consultation with other U.S. agencies involved in counternarcotics efforts in Colombia, conduct an evaluation of the long-term effectiveness of eradication and interdiction in reducing the cocaine supply. (Recommendation 1) The Secretary of State should, in consultation with other U.S. agencies involved in counternarcotics efforts in Colombia, undertake a comprehensive review of the U.S. counternarcotics approach in Colombia and identify what changes, if any, should be made to the types and combination of U.S. activities, taking into consideration how the relative benefits and limitations between eradication, interdiction, and alternative development may impact the effectiveness of these efforts. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of the report to DHS, DOD, DOJ, State, and USAID for review and comment. DHS, DOJ, and State provided technical comments, which we incorporated as appropriate. State and USAID also provided written comments, which are reproduced in appendixes III and IV, respectively. In its written comments, State noted that it agreed in general with our recommendations, but suggested that our first recommendation be broadened to encompass an evaluation of the effectiveness of whole-of-government counternarcotics efforts, rather than focusing on eradication and interdiction specifically.
We recognize State’s argument in favor of broadening the scope of our first recommendation, but we did not revise the recommendation on that basis. We believe that an evaluation focusing specifically on the long-term effectiveness of eradication and interdiction in reducing the cocaine supply would provide State with important information on two key components of an approach that has characterized U.S. counternarcotics efforts in Colombia for decades, components that have not been evaluated to date. Such an evaluation would be consistent with analyses already undertaken for alternative development, and would contribute to a better understanding of the strengths and weaknesses of each of these three key efforts. In addition, our second recommendation to State addresses the need for a broader, comprehensive review of the overall U.S. counternarcotics approach, which would be expected to take into account eradication, interdiction, and alternative development, as well as other U.S. efforts to combat drug-related criminal activities. If State opts to pursue a broader evaluation of all U.S. counternarcotics efforts in Colombia, we would consider this responsive to our first recommendation as long as the evaluation includes a meaningful assessment of the effectiveness of eradication and interdiction efforts. Additionally, as part of its comments, State highlighted the importance of a whole-of-government approach to counternarcotics in Colombia that employs a range of efforts that are implemented in a coordinated manner. Consequently, State noted that any review of the individual components of the U.S. counternarcotics strategy will present an incomplete picture, and State expressed concern that we had considered eradication, interdiction, and alternative development in isolation. In the report, we note that the U.S.
government’s counternarcotics approach in Colombia has long called for a mix of eradication, interdiction, and alternative development efforts and we highlight the fact that U.S. officials believe that finding the appropriate combination of these efforts is critical to achieving the U.S. government’s counternarcotics objectives in Colombia. Thus, while we present more in-depth analyses of eradication, interdiction, and alternative development, we begin our discussion with an overall description of U.S. efforts in Colombia more generally, covering the role of various U.S. agencies in these efforts, the nature of overall collaboration with Colombia, and the events that shaped the current situation. Finally, in its comments, State said that we had failed to consider relevant information on eradication that had been published by various sources. In developing our findings in this report, we reviewed available U.S. government, Colombian government, and United Nations data and analysis on eradication, as well as third-party research, and we sought to accurately present this range of information in a balanced manner. Accordingly, we have made relevant modifications to our narrative to further describe information in UN studies related to the results of eradication efforts in Colombia. In its comments, USAID stated that it concurred with our recommendation that State lead a comprehensive review of the U.S. counternarcotics approach in Colombia. USAID noted that it believes such a review could help identify what changes, if any, are necessary to make to the types and combination of U.S. activities, while taking into consideration how the relative benefits and limitations of eradication, interdiction, and alternative development could affect the effectiveness of these efforts. 
We are sending copies of this report to the appropriate congressional committees and the Secretaries of Defense, Homeland Security, and State, as well as the Attorney General and the USAID Administrator. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines (1) to what extent the U.S. government has assessed the effectiveness of its counternarcotics efforts in Colombia, (2) what is known about the effectiveness of U.S. government-supported eradication programs in Colombia over the last 10 years, (3) what is known about the effectiveness of U.S. government-supported interdiction programs in Colombia over the last 10 years, and (4) what is known about the effectiveness of U.S. government-supported alternative development programs in Colombia over the last 10 years. To determine to what extent the U.S. government has assessed the effectiveness of its counternarcotics efforts in Colombia, we analyzed Department of Homeland Security (DHS), Department of Defense (DOD), Department of Justice (DOJ), Department of State (State), and U.S. Agency for International Development (USAID) data and documentation that describe U.S.-supported counternarcotics efforts since 2008, including available performance monitoring data and evaluations that the agencies use to assess the effectiveness of their counternarcotics activities in Colombia.
In doing so, we reviewed performance reporting that the agencies conduct through interagency mechanisms including the Office of National Drug Control Policy’s (ONDCP) annual National Drug Control Strategy Performance Reporting System report and Budget and Performance Summary report, as well as Embassy Bogotá’s annual Performance Plan and Report. In addition, we reviewed agency-level performance monitoring data and related reports produced by DHS, DOD, DOJ, State, and USAID, as well as their relevant component agencies and offices. For example, we reviewed State’s annual International Narcotics Control Strategy Report, performance data from USAID’s Monitoring and Evaluation Clearinghouse information system, U.S. Southern Command annual program management reviews, DEA/Colombia impact statements produced through its Threat Enforcement Planning Process, and annual DHS performance reports. We also reviewed evaluations that USAID had conducted of its alternative development programs in Colombia. To identify relevant USAID evaluations, we consulted USAID officials and conducted a search of USAID’s Development Experience Clearinghouse, which is USAID’s online, publicly available repository of program documentation. In evaluating to what extent the U.S. government has assessed the effectiveness of its counternarcotics efforts in Colombia, we compared State’s actions to its evaluation policy. In addition, we compared U.S. agencies’ actions to applicable federal internal control standards. To determine what is known about the effectiveness of U.S. government-supported eradication, interdiction, and alternative development programs, we analyzed DHS, DOD, DOJ, State, and USAID data and documentation related to counternarcotics efforts in Colombia.
As part of our work, we also analyzed data from the United Nations Office on Drugs and Crime’s (UNODC) annual surveys of territories in Colombia affected by illicit crops, which documented coca cultivation and cocaine production trends, as well as counternarcotics efforts. In addition, we analyzed Colombian government data and other reporting describing counternarcotics efforts. These U.S. government, United Nations, and Colombian government data included a range of metrics. For eradication programs, we reviewed metrics including estimated coca cultivation levels, eradication levels, coca plant productivity levels, coca replanting rates, and the territorial distribution of coca cultivation. For interdiction, we reviewed metrics including estimated cocaine production levels; the levels of seizures of cocaine, precursor chemicals, and drug trafficking organization assets; the number of drug trafficking organizations disrupted or dismantled; and the number of drug trafficking organization members arrested and extradited. For alternative development programs, we reviewed metrics including the number of households involved in coca cultivation, the amounts of coca cultivated relative to legal crops in areas receiving U.S. government support, increases in the value of sales of legal products in areas involved in narcotics production, the number of households receiving land titles as a result of U.S. assistance, and the value of agricultural and rural loans generated through U.S. assistance. To assess these data, we reviewed available documentation and interviewed cognizant U.S. officials. In addition, we were able to compare different sources in some instances, specifically the U.S. government and the UN estimates of coca cultivation and cocaine production in Colombia. We noted several limitations to these data. For example, the coca cultivation and production figures are estimates, and while both the U.S.
government and UN have procedures to verify their estimates, there were differences between the two sources in terms of the levels of production and cultivation reported due to differences in their estimating methodologies. For example, one challenge to estimating the hectares of coca eradicated is that crop fields can be eradicated multiple times in 1 year, which means that the total number of hectares eradicated can exceed the total number of hectares cultivated in some years. Likewise we noted that kilograms of cocaine seized in Colombia may be the result of a variety of actions, and can be influenced by the volume of cocaine production, as well as the actions of law enforcement officials. We determined that the U.S. government, United Nations, and Colombian government data were sufficiently reliable to present general trends from 2008 through 2017. Further, we reviewed agency documentation from State, USAID, DOD, and DEA in order to identify plans, reviews, strategies, and assessments related to counternarcotics efforts in Colombia. For example, we reviewed State’s annual International Narcotics Control Strategy Reports, Embassy Bogotá’s annual Performance Plan and Reports, DOD U.S. Southern Command performance management reviews, and DEA’s Threat Enforcement Planning Process assessment. In addition, we reviewed seven evaluations that USAID had commissioned of its alternative development programs in Colombia and identified relevant findings from these evaluations regarding the effectiveness of alternative development efforts in Colombia. Some of these evaluations related to specific alternative development programs, while others evaluated USAID’s alternative development efforts in Colombia more broadly. It was beyond the scope of this engagement to assess the quality of these evaluations. 
We also reviewed USAID performance data in its Monitor system and in Embassy Bogotá’s annual Performance Plan and Report and compared USAID’s results to the targets it had established. We did not perform an assessment of the underlying metrics that USAID used, as our purpose was to compare actuals to targets. To gather further information regarding what is known about the effectiveness of U.S. government-supported eradication, interdiction, and alternative development programs, we interviewed U.S. officials who have responsibility for and insights into U.S.-supported counternarcotics efforts in Colombia from: DHS, including Immigration and Customs Enforcement and the U.S. Coast Guard; DOD, including the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats and U.S. Southern Command; DOJ, including the Criminal Division, the Drug Enforcement Administration, and the Federal Bureau of Investigation; State, including the Bureau of International Narcotics and Law Enforcement Affairs and the Bureau of Western Hemisphere Affairs; and USAID’s Bureau for Latin America and the Caribbean. In addition, we conducted fieldwork in Colombia in March 2018. During our fieldwork, we interviewed U.S. officials from DHS, DOD, DOJ, State, and USAID involved in counternarcotics activities at Embassy Bogotá. In addition, we interviewed various officials from Colombian security and civilian agencies and from the UNODC. We also visited the Colombian National Police Air Service’s headquarters in Guaymaral (near Bogotá) and the Colombian National Police’s International School for the Use of Police Force for Peace (near Ibagué). Finally, as part of our fieldwork, we visited Tumaco in southwest Colombia. Tumaco is the municipality with the highest levels of coca cultivation in Colombia and is also the most significant hub for the trafficking of cocaine out of the country.
In Tumaco, we visited the Colombian government’s Strategic Operation Center, observed a manual eradication operation, and met with a number of USAID alternative development program beneficiaries. The information on foreign law in this report is not the product of GAO’s original analysis, but is derived from interviews and secondary sources. Finally, to help validate and supplement U.S. government findings regarding the effectiveness of its counternarcotics programs, we conducted a literature review to determine the extent to which relevant non-U.S. government studies either validated or reached different conclusions than the U.S. government’s findings regarding the effectiveness of U.S.-supported counternarcotics programs in Colombia. To conduct this review, we developed a list of search terms related to eradication, interdiction, and alternative development in Colombia. Then, working with a GAO research librarian, we conducted a search using selected bibliographic databases, including Scopus and SciELO. We conducted searches for materials in both English and Spanish. The searches resulted in the identification of an initial list of 261 English-language articles and 45 Spanish-language articles. The team then conducted a process to narrow down the initial search results to a priority list of studies. In order to narrow down the results, we considered a variety of factors including the relevance of the study to our research questions, the extent to which the study focused on Colombia or was more global in nature, whether the study had been published in 2008 or later, and whether the study included original research. To validate our priority list of studies, we shared our results with a non-U.S. government expert who had studied counternarcotics efforts in Colombia to see if there were further studies that we should include. We added one additional study based upon his review.
In total, we selected 23 studies to include in our literature review and to analyze in greater depth for this report. Within our literature review, we identified a relatively small number of authors that had conducted research relevant to our work, in particular, studies related to interdiction efforts in Colombia. As a result, there are several authors who have more than 1 study included within the list of 23 studies we selected. For each of the 23 studies we selected, we completed a data collection instrument to, among other things, identify the study’s key findings and recommendations and to make a high-level assessment that the study was of sufficient quality to include in our review. We ensured that our selection included studies issued or published in 2008 or later. During our review, we noted that several studies analyzed data from slightly earlier time periods. In addition, we noted that some studies analyzed data for particular regions or settings within Colombia. While this does not affect the quality of the studies, it does raise the possibility that their findings might not fully apply to the current situation in Colombia. As part of our work, we also conducted interviews with a nongeneralizable sample of three non-U.S. government experts to gather further information regarding what is known about the effectiveness of U.S. counternarcotics programs. In selecting these experts, we sought to choose people with different types of experiences studying and working on counternarcotics efforts in Colombia, in order to get a range of perspectives about these efforts. We conducted this performance audit from September 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: List of Studies Reviewed This bibliography contains citations for the 23 studies we reviewed regarding the effectiveness of Colombian counternarcotics efforts. Beltrán, S. “La Institucionalidad Rural en Colombia: Reflexiones para Su Análisis y Fortalecimiento.” Mundo Agrario, vol. 17, no. 53 (2016). Camacho, A., and D. Mejía. “The Health Consequences of Aerial Spraying Illicit Crops: The Case of Colombia.” Journal of Health Economics, vol. 54 (2017): 147-160. Ceron, C., I. De los Rios-Carmenado, and S. Fernández. “Illicit Crops Substitution and Rural Prosperity in Armed Conflict Areas: A Conceptual Proposal Based on the Working With People Model in Colombia.” Land Use Policy, vol. 72 (2018): 201-214. Davalos, E. “New Answers to an Old Problem: Social Investment and Coca Crops in Colombia.” International Journal of Drug Policy, vol. 31 (2016): 121-130. Fisher, D., and A. Meitus. “Uprooting or Sowing Violence?: Coca Eradication and Guerrilla Violence in Colombia.” Studies in Conflict & Terrorism, vol. 40, no. 9 (2017): 790-807. Ibanez, M., and F. Carlsson. “A Survey-Based Choice Experiment on Coca Cultivation.” Journal of Development Economics, vol. 93 (2010): 249-263. Ibanez, M., and S. Klasen. “Is the War on Drugs Working? Examining the Colombian Case Using Micro Data.” The Journal of Development Studies, vol. 53, no. 10 (2017): 1650-1662. Ince, M., “Filling the FARC-Shaped Void.” The RUSI Journal, vol. 158, no. 5 (2013): 26-34. Jonsson, M., E. Brennan, and C. O’Hara. “Financing War or Facilitating Peace? The Impact of Rebel Drug Trafficking on Peace Negotiations in Colombia and Myanmar.” Studies in Conflict & Terrorism, vol. 39, no. 6 (2016): 542-559. López, L., J. Castro, and A. España. “Los Efectos Globo en los Cultivos de Coca en la Región Andina (1990-2009).” Apuntes del CENES, vol. 35, no. 61 (2016): 207-236.
McDermott, J., “La Nueva Generación de Narcotraficantes Colombianos post-FARC: ‘Los Invisibles’.” InSight Crime (2018). Mejía, D., “Plan Colombia: An Analysis of Effectiveness and Costs.” The Brookings Institution (2015). Mejía, D., and P. Restrepo. “The Economics of the War on Illegal Drug Production and Trafficking.” Journal of Economic Behavior and Organization, vol. 126 (2016): 255-275. Mejía, D., P. Restrepo, and S. Rozo. “On the Effects of Enforcement on Illegal Markets: Evidence from a Quasi-experiment in Colombia.” World Bank Group (2015). Muñoz-Mora, J.C., S. Tobón, and J. d’Anjou. “The Role of Land Property Rights in the War on Illicit Crops: Evidence from Colombia.” World Development, vol. 103 (2018): 268-283. Quintero, S., and I. Posada. “Estrategias Políticas para el Tratamiento de las Drogas Ilegales en Colombia.” Revista Facultad Nacional de Salud Pública, vol. 31, no. 3 (2013): 373-380. Reyes, L., “Estimating the Causal Effect of Forced Eradication on Coca Cultivation in Colombian Municipalities.” World Development, vol. 61 (2014): 70-84. Rincón-Ruiz, A., H. Correa, D. Léon, and S. Williams. “Coca Cultivation and Crop Eradication in Colombia: The Challenges of Integrating Rural Reality into Effective Anti-Drug Policy.” International Journal of Drug Policy, vol. 33 (2016): 56-65. Rincón-Ruiz, A., U. Pascual, and S. Flantua. “Examining Spatially Varying Relationships between Coca Crops and Associated Factors in Colombia, Using Geographically Weighted Regression.” Applied Geography, vol. 37 (2013): 23-33. Sánchez, M., “Cultivos Ilícitos y Confianza Institucional en Colombia.” Politica y Gobierno, vol. 21, no. 1 (2014): 95-126. Sandoval, L., A. Lopez, and C. Cárdenas. “Determinantes y Caracteristicas de la Oferta de Cocaina en Colombia (1989–2006).” Revista Facultad de Ciencias Económicas: Investigación y Reflexión, vol. 17, no. 2 (2009): 199-208.
Seatzu, F., “‘If Ya Wanna End War and Stuff, You Gotta Sing Loud’—A Survey of the Provisional Agreement between FARC and Colombia on Illicit Drugs.” Araucaria. Revista Iberoamericana de Filosofia, Política y Humanidades, vol. 18, no. 36 (2016): 373-389. Thoumi, F., “Políticas Antidrogas y La Necesidad de Enfrentar las Vulnerabilidades de Colombia.” Análisis Politico, no. 67 (2009): 60-82. Appendix III: Comments from the Department of State Appendix IV: Comments from the U.S. Agency for International Development Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Juan Gobel (Assistant Director), Ryan Vaughan (Analyst-in-Charge), Owen Starlin, Pedro Almoguera, Martin De Alteriis, Leia Dickerson, Neil Doherty, Mark Dowling, Dawn Locke, and Aldo Salerno made key contributions to this report.
Why GAO Did This Study Colombia is the world's leading producer of cocaine, with production levels more than tripling from 2013 through 2017 (see figure). The U.S. and Colombian governments have been longstanding partners in the fight against drug trafficking. Since the launch of Plan Colombia in 1999, the U.S. government has invested over $10 billion in counternarcotics efforts in Colombia. This assistance has supported a range of eradication, interdiction, and alternative development programs. GAO was asked to review U.S. counternarcotics assistance to Colombia. This report examines (1) to what extent the U.S. government has assessed the effectiveness of its counternarcotics efforts in Colombia and (2) what is known about the effectiveness of U.S.-supported eradication, interdiction, and alternative development programs in Colombia. GAO reviewed data and documentation from U.S. agencies, performed a literature review of relevant research on counternarcotics efforts in Colombia, conducted fieldwork in Colombia, and interviewed U.S. and Colombian officials. What GAO Found U.S. agencies that provide counternarcotics assistance to Colombia conduct performance monitoring of their activities, such as by tracking the hectares of coca fields eradicated and the amount of cocaine seized, but have not consistently evaluated the effectiveness of their activities in reducing the cocaine supply. The U.S. Agency for International Development (USAID) has evaluated some of its alternative development programs, but the Department of State (State), which has lead responsibility for U.S. counternarcotics efforts, has not evaluated the effectiveness of its eradication and interdiction activities, as called for by its evaluation policies. Additionally, State has not conducted a comprehensive review of the U.S. counternarcotics approach, which relies on a combination of eradication, interdiction, and alternative development. 
Without information about the relative benefits and limitations of these activities, the U.S. government lacks key information to determine the most effective combination of counternarcotics activities. GAO's review of U.S. agency performance monitoring data and third-party research offers some information about the relative effectiveness of eradication, interdiction, and alternative development activities. For example, available evidence indicates that U.S.-supported eradication efforts in Colombia may not be an effective long-term approach to reduce the cocaine supply, due in part to coca growers responding to eradication by moving coca crops to national parks and other areas off limits to eradication. Agency data show that U.S.-supported interdiction efforts in Colombia seized hundreds of tons of cocaine and arrested thousands of drug traffickers, yet the net cocaine supply has increased and third-party studies have mixed findings on the long-term effectiveness of interdiction efforts. USAID evaluations indicate that alternative development programs in Colombia have provided legal economic opportunities to some rural populations previously involved in illicit crop production. However, USAID as well as third-party research suggests that alternative development requires significant and sustained investment and some programs have had design and sustainability challenges.
What GAO Recommends
GAO recommends that State, in consultation with relevant agencies, (1) evaluate the effectiveness of eradication and interdiction in reducing the cocaine supply in Colombia and (2) undertake a comprehensive review of the U.S. counternarcotics approach in Colombia that considers the relative benefits and limitations between eradication, interdiction, and alternative development efforts. State generally concurred with the recommendations.
Background
The mission of IRS's HCO includes providing "human capital strategies and tools for recruiting, hiring, developing, retaining, and transitioning a highly-skilled and high-performing workforce to support IRS mission accomplishments," and developing and implementing "technology-enabled systems and processes to improve human capital planning and management and empower employees to achieve their potential." HCO is headed by the Human Capital Officer who reports to the Deputy Commissioner for Operations Support and is to "provide executive leadership and direction in all matters relating to the Service's employees, overseeing the design, development, and delivery of comprehensive, agency-wide human capital management and development programs that contribute to the Service's vision and mission." Worklife Benefits and Performance (WBP) and Employment, Talent and Security (ETS) are two subdivisions within HCO responsible for supporting many of IRS's strategic human capital management activities. Among WBP's responsibilities are: agency-wide strategic workforce planning; workforce planning consultation and support; OPM/Treasury/IRS workforce planning pilots, projects, and initiatives; IRS workforce data reporting; analyzing workforce projections; and attrition analysis. ETS is responsible for providing policies, products, and services that support business efforts to identify, recruit, hire, and advance a workforce with the competencies necessary to achieve current and future organizational performance goals. In particular, ETS "partners with business units to develop strategic hiring plans that drive the hiring decision by planning, executing and evaluating the type of position to be filled based on agency-wide workforce, attrition and workload needs."
Strategic Human Capital Management – High-Risk Area
Strategic human capital management, which includes workforce planning activities, is a persistent challenge across the federal government.
We designated strategic human capital management across the government as a high-risk issue in 2001 because of the federal government's long-standing lack of a consistent strategic approach to human capital management. In February 2011, we narrowed the focus of this high-risk issue to the need for agencies to close skills gaps in mission-critical occupations. Agencies can have skills gaps for different reasons: they may have an insufficient number of employees or their employees may not have the appropriate skills or abilities to accomplish mission-critical work. Moreover, current budget and long-term fiscal pressures, the changing nature of federal work, and a potential wave of employee retirements that could produce gaps in leadership and institutional knowledge threaten to aggravate the problems created by existing skills gaps. Mission-critical skills gaps both within federal agencies and across the federal workforce continue to pose a high risk to the nation because they can impede the government from cost-effectively serving the public and achieving results.
IRS Budget
IRS's budget declined by about $2.1 billion (15.7 percent) from fiscal years 2011 through 2018 (see figure 1). The President's fiscal year 2019 budget request was $11.135 billion. This amount is less than the fiscal year 2000 level for IRS, after adjusting for inflation. IRS requested an additional $397 million to cover implementation expenses for the Tax Cuts and Jobs Act over the next 2 years and received $320 million for implementation pending submission of a spending plan, which IRS provided in June 2018. We previously reported IRS would direct the majority of the money toward technological updates. The Tax Cuts and Jobs Act made a number of significant changes to the tax law affecting both individuals and corporations.
For example, for individual taxpayers, for tax years 2018 through 2025, tax rates were lowered for nearly all income levels, personal exemptions were eliminated while the standard deduction was increased, and certain credits, such as the child tax credit, were expanded. To implement the changes, IRS must (1) interpret the law; (2) create or revise hundreds of tax forms, publications, and instructions; (3) publish guidance and additional materials; (4) reprogram return processing systems; and (5) hire additional staff and train its workforce to help taxpayers understand the law. IRS's HCO estimated that the agency would need to hire and train new staff to fill approximately 1,100 positions requiring a variety of competencies, and provide additional training on tax law changes for current employees. HCO will be responsible for recruiting and hiring new employees with the needed skills.
IRS is in the Early Stages of Defining and Addressing its Workforce Needs
IRS Strategic Workforce Planning Is Fragmented and Activities to Address Skills Needs Are Not Routinely Performed
IRS has scaled back strategic workforce planning activities in recent years. Prior to 2011, IRS staff within its HCO or other dedicated program office conducted and coordinated agency-wide strategic workforce planning efforts. IRS officials told us that resource constraints and fewer staff with strategic workforce planning skills due to attrition since 2011 required HCO to largely abandon strategic workforce planning activities. Instead, HCO generally focused its efforts on completing HR transactions, such as retirements and benefits processing, meeting legal compliance activities, and facilitating hiring of seasonal employees. Since 2011, key human capital activities—such as developing an inventory of skills, identifying skills gaps, and attrition forecasting—became increasingly fragmented and shifted to the individual business divisions and program offices.
IRS officials cited management familiarity with programmatic needs, challenges, processes, and culture as a benefit of workforce planning autonomy at business divisions and program offices. However, the officials told us these activities were often performed only to the extent those divisions had the time, resources, and top management interest. As a result, the quality of key human capital activities was uneven across the agency, if performed at all. In addition, HCO officials told us the lack of an agency-wide strategy and HCO authority to manage and coordinate strategic workforce planning efforts put the agency at greater risk for unnecessary duplication of effort in HR activities; development of redundant and generally noninteroperable systems used to maintain human capital information; and failure to effectively identify and retain personnel with critical skills and experience across the agency. IRS's Information Technology (IT) organization is an example of an individual program office that has taken steps to address skills needs. IT developed a skills and competency inventory of its workforce. IRS officials told us maintaining and updating the inventory has been particularly helpful to informing IT hiring and training decisions, given the rapid nature of change in the technology industry and competition for top talent from the private sector. In June 2018, we found IRS had not fully implemented any of the key IT workforce planning practices we have previously identified. We recommended that IRS fully implement IT workforce planning practices, including (1) setting the strategic direction for workforce planning; (2) analyzing the workforce to identify skills gaps; (3) developing strategies and implementing activities to address skills gaps; and (4) monitoring and reporting on progress in addressing skills gaps.
IRS agreed with our recommendation, but stated its efforts to address these issues were limited solely because of the diversion of IT resources to implementation of the Tax Cuts and Jobs Act. We concluded that until the agency fully implemented these practices, it would continue to face challenges in assessing and addressing the gaps in knowledge and skills that are critical to the success of its key IT investments. A number of indicators led IRS to determine that continuing to make short-term, largely nonstrategic human capital decisions was unsustainable, according to IRS officials. For example, IRS has relatively high rates of employees eligible to retire. Nearly half of IRS's Senior Executive Service (SES) is eligible to retire (see figure 2). Retirement eligibility rates among both SES and non-SES employees are not only greater than the rates at other federal agencies, but are also trending higher, according to our analysis of OPM data. We have previously reported that the high rate of federal employees eligible for retirement creates both an opportunity and a challenge for agencies. If accompanied with appropriate strategic and workforce planning, it may create an opportunity for agencies to align their workforce with needed skills and leadership levels to meet their existing and evolving mission requirements. However, it also means agencies will need succession planning efforts as well as effective sources and methods for recruiting and retaining candidates to avoid the loss of technical expertise in mission-critical skills. IRS is trying to mitigate the loss of institutional memory and meet its current obligations by re-employing recently retired employees (also known as re-employed annuitants). However, according to HCO officials, as of October 2018, the agency is struggling to bring recently retired employees back in part because many had taken other employment. HCO is focusing on other activities, such as contract staffing services, to meet workload demands.
As we discuss later in this report, IRS is taking a number of actions to address staffing shortages, but the effectiveness of those efforts is not yet known. IRS's Federal Employee Viewpoint Survey (FEVS) results also indicate the agency is at risk of losing employees with critical skills. For example, IRS's results for the Global Satisfaction Index—a measure generated by OPM that combines employees' responses about satisfaction with their job, pay, the organization, and their willingness to recommend their organization as a good place to work—fell below the government-wide average in 2013. Relatedly, our analysis of fiscal year 2016 IRS exit survey results found 32 percent of separating employees indicated poor office morale strongly influenced their decision to leave. Though improving since 2015, IRS continued to lag behind the government-wide average as of 2017, the most recent year of data available at the time of this study (see figure 3).
Key Initiative Delay Has Hampered IRS's Ability to Fully Address Its Workforce Needs
In 2016, IRS determined the agency needed to develop a strategic workforce plan and conduct related workforce planning activities to help mitigate the risks associated with fragmented human capital activities as discussed above, according to HCO officials. IRS provided authority to HCO to be the central coordinating body to lead that effort, hereafter referred to as the workforce planning initiative. In March 2018, IRS issued an update to its Internal Revenue Manual (IRM) stating HCO's responsibilities.
For example, IRS provided HCO authority to: conduct strategic workforce planning annually that is aligned with Treasury's mission, goals, and objectives; perform data analysis of the current and future workforce, identify gaps, and submit solutions that will enable the organization to meet its mission, goals, and objectives; ensure the existence and integration of human capital planning functions into the workforce planning process, including skills assessments, competency models, recruitment planning, training and development, and retention and succession planning; provide guidance and direction for IRS-wide workforce planning; ensure the implementation of an agency-wide skills assessment and competency model framework; and communicate commitment for a consistent, repeatable, and systematic workforce planning process to enable improved and integrated management of human capital initiatives. The IRM also describes IRS's workforce planning process, which includes a five-phase strategic workforce planning model that is intended to align with OPM's workforce planning model (see figure 4). Implementing the strategic workforce planning model and conducting related initiative activities could help the agency ensure its human capital programs align with its mission, goals, and objectives through analysis, planning, investment, and measurement, as required in federal regulation. Furthermore, we determined elements of the initiative addressed key principles we have previously identified for effective workforce planning. For example, the model includes steps to analyze the workforce to determine the critical skills and competencies the agency needs to achieve current and future programmatic results, and to monitor and evaluate the agency's progress toward its human capital goals. As a result, the initiative could position IRS to systematically identify the workforce needed for the future, develop strategies for identifying and closing skills gaps, and shape its workforce.
However, IRS's implementation of its workforce planning initiative has been delayed. Phase 1 (Enterprise Strategy and Planning) of the workforce planning initiative was underway as of the first quarter of fiscal year 2018, and IRS was scheduled to complete this phase by the second quarter of fiscal year 2018. IRS reports show the agency originally anticipated completing all five phases by June 2018. According to IRS officials, however, IRS now anticipates Phase 1 activities to resume after the opening of the 2020 tax filing season and, as of November 2018, could not estimate a completion date for any of the five phases. The workforce planning initiative has been delayed for three primary reasons, according to IRS documents and officials:
1. Redirection of resources to Tax Cuts and Jobs Act implementation. IRS granted extensions at the request of business divisions and commissioner-level organizations that needed to redirect resources to support the implementation of the Tax Cuts and Jobs Act. To implement the 119 provisions of the Tax Cuts and Jobs Act, we reported that IRS would need to (1) interpret the law; (2) create or revise nearly 500 tax forms, publications, and instructions; (3) publish guidance and additional materials; (4) reprogram 140 interrelated return processing systems; (5) hire additional staff and train its workforce to help taxpayers understand the law and how it applies to them; and (6) conduct extensive taxpayer outreach. In addition to redirecting staff, IRS has used overtime and compensatory hours to complete necessary activities in time for the 2019 filing season.
2. Lack of workforce planning skills. As part of a Treasury pilot, IRS conducted a self-assessment of key competencies within HCO as well as within business division-based HR offices. The assessment found competency around workforce planning was among the lowest ranked skills within HCO.
According to HCO officials, IRS lacks training and resources available to help its human capital staff develop competency in workforce planning. HCO officials told us they plan to leverage IRS's Workforce Planning Council to develop strategic workforce planning skills. HCO officials told us the council has training designed to help the HR staff understand how to gather data, use technology, and perform other activities that contribute to IRS's strategic workforce planning efforts. In addition to a lack of strategic workforce planning skills, a number of key HCO personnel with strategic workforce planning expertise have recently separated from IRS, according to HCO officials.
3. Information system deployment delay. Treasury is developing the Integrated Talent Management system (ITM). Treasury intends ITM to provide the agency with greater visibility of its total workforce, and help its bureaus, including IRS, with workforce planning activities such as succession planning and competency management. Treasury officials told us that, as of November 2018, ITM is still in development and its deployment has been delayed for a number of reasons, including the need for Treasury to complete system implementation plans and user guides, and address system administration issues at the bureaus. IRS HCO officials told us they opted to wait on ITM rather than moving forward with a number of Phase 2 (Workforce Analysis) activities. IRS HCO officials said they needed this, or a similar software tool, to ensure reliable data capture, make analysis more efficient, and help managers conduct routine updates of workforce planning efforts rather than static, one-time data calls. HCO also opted to wait for ITM to avoid potentially redundant reprogramming of existing systems. However, HCO officials noted that even when ITM is eventually deployed, IRS would need to train business divisions on its use, further lengthening the time needed before conducting Phase 2 activities.
Treasury officials told us that ITM would complement rather than replace existing systems and processes. Our analysis of Treasury documents and interviews with Treasury and IRS HCO officials found it was unclear when an ITM module related to talent management and strategic workforce planning will be deployed and available for IRS's use, the functions it will include, and how IRS's existing systems and processes would be affected. As a result, IRS lacks the information needed to make staffing and technology decisions related to the workforce planning initiative, putting the initiative at risk of further delay.
IRS Could Improve Reporting on the Status of its Workforce Planning Initiative
Treasury is required to conduct data-driven reviews via HRStat. HRStat is a strategic human capital performance evaluation process that identifies, measures, and analyzes human capital data to inform the impact of an agency's human capital management on organizational results with the intent to improve human capital outcomes. HRStat is also a proven leadership strategy that can help agency officials monitor their progress towards addressing important human capital efforts, such as closing skills gaps. Treasury uses HRStat to monitor the progress of its bureaus in meeting their human capital goals, including IRS's implementation of the workforce planning initiative. In preparation for the data-driven reviews, each bureau, including IRS, submits HRStat information to Treasury. Treasury and bureau officials discuss the results and make related strategic decisions during bi-monthly Human Capital Advisory Council meetings. Our review of IRS HRStat reports, however, found additional information is needed to more fully reflect the status of the workforce planning initiative and related challenges.
For example: in the January, March, May, and July 2018 HRStat submissions, IRS (1) reported a status of green (on schedule) for "Increased efforts for development of long-term IRS workforce staffing plan" and (2) indicated under Key Issues/Challenges that completing the initiative was dependent on ITM deployment; in the July 2018 HRStat submission, IRS moved several milestones to future fiscal years and identified ITM delays as a significant risk to the workforce planning initiative schedule; and in the September 2018 HRStat submission, IRS reported the status of the workforce planning initiative was no longer on schedule. The September report identified ITM delays as the cause, but did not include other reasons for the delay, specifically the redirection of resources to Tax Cuts and Jobs Act implementation and a lack of strategic workforce planning skills within HCO. Federal strategic human capital standards state agencies are to communicate in an open and transparent manner to facilitate cross-agency collaboration to achieve mission objectives. In addition, agency leaders should hold managers accountable for knowing the progress being made in achieving goals and, if progress is insufficient, understanding why and having a plan for improvement. More complete HRStat information could help IRS and Treasury take fuller advantage of a key opportunity to discuss and address workforce planning initiative delays at Human Capital Advisory Council meetings.
IRS is Not Fully Addressing Skills Gaps in Its Workforce
Strict Hiring Limits Contributed to Annual Declines in IRS Full-Time Equivalents Since 2011
IRS full-time equivalents (FTE) have declined each year since 2011, and declines have been uneven across different mission areas (see figure 5). From fiscal years 2011 through 2017, IRS FTEs declined from 95,501 to an estimated 77,685, an 18.7 percent reduction.
Our analysis of the President's Budget data produced by OMB found the reductions have been most significant within IRS Enforcement, where staffing declined by 27 percent (fiscal years 2011 through 2017). In comparison, staff supporting Taxpayer Service activities declined by 8.2 percent, while staff within Operations Support declined by 12.7 percent (fiscal years 2011 through 2017). IRS estimated FTEs would continue to decline across the three areas in fiscal year 2018. IRS attributed staffing declines primarily to a policy decision to strictly limit hiring. According to IRS, declining budgets over multiple years necessitated decisions for how to reduce and control labor and labor-related costs, which accounted for around 74 percent of its budget allocations in fiscal year 2017. One way IRS sought to control costs was its decision to implement the Exception Hiring Process beginning in fiscal year 2011. The process effectively froze replacement of employees lost to attrition in most program areas, placed limits on external (nonseasonal) hiring, added additional approval steps for new hires, and placed priority on acquiring information technology and cybersecurity staff, according to IRS officials. The Exception Hiring Process remains in place, but as we discuss later, has evolved over time because IRS has received supplemental funding and other priority areas have emerged. IRS also limited overtime and training as a means of controlling costs.
Declining Staffing Contributed to IRS Decisions to Scale Back Enforcement Activities
Available staff was a key factor in decisions to scale back a number of program activities, most predominantly in enforcement, according to IRS officials. IRS officials told us that, unlike other areas where the agency is legally required to perform certain functions, the agency has flexibility to curtail many enforcement activities when attrition rates increase.
Auditing tax returns, for example, is a critical part of IRS's strategy to ensure tax compliance and address the tax gap, or the difference between taxes owed and those paid on time. Our analysis of IRS data shows the number of individual returns audited declined each year from fiscal years 2011 through 2017, a 40 percent decline overall (see figure 6). Reduced audit rates were not limited to individual returns. IRS data show that audit rates of large corporations with assets of $10 million or greater declined from 17.7 percent in fiscal year 2011 to 7.9 percent in fiscal year 2017. We have previously reported on other areas in which staffing declines affected IRS operations, including fewer nonfiler investigations, fewer private letter rulings, elimination of a bankruptcy program, and increases in the time needed to close innocent spouse appeals. In addition, we have made recommendations to IRS to better target its limited enforcement resources so it can, for example, 1) maximize revenue yield of the income tax, and 2) more effectively audit large partnerships. IRS agreed with the recommendations and took some action to close them. As of October and July 2018, respectively, those recommendations have not been fully addressed.
IRS Has Skills Gaps in Key Occupations
As previously discussed, IRS is in the initial stages of implementing a strategic workforce planning model, which could provide IRS with information needed to understand what critical skills and competencies are needed to meet its mission. However, according to IRS officials, the agency has not used such a framework in recent years, making it difficult to determine where skills gaps exist. Nonetheless, our analysis of Treasury documents, Enterprise Human Resources Integration data, and interviews with agency officials found IRS currently has skills gaps in key occupations. In fiscal year 2017, Treasury conducted a department-wide analysis of mission critical occupations (MCO) at risk of skills gaps.
Treasury analyzed four factors to determine and rank MCOs at highest risk for skills gaps: 1) 2-year retention rate, 2) quit rate, 3) retirement rate, and 4) applicant quality. Analysis of these factors can help build the predictive capacity of agencies to identify mission critical skills gaps as they emerge. The following are the MCOs relevant to IRS that Treasury determined to be at medium or moderate risk for skills gaps, in order of risk: human resources specialist and tax law specialist. In light of staff attrition since 2011, particularly within enforcement occupations, we selected tax examiners and revenue officers to demonstrate how IRS has implemented strategies, policies, and processes for identifying and addressing skills gaps, and to identify critical instances where those efforts have affected IRS's ability to identify and close critical skills gaps.
Tax Examiners
Tax examiners are responsible for responding to taxpayers' inquiries regarding preparation of a variety of tax returns, related schedules, and other documentation; resolving account inquiries; advising taxpayers of enforcement actions; and managing sensitive case problems designated as requiring special case handling. In addition, tax examiners analyze and resolve tax-processing problems; adjust taxpayer accounts; prepare and issue manual refunds; and compute tax, penalty, and interest. IRS documents note that the level of supervision, complexity, contacts, and the scope of assigned workload varies for tax examiners across performance levels. At the entry level, tax examiners are responsible for receiving and initiating contacts with taxpayers to gather information and resolve issues, and to gain compliance with laws and regulations while dealing with taxpayers that may be evasive under sensitive situations. At the intermediate level, tax examiners are responsible for handling a wide variety of the most difficult or sensitive tax processing problems.
Their work products affect the taxpayer's filing status and tax liability for current, prior, and future reporting requirements. At the senior—or expert—level, tax examiners serve as a work leader over employees engaged in accomplishing tax examining work, as well as perform a full range of examination duties that include adjusting tax, penalty, and interest on taxpayers' accounts and closing cases. Our analysis of OPM data found that, from fiscal years 2011 through 2017, the agency lost 18 percent of its total tax examiner workforce (see figure 7). Additionally, the number of tax examiners at the intermediate level declined by 34 percent during that same period. IRS officials told us replacing tax examiners is particularly difficult not only because of the general hiring restrictions affecting the entire IRS, but also because of the significant amount of specialized expertise that must be developed to perform in a specific area of tax law. According to IRS officials, in 2018 and in response to declining tax examiner personnel, IRS doubled the dollar amount thresholds tax examiners use to select refunds for additional audit. IRS officials told us this means thousands of refunds that would have received additional scrutiny due to errors or anomalies are no longer considered for follow-up review by tax examiners, and the government is potentially missing significant opportunities to collect revenue and enforce tax laws. Three of the four business divisions within IRS identified skills gaps among their tax examiners.
Large Business and International (LB&I). According to LB&I officials, long-term vulnerability in its tax examiner workforce is a major concern, in part because LB&I has been unable to replenish its tax examiner workforce given external hiring constraints and internal promotion concerns (i.e., internal promotions can leave staffing gaps at the lower ranks, putting them at risk for skills gaps).
According to LB&I officials, having fewer tax examiners—specifically fewer tax examiners in key geographic locations—is affecting its mission. For example, LB&I reviews tax returns of foreign nationals and overseas taxpayers, which are predominantly paper-based returns and have to be processed manually. LB&I officials told us manual paper return processing is time intensive and, with fewer tax examiners, puts IRS at greater risk of having to pay interest to taxpayers for withholding refunds due to processing delays.
Small Business/Self-Employed (SB/SE). According to SB/SE officials, gaps among tax examiners are evident and, as a result, SB/SE has reduced work plans and increased the use of overtime. Within SB/SE's Campus Exam/Automated Underreporter program, officials identified staffing gaps that they attributed to the general inability to hire behind attrition. According to SB/SE officials, as manager and lead vacancies arise, tax examiners are often detailed to fill the positions, which reduces the number of tax examiners available to perform the work.
Wage and Investment (W&I). According to W&I officials, they have identified tax examiner skills gaps within their Accounts Management, Submission Processing, and Return Integrity and Compliance Services programs. To address identified skills gaps within W&I, officials said they conduct annual Strategic Hiring Summits bringing together stakeholders and business partners to jointly address filing season staffing needs, staffing barriers and gaps, and hiring lessons learned from prior filing seasons. According to W&I officials, these efforts continue to improve their targeted hiring and the timeliness of their onboarding efforts. Other strategies that W&I plans to implement are to bring in tax examiners earlier and provide them with the full spectrum of training upfront rather than spreading the training out over months or years.
Additionally, they said tax examiners are going to be cross-trained on multiple types of inventory to increase their skills and to address inventory backlogs.

Revenue Officers

Revenue officers are IRS civil enforcement employees who are trained to conduct face-to-face contact with business and individual taxpayers who have not resolved their tax obligations in response to prior correspondence or contact. The role of revenue officers involves explaining to taxpayers why they are not in compliance, advising them of their financial obligation, and, when necessary, taking appropriate enforcement action. According to IRS, the goal is voluntary taxpayer compliance through payment arrangements or compromises. However, for taxpayers who remain noncompliant, revenue officers are trained to take civil enforcement actions, such as filing a notice of lien to protect the government’s interest, up to and including seizing personal and business property. According to IRS officials, it takes 4 to 5 years to train a new hire to become an experienced senior or expert revenue officer. The senior or expert levels are of particular importance to IRS’s enforcement efforts. An internal IRS study completed in June 2018 found that 84 percent of all successful fraud referrals came from revenue officers at the senior/expert skill level. Senior revenue officers also serve as classroom instructors and perform on-the-job training of intermediate and entry-level staff. According to IRS officials, this additional responsibility directly affects senior revenue officers’ ability to work fraud cases. Our analysis of OPM data shows that the total number of revenue officers at IRS declined by nearly 40 percent from fiscal years 2011 through 2017, and entry-level revenue officers declined by 86 percent during that same period (see figure 8). IRS officials told us the declines were due to a combination of attrition, limited hiring, and promotions.
IRS decided to scale back nonfiler investigations in light of declining staffing, according to IRS officials. We reported that, in tax year 2010, IRS started 3.5 million individual nonfiler cases and 4.3 million business nonfiler cases. In tax year 2014, nonfiler cases dropped to 2 million for individuals and 1.8 million for businesses, a reduction of 43 percent and 58 percent, respectively. More recently, in fiscal year 2018, IRS data show nonfiler investigations declined to 0.8 million for individuals and 0.4 million for businesses.

IRS Collaborated with OPM and Treasury to Address Skills Gaps among Revenue Agents

Since we designated addressing agencies’ mission critical occupation skills gaps as a high-risk area in 2011, OPM and agencies have launched a number of initiatives to close skills gaps. For example, in 2011, OPM and the Chief Human Capital Officers Council established an interagency working group to identify mission critical occupations (MCO) at high risk for skills gaps. The working group, also known as the Federal Agency Skills Team (FAST), identified skills gaps in six government-wide occupations, such as cybersecurity, human resources (HR) specialists, and acquisition. The FAST also identified agency-specific MCOs at high risk for skills gaps, which included IRS revenue agents. Subsequently, Treasury was designated leader of a FAST subteam to develop a plan for closing skills gaps among revenue agents. Treasury convened a group of revenue agents from each of IRS’s business divisions, IRS human resource specialists with workforce planning expertise, and members of IRS’s training group. Table 1 shows the process the subteam used to identify and address the causes of revenue agent skills gaps. The FAST brainstormed potential causes for skills gaps among revenue agents (see figure 9).
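As a quick arithmetic check — an illustration added here, not part of GAO's methodology — the percentage declines cited in this section follow directly from the underlying counts. Using the nonfiler case counts reported above (in millions):

```python
def pct_decline(old: float, new: float) -> int:
    """Return the decline from old to new as a whole-number percentage."""
    return round((old - new) / old * 100)

# Tax year 2010 vs. tax year 2014 nonfiler case counts, in millions,
# as reported in this section.
individual = pct_decline(3.5, 2.0)  # 3.5M -> 2.0M individual nonfiler cases
business = pct_decline(4.3, 1.8)    # 4.3M -> 1.8M business nonfiler cases

print(individual, business)  # prints: 43 58
```

The same formula, (old − new) / old, underlies the workforce-decline percentages cited for tax examiners and revenue officers earlier in this section.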
According to FAST documents, this process helped the team understand the range of contributing factors that led to lower-than-acceptable 2-year retention rates and a high quit rate among revenue agents. Now that the FAST has identified the potential causes for the two indicators, Treasury officials told us IRS is responsible for developing and implementing strategies to close skills gaps among its revenue agents and reporting on its progress. According to IRS documents, as of July 2018, the agency established communications with revenue agents to increase awareness about detail and developmental opportunities that are posted on IRS’s Service-wide Detail Opportunities web page, and is developing a plan for more effectively including revenue agents in management training. Related IRS performance measures show that posted detail opportunities for revenue agents have increased from 24 in fiscal year 2016 to 69 in fiscal year 2018.

IRS’s HCO Provides Services to Help Address Skills Gaps, but Does Not Have the Capacity to Fully Meet Needs

For a limited number of mission critical occupations, HCO provides support to business divisions and program offices that need help addressing workforce capacity concerns. For example, HCO conducts competency assessments when a business division or program is seeking to identify the top candidates for hire or promotion. Determining critical competencies can help agencies respond to demographic, technological, and other forces that are challenging them to change the activities they perform, the goals they must achieve, how they do business, and even who does the government’s business. HCO also conducts skills assessments when a division or program office needs to determine the skill level of its existing employees for the purposes of training, hiring, retention, or staffing decisions. Agencies can use both competency and skills assessments to help identify and address skills gaps.
For competency assessments, HCO officials told us they develop annual work plans that prioritize assessment scheduling for certain occupations based on factors including available funding; the availability of business division or program office staff to assist HCO with subject matter expertise; and the age of the competency model or assessment. For example, in 2017, HCO supported a competency assessment for special agents within its Criminal Investigation (CI) division. CI special agents are forensic accountants searching for evidence of criminal conduct. HCO officials told us competency assessments for special agents are a priority due to the rapidly evolving sophistication of schemes to defraud the government and the increasing use of automated financial records. IRS used information resulting from the competency assessment to revamp the special agent hiring process. According to HCO officials, results from the competency assessment have helped IRS reduce the cost and time to assess applicants while improving the overall candidate pool. Skills assessments supported by HCO have been used in some limited cases to help IRS identify and address skills gaps among certain MCOs. According to HCO officials, they provide skills assessments upon request by a business division or program office, assuming personnel and funding resources are available. IRS business divisions or program offices cover costs associated with large-scale assessments where contractor support is needed to supplement HCO’s staff. Skills assessments among occupations with smaller populations usually do not incur costs to the divisions. HCO has supported requested skills assessments of information technology specialists, revenue agents, and human resources specialists in recent years. IRS documents show these assessments were used in part to identify and address skills gaps within these occupations.
Unlike competency assessments, however, IRS does not create a work plan or otherwise prioritize skills assessments to address those occupations most in need. As discussed above, Treasury has identified MCOs at moderate to high risk for skills gaps, yet skills assessments have not addressed all the occupations identified as highest risk. Leading practices in strategic workforce management state that agencies should determine the critical skills and competencies their workforce needs to achieve current and future agency goals and missions, and identify gaps, including those that training and development strategies can help address. A work plan for addressing skills gaps could help IRS remediate gaps on a timely basis. Without a plan, IRS risks having to continue scaling back mission-critical activities as it has done in recent years.

IRS Faces Challenges in Its Ability to Hire Key Employees

HCO Has Limited Staffing Capacity to Hire New Employees

As previously discussed, Treasury found IRS is at risk of skills gaps among its mission critical occupations, including its HR specialists. In light of related agency-wide hiring limits, IRS offered early retirement incentives to eligible hiring specialists and did not backfill other specialists when they left the agency. HCO has lost more than half of its hiring specialists since 2011. According to HCO, the hiring skills of remaining specialists atrophied as those specialists were redirected to other priority HR areas. Many of HCO’s hiring and other HR responsibilities, however, have remained constant or increased. For example, in fiscal year 2017, IRS hired around 6,700 seasonal employees to assist with the filing season, and HCO expects that number to increase in future fiscal years. HCO officials told us the pace of internal hiring (i.e., promotions) remained constant over the past several years.
IRS has recently prioritized hiring to address information technology and cybersecurity areas, as well as implementation of the Tax Cuts and Jobs Act. As a result of the combination of fewer hiring specialists and new hiring requirements, HCO officials said its capacity to hire and carry out other important human capital and HR functions is highly strained. In 2018, HCO identified improving hiring capacity as its top priority and is exploring a variety of options, including:

HCO surge contracting: Contractors will be used in locations across the employment offices to assist with hiring and personnel security.

Administrative Resource Center (ARC) services: ARC is part of Treasury and provides administrative services, including HR support, to various federal agencies. HCO engaged ARC in May 2018 to assist with developing hiring qualifications.

OPM shared services: IRS is exploring use of OPM shared services for help in the hiring process.

Business-based HR teams: Teams within the divisions have been given authority to post internal merit promotion supervisory vacancy announcements, which will reduce HCO’s workload for this function. HCO will retain responsibility for building positions, setting pay, and processing personnel actions, and will provide a dedicated point of contact for questions and quality review.

Federal Executive Board team: A group of Interagency Agreement detailees supported by Wage and Investment (W&I) to work through W&I vacancy announcement backlogs. IRS officials told us that, as of November 2018, this option had not been successful.

HCO interagency detail opportunity: Employees detailed from other federal agencies into HR positions throughout HCO using interagency agreements.

HCO officials told us they are generally monitoring the status of these activities, but cited competing priorities as a reason they have not determined how each activity will be evaluated in achieving increased hiring capacity and associated outcomes.
Periodic measurement of an agency’s progress toward human capital goals, and of the extent to which human capital activities contributed to achieving programmatic goals, provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. Without a means for gauging the relative success of its capacity-building activities, IRS risks spending its limited HCO resources on activities that may not help the agency meet its desired hiring outcomes.

IRS Has Identified Hiring Risks Related to Tax Cuts and Jobs Act Implementation

IRS established a risk register as part of efforts to identify, prioritize, and mitigate risks to IRS’s implementation of the Tax Cuts and Jobs Act, including a number of risks related to its ability to hire. A risk register is used to identify the source of risks, assign owners to manage the treatment of those risks, and track the success of risk mitigation strategies over time. Risk registers or other comprehensive risk reports are an essential element of a successful enterprise risk management program. The risk register shows that a lack of strategic workforce planning in recent years is contributing to a number of risks IRS has faced in implementing the Tax Cuts and Jobs Act. For example:

Large Business and International (LB&I) is having difficulty hiring senior advisors needed to develop training and compliance strategies. The risk register indicates mitigation efforts in this area, such as extending detail opportunities, have failed, and there are potentially major impacts to the program. According to LB&I officials, staffing declines in related skills prior to the Tax Cuts and Jobs Act have exacerbated difficulties in this area.

Business units have been unable to identify critical hiring needs for the Tax Cuts and Jobs Act. As of October 2018, HCO is coordinating with business units to help determine hiring needs so that it can prioritize agency hiring efforts.
In a related risk, IRS determined the lack of personnel and resources within W&I may hinder its ability to identify hiring needs for the fiscal year 2019 filing season. According to IRS, “the filing season may be impacted by significant resource constraints largely due to onboarding concerns, resulting in lost revenue, increased cost, and significant reputational impact to the IRS.” As of October 2018, IRS stated it had completed the necessary hiring plans and determined this risk has minimal to no impact on IRS’s ability to carry out the upcoming filing season. Table 2 shows additional examples of risks related to hiring identified by IRS, steps the agency is taking to mitigate those risks, and the status as of October 2018. In September 2018, the Treasury Inspector General for Tax Administration (TIGTA) reviewed IRS’s information technology readiness for implementing the Tax Cuts and Jobs Act. TIGTA reported that IRS used standard position descriptions for its hiring efforts and had not defined the specific knowledge, skills, abilities, and other requirements necessary for positions it expects to fill for Tax Cuts and Jobs Act implementation, or for backfilling existing positions vacated by personnel performing related activities. We did not review position descriptions for the purposes of this report. However, as previously discussed, without information about what skills and skills gaps exist across the agency, IRS lacks important information needed to inform hiring and training resource decisions.

Changes to IRS’s Hiring Process Have Contributed to Hiring Delays

It can take a year or longer from the time a supervisor notifies his or her division of a staffing need to the time the employee is on board, according to IRS documents and our interviews.
HCO officials attributed much of this time to gathering required information and approvals associated with IRS’s “Exception Hiring Process.” In fiscal year 2011, IRS instituted the process in part to help the agency prioritize hiring decisions in a highly constrained budget environment. The Exception Hiring Process added approval layers to IRS’s regular hiring requirements, including direct approval from the Deputy Commissioner for Operations Support, the Deputy Commissioner for Services and Enforcement, or the Chief of Staff for direct reports to the Commissioner. Also as part of this process, the Chief Financial Officer performs a cost assessment to determine the affordability of any requested new hire, and HCO determines if multiple hiring requests can be consolidated into a smaller number of positions. Our review of IRS budget operating guidance and interviews found that Exception Hiring Process requirements have changed over time. Initially, in 2011, every new hire was subject to the Exception Hiring Process. Since 2011, hiring requirements have eased in some circumstances. For example, in 2014, business division directors were given authority to approve internal hires (i.e., promotions) within their own business divisions. More recently, new hires in cybersecurity or information technology, or those needed to implement the Tax Cuts and Jobs Act, were not subject to the same requirements as hiring requests in other occupations. According to HCO officials, easing hiring requirements in certain circumstances was necessary to help the agency bring on critical hires more quickly. However, based on their interactions with managers in the business divisions, HCO officials said the evolving and nonuniform Exception Hiring Process requirements have been confusing to managers requesting new hires. Business divisions and program offices often submitted hiring requests without required information or approvals. This has resulted in hiring delays, according to HCO officials.
HCO officials told us that issuing clearer guidance to business managers would help ensure business divisions submit hiring requests that are complete, which would reduce the risk of hiring delays.

Conclusions

In light of declining resources and increasing requirements, IRS is taking the initial steps to reinstate a strategic approach to workforce planning that the agency scaled back in recent years. IRS has recently provided its HCO with authority to lead and coordinate agency-wide strategic workforce planning efforts. However, full implementation of an IRS initiative to conduct agency-wide strategic workforce planning has been put on hold as other activities have taken priority, and a key workforce planning system being developed by Treasury has been delayed. As a result, these efforts remain fragmented: IRS lacks an inventory of its current workforce; has not developed competency and staffing requirements; has not conducted agency-wide activities to analyze the workforce and identify skills gaps; and has not developed strategies to address those gaps. Additionally, IRS could improve reporting of its progress in addressing skills gaps. This critical information would help provide assurance that its fragmented human capital activities are well managed and that resources are being effectively allocated. High attrition among IRS employees, particularly in complex enforcement occupations, and lower-than-average employee satisfaction rates put IRS at continued risk of skills gaps. These skills gaps have already been a significant contributor to IRS’s decisions to scale back important enforcement activities that are critical to promoting voluntary compliance and closing the tax gap. However, IRS has not targeted its limited resources toward addressing issues among the mission critical occupations most at risk of skills gaps.
Instead, activities such as skills gap assessments are only conducted to the extent business divisions and program offices make resources available and management is aware of, and inclined to seek, assistance from IRS’s HCO. Reporting on the results of efforts to close skills gaps and developing a work plan or other mechanism for prioritizing assessments would better position IRS to address key gaps. Additionally, the results of an interagency working group effort intended to address skills gaps among IRS revenue agents and other occupations with skills gaps across the government may hold important lessons for addressing skills gaps among mission critical occupations at IRS. Each of these issues is exacerbated by limited capacity within HCO, which has redirected its resources to implementing the Tax Cuts and Jobs Act and meeting other routine transactional human resource requirements. HCO is leveraging a range of activities intended to help the agency meet immediate hiring needs. Measuring the extent to which each of these activities is effective would help HCO target resources to the most effective activities as it seeks to improve its capacity for hiring employees in hard-to-fill positions in the future. In addition, issuing clear guidance on hiring request requirements would better position IRS to avoid hiring delays for mission-critical occupations.

Recommendations for Executive Action

We are making seven recommendations, six to IRS and one to Treasury. Specifically: The Commissioner of the IRS should fully implement the workforce planning initiative, including taking the following actions: (1) conducting enterprise strategy and planning, (2) conducting workforce analysis, (3) creating a workforce plan, (4) implementing the workforce plan, and (5) monitoring and evaluating the results.
(Recommendation 1) The Secretary of the Treasury should issue clarifying guidance to IRS about the Integrated Talent Management system, including when the workforce planning and talent management modules will be deployed and available for IRS’s use, the functions it will include, and how IRS’s existing systems and processes within business divisions and program offices will be affected. (Recommendation 2) The Commissioner of IRS should ensure the Human Capital Officer improves reporting for its workforce planning initiative in its bimonthly HRStat information submissions to Treasury. The submissions should include the original implementation schedule, changes to the original schedule, delays in implementation and each of their causes, and IRS’s strategy to address the causes of those delays. (Recommendation 3) The Commissioner of IRS should ensure the Human Capital Officer and Deputy Commissioner for Services and Enforcement report the results of efforts to close skills gaps among revenue agents, including lessons learned, that may help inform strategies for conducting skills gap assessment efforts for other mission critical occupations. (Recommendation 4) The Commissioner of IRS should ensure the Human Capital Officer and Deputy Commissioner for Services and Enforcement collaborate to develop a work plan or other mechanism that prioritizes and schedules skills assessments for mission critical occupations at highest risk of skills gaps, such as those identified by Treasury or where key activities have been scaled back, for the purposes of developing a strategy to close the gaps. (Recommendation 5) The Commissioner of IRS should direct the Human Capital Officer to measure the extent to which each of its activities for improving hiring capacity is effective in producing desired hiring capacity outcomes, including strategies used to mitigate hiring risks associated with Tax Cuts and Jobs Act implementation hiring.
(Recommendation 6) The Commissioner of IRS should direct the Human Capital Officer and Chief Financial Officer to issue clarifying guidance on the current Exception Hiring Process, including clarifying areas where hiring limitations that were used in previous years are no longer applicable. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this report to the Commissioner of the Internal Revenue Service, the Secretary of the Treasury, and the Acting Director of the Office of Personnel Management for review and comment. In a letter from IRS’s Deputy Commissioner for Operations Support, reproduced in appendix II, IRS agreed with our six recommendations directed to it. The letter states there is room for improvement in implementing its strategic workforce plan and the associated workforce planning initiative, and that IRS will provide a detailed corrective action plan in its 180-day response to Congress. IRS also provided technical comments, which we incorporated as appropriate. For Treasury, the Acting Director of Human Capital Strategic Management in the Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer emailed comments stating Treasury agreed with the one recommendation directed to it. In the comments, Treasury wrote, “the [Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer] will continue to provide guidance, policy and direction on how the ITM is used to meet Workforce Planning objectives.” Treasury provided technical comments on the recommendation directed to it, and we revised the recommendation as appropriate to recognize that bureaus, not Treasury, implement the ITM. OPM did not have comments. We are sending copies of this report to interested congressional committees, the Commissioner of IRS, the Secretary of the Treasury, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

You asked us to review the Internal Revenue Service’s (IRS) enterprise-wide strategic workforce planning efforts. In this report, we assess (1) how IRS defines its workforce needs and develops strategies for shaping its workforce; (2) the extent to which IRS identified the critical skills and competencies it will require to meet its goals, and describe its strategy to address skills gaps in its workforce; and (3) the extent to which IRS’s Human Capital Office has the capacity to hire employees in hard-to-fill positions. For our first objective, to determine how IRS defines its workforce needs, we conducted a review of IRS’s implementation of its strategic workforce planning process. We compared IRS’s strategic workforce planning guidance, policies, and procedures, as well as the Department of the Treasury’s (Treasury) guidance and policies, to (1) Office of Personnel Management (OPM) regulations and guidance on strategic workforce planning, (2) our reports on key principles for effective strategic workforce planning, and (3) standards for internal controls. To describe how IRS’s workforce planning process aligns with standards, we reviewed IRS’s documentation of its programs, policies, and practices for recruiting, developing, and retaining the staff needed to achieve program goals. We compared that information with requirements articulated in OPM regulations and best practices we have identified. To include prior actions and concerns previously identified as related to IRS’s strategic human capital planning, we reviewed our prior relevant reports and those from the Treasury Inspector General for Tax Administration.
We also used several databases to examine IRS’s workforce trends. To analyze trends in IRS’s full-time equivalent employment, we used the Office of Management and Budget’s (OMB) budget database, the MAX Information System (MAX), for fiscal years 2011 through 2017. To analyze employee engagement and employee global satisfaction at IRS, we analyzed IRS results from OPM’s fiscal years 2011 through 2017 Federal Employee Viewpoint Survey (FEVS). To determine retirement eligibility of SES and non-SES IRS staff, we analyzed data in OPM’s Enterprise Human Resources Integration (EHRI) database. To assess the reliability of EHRI, OMB MAX, and FEVS data, we reviewed our past data reliability assessments and conducted electronic testing to evaluate the accuracy and completeness of the data used in our analyses. For EHRI and FEVS, we also interviewed knowledgeable agency officials. We determined the data used from these three systems to be sufficiently reliable for our purposes. We supplemented our review of documentation by interviewing relevant IRS, Treasury, and OPM officials. We interviewed IRS officials from the Human Capital Office, including the Human Capital Officer, and from the Large Business and International (LB&I), Small Business/Self-Employed (SB/SE), Tax Exempt and Government Entities (TE/GE), and Wage and Investment (W&I) business divisions to understand how IRS assesses its workforce needs and develops strategies for shaping its workforce. We interviewed OPM officials about regulatory requirements and their perspective on strategic human capital planning requirements, as well as their experience working with Treasury and IRS. We met with Treasury and Taxpayer Advocate Service officials to understand their roles and responsibilities for coordinating with and providing oversight of IRS activities. We reviewed IRS’s practices and related documentation for monitoring and evaluating progress toward human capital goals, including Treasury’s HRStat reports.
For objective 2, to assess the extent to which IRS identified and described the critical skills required to meet its goals, in addition to activities performed to address objective 1, we selected a nongeneralizable sample of occupations identified by IRS as mission critical to illustrate how IRS has implemented strategies, policies, and processes for identifying and addressing skills gaps, and to identify critical instances where those efforts have affected IRS’s ability to identify and close critical skills gaps. Because IRS’s workforce planning efforts are generally conducted by mission critical occupation (MCO), we selected MCOs as our unit of analysis. We excluded MCOs with characteristics that made them unlikely to yield new or useful information for the purposes of our report. MCOs were excluded from our analysis if they (1) were under review as part of our recent or ongoing work, (2) had small numbers of staff (less than 100), or (3) were assessed by Treasury to be at low risk for skills gaps. The Treasury assessment ranked MCOs in order of risk for skills gaps based on 2-year retention rate and applicant quality. Based on these criteria, we selected revenue officers and tax examiners as occupational case illustrations representing tax enforcement activities. These two occupations, in tandem with discussion of Treasury’s efforts to close skills gaps among revenue agents, while not generalizable, provided illustrative examples for this objective. We analyzed IRS’s audit rate of individual and corporate returns to show the change in the number of audits for fiscal years 2011 through 2017 based on data reported by IRS in its annual Data Book. To obtain information to illustrate the current state of the selected MCOs located within the four business divisions, we sent the business divisions a semistructured set of written questions coupled with a request to provide corroborating documents to support their responses.
We asked each business division for information about related MCOs, including: hiring data and retirement eligibility rates for MCOs; skills, competency, or staffing gaps identified among its MCOs; and any resource tradeoff decisions made as a result of skills gaps. To supplement the information we gathered from responses to our written questions, we also reviewed IRS and Treasury documents addressing skills gaps for revenue agents that were produced after we identified mission critical skills gaps as a government-wide high-risk issue in 2011. For objective 3, to assess the extent to which IRS’s Human Capital Office has the capacity to hire employees in hard-to-fill positions, we reviewed documentation related to IRS hiring requirements, including the Internal Revenue Manual and policy explaining the Exception Hiring Process. We interviewed division directors from each of IRS’s major business divisions (W&I, LB&I, TE/GE, and SB/SE) to understand their hiring experience and impressions of time-to-hire and candidate quality results related to the Exception Hiring Process. We interviewed senior officials responsible for IRS’s hiring function. We reviewed documentation related to systems used to process and onboard new hires. We conducted this performance audit from August 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Internal Revenue Service

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact:

Staff Acknowledgments: In addition to the contact named above, Tom Gilbert (Assistant Director), Shea Bader (Analyst-in-Charge), Crystal Bernard, Jacqueline Chapin, James Andrew Howard, Meredith Moles, Steven Putansu, and Robert Robinson made major contributions to this report. Devin Braun, Regina Morrison, Erin Saunders-Rath, and Sarah Wilson provided key assistance.
Why GAO Did This Study

IRS faces a number of challenges that pose risks to meeting its mission if not managed effectively. Key to addressing IRS's challenges is its workforce. Cultivating a well-equipped, diverse, flexible, and engaged workforce requires strategic human capital management. GAO was asked to review IRS's enterprise-wide strategic workforce planning efforts. GAO assessed (1) how IRS defines its workforce needs and develops strategies for shaping its workforce; (2) the extent to which IRS identified the critical skills and competencies it will require to meet its goals, and its strategy to address skills gaps in its workforce; and (3) the extent to which IRS's Human Capital Office has the capacity to hire employees in hard to fill positions. GAO analyzed trends in staffing across IRS and in selected mission critical occupations; compared IRS strategic workforce management processes, practices, and activities with federal regulations and leading practices; analyzed IRS documents and interviewed agency officials.

What GAO Found

The Internal Revenue Service (IRS) has scaled back strategic workforce planning activities in recent years. IRS officials told GAO that resource constraints and fewer staff with strategic workforce planning skills due to attrition required IRS to largely abandon strategic workforce planning activities. However, a number of indicators, such as increasing rates of retirement eligible employees and declining employee satisfaction, led IRS to determine that continuing to make short-term, largely nonstrategic human capital decisions was unsustainable. One way IRS sought to address these issues was to develop a strategic workforce plan and associated workforce planning initiative. Initiative implementation, however, is behind schedule and on hold.
IRS attributed the delay to a combination of: 1) personnel resources redirected to implement Public Law 115-97—commonly referred to as the Tax Cuts and Jobs Act, 2) lack of workforce planning skills within its Human Capital Office, and 3) delayed deployment of a new workforce planning system at the Department of the Treasury (Treasury). As a result, IRS lacks information about what mission critical skills it has on board, where skills gaps exist, and what skills will be needed in the future. IRS staffing has declined each year since 2011, and declines have been uneven across different mission areas. GAO found the reductions have been most significant among those who performed enforcement activities, where staffing declined by around 27 percent (fiscal years 2011 through 2017). IRS attributed staffing declines primarily to a policy decision to strictly limit hiring. Agency officials told GAO that declining staffing was a key contributor in decisions to scale back activities in a number of program and operational areas, particularly in enforcement, where the number of individual returns audited from fiscal years 2011 through 2017 declined by nearly 40 percent. IRS has skills gaps in mission critical occupations, and the agency's efforts to address these skills gaps do not target the occupations in greatest need, such as tax examiners and revenue officers. However, the results of an interagency working group effort that began in 2011, and was intended to address skill gaps among IRS revenue agents and other occupations with skills gaps across the government, may hold important lessons for addressing skills gaps in other mission critical occupations at IRS. IRS's Human Capital Office has limited staffing capacity to hire employees in hard to fill positions, which holds risks for the agency's ability to implement the Tax Cuts and Jobs Act.
IRS is undertaking a variety of activities to improve its hiring capacity, but has not determined how each activity will be evaluated and how it will contribute to increased hiring capacity or associated outcomes. In addition, changes in the agency's hiring processes have been confusing to managers and contributed to hiring delays. Clear guidance on hiring request requirements would better position IRS to avoid the risk of hiring delays for mission critical occupations.

What GAO Recommends

GAO is making six recommendations to IRS that include implementing its delayed workforce planning initiative, evaluating actions to improve the agency's hiring capacity, and addressing changes in its processes that have contributed to hiring delays. IRS agreed with GAO's recommendations. GAO also recommends Treasury clarify guidance to IRS on a forthcoming workforce planning system. Treasury agreed with the recommendation.
Background

IRS Budget

IRS's budget declined by about $658 million (5.5 percent) between fiscal years 2013 and 2018 (see fig. 1). Furthermore, full-time equivalents funded with annual appropriations declined by 10,876 (12.7 percent) between fiscal years 2013 and 2018. The President's fiscal year 2019 budget request was $11.135 billion. This amount is less than the fiscal year 2000 level for IRS, after adjusting for inflation. IRS requested an additional $397 million to cover implementation expenses for the Tax Cuts and Jobs Act over the next 2 years and received $320 million for implementation pending submission of a spend plan, which IRS provided in June 2018. IRS officials said the majority of the money would be directed toward technological updates.

IRS Customer Service

IRS uses multiple channels to provide customer service to taxpayers, as follows:

Telephone service. Taxpayers can contact IRS assistors via telephone to obtain information about their accounts throughout the year or to ask basic tax law questions during the filing season. Taxpayers can also listen to recorded tax information or use automated services to obtain information on the status of refund processing as well as account information such as balances due. During fiscal years 2013 through 2017, IRS received an average of about 107 million calls from taxpayers each year, according to IRS data.

Correspondence. Taxpayers may also use paper correspondence to communicate with IRS, which includes responding to IRS requests for information or data, providing additional information, or disputing a notice. IRS assistors respond to taxpayer inquiries on a variety of tax law and procedural questions and handle complex account adjustments, such as amended returns and duplicate filings. IRS tries to respond to paper correspondence within 45 days of receipt; otherwise, such correspondence is considered overage.
In fiscal year 2017, about 35 percent of the nearly 17.5 million pieces of correspondence IRS received was overage, down from approximately 47 percent of 20.8 million pieces of correspondence in fiscal year 2013. Minimizing overage correspondence is important because delayed responses may prompt taxpayers to write again, call, or visit IRS Taxpayer Assistance Centers (TAC), each of which leads to additional costs. Additionally, IRS is required to pay interest on refunds owed to taxpayers if it did not process amended returns within 45 days.

Online services. IRS's website is a low-cost method for providing taxpayers with basic interactive tools to check their refund status or balance due, make payments, and apply for plans to pay taxes due in scheduled payments (installment agreements). Taxpayers can use the website to print forms, publications, and instructions and can use IRS's interactive tools to get answers to tax law questions without calling or writing to IRS. IRS data show that total visits to IRS's website in fiscal year 2017 were about 500 million.

In-person services. Face-to-face assistance remains an important part of IRS's service efforts, particularly for low-income taxpayers. Taxpayers can receive face-to-face assistance at one of about 370 IRS TACs or at thousands of sites staffed by volunteer partners during the filing season. At TACs, IRS representatives provide services including answering basic tax law questions, reviewing and adjusting taxpayer accounts, taking payments, authenticating ITIN applicants, and assisting IDT victims. Based on IRS data, nearly 3.3 million taxpayers visited an IRS TAC in fiscal year 2017. At sites staffed by volunteers, taxpayers can receive free return preparation assistance as well as financial literacy information. In fiscal year 2017, nearly 3.6 million taxpayers had their returns prepared at volunteer sites, according to IRS data.
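The 45-day overage measure described above can be illustrated with a minimal sketch. This is a simplification for illustration only (overage is treated purely as elapsed days since receipt; IRS's actual inventory rules involve case status and suspense categories), and the function names are assumptions:

```python
from datetime import date

OVERAGE_THRESHOLD_DAYS = 45  # IRS aims to respond within 45 days of receipt


def is_overage(received, as_of):
    # A piece of correspondence is overage once more than 45 days
    # have elapsed since IRS received it.
    return (as_of - received).days > OVERAGE_THRESHOLD_DAYS


def overage_rate(received_dates, as_of):
    # Share of the inventory past the 45-day threshold.
    overage = sum(is_overage(d, as_of) for d in received_dates)
    return overage / len(received_dates)


# Three pieces of correspondence checked near the end of filing season:
inventory = [date(2018, 1, 2), date(2018, 3, 1), date(2018, 4, 1)]
rate = overage_rate(inventory, date(2018, 4, 21))  # 2 of 3 are overage
```

Under this sketch, the January and March items exceed 45 days by April 21 while the April item does not, giving an overage rate of two-thirds.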
Systemic Verification

Systemic verification is one element of IRS's Return Review Program, its primary system to detect fraud and noncompliance. The Return Review Program is a platform that runs individual tax returns through a set of rules and models to detect potential taxpayer fraud and other noncompliance. During systemic verification, IRS checks information that taxpayers report on their returns against W-2 data in order to verify wage and withholding information and identify discrepancies. We previously reported that the wage information that employers report on the W-2 had not been available to IRS until after it issued most refunds. In an effort to address issues such as refund fraud and improper EITC payments, Congress enacted the Protecting Americans from Tax Hikes Act of 2015, which included provisions that took effect in 2017. The act required employers to submit W-2s to the Social Security Administration (SSA) by January 31, which is about 1 to 2 months earlier than in prior years. SSA then provides W-2 data to IRS for verifying employee wage and withholding data on tax returns. The act also required IRS to hold refunds for all taxpayers claiming the EITC or ACTC until February 15. Now that IRS has earlier access to W-2 information, IRS is using it to conduct additional verification checks before issuing billions of dollars in potentially fraudulent refunds.

IRS issues individual taxpayer identification numbers (ITIN) to certain non-U.S. citizens who have federal tax reporting or filing requirements and do not qualify for SSNs. The Protecting Americans from Tax Hikes Act required taxpayers that filed a U.S. federal tax return containing an ITIN to renew the number if the ITIN was not used on at least one tax return in the past 3 years or it was issued prior to 2013 and contained certain middle digits. IRS reported that it deactivated approximately 12.4 million ITINs in 2017 and notified affected taxpayers via mail and public notices.
If affected taxpayers did not renew their ITINs either before filing or in conjunction with filing, their refunds may have been delayed.

Tax Cuts and Jobs Act

The Tax Cuts and Jobs Act made a number of significant changes to the tax law affecting both individuals and corporations. For example, for individual taxpayers, for tax years 2018 through 2025, tax rates were lowered for nearly all income levels, some deductions from taxable income were changed (personal exemptions were eliminated while the standard deduction was increased), and certain credits, such as the child tax credit, were expanded. For individuals with business income reported on their tax return (pass-through entities), effective tax rates can be reduced with a 20 percent deduction of qualified business income. For corporate filers, the tax rate was changed from a range between 15 and 35 percent to a flat rate of 21 percent, and the corporate alternative minimum tax was eliminated. IRS must take action to make the necessary changes to process tax returns in 2019 and to help taxpayers understand the new law and its effect on their tax obligations. For example, IRS has planned and begun conducting outreach to employees, employers, and industry associations encouraging employees to reassess their withholdings in light of changes the law made to deductions and credits that may affect tax liability and withholding for a large number of taxpayers.

IRS Improved Customer Service, Managed Multiple Challenges Processing Returns, and Identified More Potential Fraud and Noncompliance Compared to Last Year

Customer Service Generally Improved During the 2018 Filing Season

IRS's telephone, online, and in-person services generally improved during the 2018 filing season compared to prior years. However, timeliness in responding to written correspondence declined from last year.
Our prior recommendations could help IRS better manage its correspondence performance and develop a comprehensive customer service strategy to improve its efforts.

Telephone Service

During the 2018 filing season, IRS slightly improved its telephone level of service—the percentage of callers seeking and receiving live assistance—and reduced wait times (see fig. 2). From January 1 through April 21, 2018, IRS estimated that it answered 80 percent of calls seeking live assistance, which is a slight increase from about 79 percent for the same period last year, and reduced the average caller's wait time to speak to an assistor from 6.5 to 5.1 minutes. This marks the third year of measured improvements since IRS reached a low of 37.5 percent level of service in 2015 with a 23.1-minute average wait time. IRS officials attributed the improvements to decreased telephone call volume and sufficient staff levels to meet the demand for service. IRS expected its level of service for the entire fiscal year 2018 to be 75 percent, which is similar to fiscal year 2017 when IRS achieved a 77.1 percent level of service. Total call volume to IRS taxpayer service lines has declined by about 43 percent since 2013 (see fig. 3). IRS officials attributed the decline in call volume to several factors, including targeted media campaigns to ensure taxpayers had the information they needed to prepare and file their tax returns prior to the filing season, fewer attempts by callers to re-dial multiple times after receiving busy signals or disconnects or abandoning the call after long wait times, and moving calls inquiring about balances due and installment payments to the compliance division, which, according to IRS data, accounted for approximately 2 million calls in the 2018 filing season. The percentage of calls that IRS assistors have answered since 2013 has generally increased, while calls answered by automated services have generally decreased.
IRS officials attributed the decrease in automated calls answered to discontinuation of the e-file personal identification number (PIN) automated retrieval service in June 2016, along with a decrease in callers using the Where's My Refund automated service. In December 2014, we recommended that IRS systematically and periodically compare its telephone service to the best in business to identify gaps between actual and desired telephone performance. In response, IRS benchmarked its telephone service, measures, and goals to comparable agencies and companies in an internal 2016 study. IRS projected that achieving an 83 percent level of service would optimize its balance between wait-time, disconnects, and assistor availability. However, officials told us in June 2018 that they are adjusting this projection based on new services and procedures introduced since the 2016 study. The study also recommended exploring using new technology, including email, online chat, and telephone call-back features as well as establishing regularly scheduled follow-up benchmarking. In March 2018, IRS officials told us they are implementing some of the recommendations from the study, including requesting funding to implement a customer call-back feature. IRS is also developing new methods of monitoring and reporting service performance across telephone, online, and in-person channels to identify changes in taxpayer behavior and better adapt to their needs. IRS telephone performance data for 2018 were unavailable from November 2017 until March 2018. IRS officials explained that IRS was upgrading the Enterprise Telephone Data System—IRS's official source for all data related to its toll-free telephone performance measures—to a more current version. Before IRS completed the upgrade, the system crashed. Due to the system outage, IRS was unable to publish its reports on telephone performance.
IRS officials told us that while the system remained offline, they could still monitor daily call demand and staff resources, which they used to develop an estimated level of service to monitor telephone performance. Once the system was operational, IRS recovered and validated the data, confirming that the data they used while the system was offline were sufficiently accurate. In addition, IRS replaced the approximately 15-year-old telephone equipment it uses for answering taxpayer calls because of ongoing failures that contributed to poor service. For example, at times the assistor could hear the customer speaking, but the customer could not hear the assistor. The new equipment will enable future service improvements such as a call-back feature so customers will not have to wait on the line for a response. IRS completed the upgrades as planned in June 2018.

Correspondence

Because the same staff answer telephone calls and respond to correspondence, IRS has continued to struggle to balance competing demands for maintaining quality telephone level of service with timely responses to written correspondence. Between October 1, 2017 and April 21, 2018, IRS received over 9 million pieces of correspondence. IRS staff focus on answering the telephones during the filing season, so they have less time to respond to correspondence, resulting in inventory and processing time increases. As it had in prior years, IRS directed staff to focus on correspondence early in December 2017 and January 2018 to reduce the inventory before the filing season. However, through April 21, 2018, the overage rate of correspondence—the percentage of cases generally not processed within 45 days of receipt by IRS—was 36.8 percent compared to 26.4 percent at the same time last year.
To improve the management of taxpayer services, in 2015 we recommended that the Secretary of the Treasury update the Department of the Treasury's (Treasury) performance plan to include overage rates for handling taxpayer correspondence as a part of Treasury's performance goals. To implement this recommendation, we suggested that Treasury include this performance measure as part of a comprehensive customer service strategy. Treasury neither agreed nor disagreed with our recommendation, and as of June 2018, it had not included correspondence overage rates as a performance goal in its performance plan. We continue to believe that this recommendation is valid.

Online Services

IRS established its new online account service in November 2016 and taxpayer use of this service has increased since then. The online account service was unavailable to new users between mid-October and early December 2017 because of a security breach at Equifax, the service IRS used to verify users' identities. In September 2017, Equifax announced that criminals had exploited a vulnerability in its systems and obtained personally identifiable information on 145.5 million individuals, including names, SSNs, birth dates, addresses, and in some cases, driver's license information. IRS suspended its online account service, eventually reactivating it when it replaced Equifax's identity verification service with another provider. IRS's online account allows taxpayers to view their IRS account balance (including the amount they owe for tax, penalties, and interest), take advantage of various online payment options, and access the Get Transcript application where taxpayers can obtain copies of their prior tax returns. Despite these challenges, use of IRS's online account has increased since its launch. Between January 1, 2018 and April 30, 2018, total unique users of the online account reached over 1 million compared to 327,000 for the same period in 2017 when the service was newly launched.
In addition, taxpayers increasingly used the online account to access payment options, including payment agreements. For example, taxpayers made four times as many payments using the online account to access Direct Pay, IRS's online payment option, between January 1 and April 30, 2018 compared to the same period last year. IRS experienced a separate online service disruption prior to the 2018 filing season. Tax professionals could not access e-services between September and October 2017 because of an IRS delay in a scheduled upgrade to the system and improvement to the security of the application. This service is used by tax professionals to conduct transactions, including applying for authorization as an e-file provider. As a result of this delay, tax professionals were unable to use this key service during a critical planning period prior to the filing season, shortening the amount of time available to complete the necessary actions before filing season. Despite this delay, IRS officials told us that more than 60,000 tax professionals were able to complete their transactions in preparation for the 2018 filing season. Finally, IRS launched a redesigned website in August 2017 to make it easier to use and find information. Website use during the 2018 filing season showed the greatest year-to-year increase over the past 5 years (see fig. 4). From January 1 through April 21, 2018, visits to irs.gov increased by about 24.2 percent compared to the same period last year (from 311.4 million to 386.9 million). During that same period, total page views increased by about 50.4 percent (from 1.27 billion to 1.91 billion).

In-Person Services

In-person visits to IRS's Taxpayer Assistance Centers (TAC) have declined since IRS began requiring appointments for in-person service in 2016. During the 2018 filing season (January 1 through April 21, 2018), IRS served 1 million taxpayers at the TAC locations compared to about 1.3 million during the same period in 2017.
However, IRS officials reported that, between January 1 and April 30, 2018, over half of the approximately 1.6 million taxpayers requesting an appointment had their questions resolved on the telephone and did not need an appointment. IRS policy mandates that, under special circumstances, taxpayers who arrive at a TAC without an appointment receive service if staff members are available, even when the assistors do not have appointment openings. Officials acknowledged that not all taxpayers receive service if they walk in because there are not always assistors available. As of April 30, 2018, IRS served nearly 63,000 taxpayers during the 2018 filing season under an exception to the required appointment process. IRS officials noted that the lines at TACs have shortened in recent years, which they attribute to the appointment system and services available through the telephone. Nationwide, 5.8 percent of taxpayers waited over 30 minutes for assistance between January 1 and April 21, 2018, compared to 5.6 percent during the same period in 2017, according to IRS data. Service improved compared to the same period for 2013 to 2016 when between 27 and 33 percent of taxpayers waited over 30 minutes for assistance. To improve the appointment process, in 2018 IRS developed the Field Assistance Scheduling Tool, which helps IRS manage appointments at the TACs and monitor availability and demand. IRS expects to add to this tool by developing reporting capabilities for managing staff availability and appointments, including the capability to measure the time lapse between when a taxpayer calls to schedule an appointment and the actual appointment. According to IRS officials, by using the tool’s current capabilities, they identified the need to recruit and train nearly 100 employees from other areas of IRS to support increased demand at 27 TAC locations near the end of the filing season. IRS also provided alternative options for in-person taxpayer services. 
In January 2017, IRS opened four co-locations with the Social Security Administration (SSA). During the 2018 filing season, 708 taxpayers received in-person service at these co-locations as of April 21, 2018. In May 2018, IRS officials said they were working to open an additional co-location with SSA. In addition, IRS added six virtual assistants—kiosks that provide video calling to an IRS assistor—to the 31 existing terminals across the United States.

Customer Service Strategy

We have made several recommendations for IRS to improve its customer service. In December 2012, we recommended IRS develop a strategy to improve telephone and correspondence service. While IRS has taken steps toward implementing related recommendations, including the telephone benchmarking study mentioned earlier, IRS has not completed the actions we recommended, including (1) outlining a comprehensive strategy that defines appropriate levels of correspondence service and wait time and (2) listing specific steps to manage service based on an assessment of time frames, demand, capabilities, and resources. However, IRS officials told us in June 2018 that they had begun drafting a customer service strategy that they expected to complete by September 2018. We will assess this strategy once it is issued. Additionally, in December 2011 and April 2013 we made recommendations that call for IRS to develop a long-term strategy for providing and improving web-based services to taxpayers. In June 2018, officials in the Office of Online Services stated that they do not have a specific strategy that outlines their long-term vision for increasing online services and web offerings. Rather, they rely on IRS's fiscal year 2018–2022 Strategic Plan to provide that vision.
The fiscal year 2018–2022 Strategic Plan includes objectives related to expanding digital options for taxpayers and professionals to interact efficiently with IRS, and developing additional self-assistance and correction tools for enhanced online account capabilities. However, this plan is at a high level and does not include business cases for new online services that describe the potential benefits and costs of the projects, timelines, and a prioritization of proposed projects. In July 2018, IRS officials provided additional documentation that we are reviewing to assess the steps being taken to develop a long-term strategy to improve web services for taxpayers.

IRS Managed Multiple Processing Challenges During the 2018 Filing Season Including Changes in Tax Law and Issues with Hiring and Redistributing Work Responsibilities

IRS started the filing season on January 29, 2018, approximately 1 week later than it has in recent years to ensure the security and readiness of processing systems and to assess the potential impact of recently passed tax laws on 2017 tax returns. IRS also extended the filing deadline by 1 day after a system outage occurred on tax day, April 17, 2018, that prevented IRS from processing electronically filed returns. Taxpayers were able to prepare and submit returns electronically during the day, but a flaw in the mainframe prevented data from being accepted and released for processing. IRS officials said the problem was caused by a hardware issue in a 1.5-year-old mainframe subcomponent and was not related to IRS applications or any of the agency's legacy computer systems. The system failure affected a number of electronic applications, including Direct Pay and the online account service, and delayed return processing until the end of the day. IRS officials said that the agency recovered the system without data loss and worked with software companies to coordinate their transmission of returns that were held earlier in the day.
These officials said the agency was able to process all returns submitted electronically by the end of the day. Neither the system issue nor the later start had a significant effect on returns processing during the filing season. As of April 20, 2018, IRS had processed 130.48 million returns, compared to 128.85 million by the same time last year. IRS experienced several additional challenges during the 2018 filing season, including multiple pieces of legislation affecting individual tax returns that passed soon before the beginning of the filing season or after it had begun, as well as issues hiring and redistributing work responsibilities in some IRS processing facilities.

Changes in Tax Law

Disaster relief. On September 29, 2017, Congress passed a law which provided tax relief related to retirement plan distributions and casualty losses for people affected by Hurricanes Harvey, Irma, and Maria. The law allowed storm victims to deduct disaster losses on their 2017 returns or on amended 2016 returns. On February 9, 2018, Congress extended these benefits to certain taxpayers affected by wildfires in California. The President also issued major disaster declarations for many areas affected by the hurricanes and wildfires, allowing IRS to use its authority to postpone certain tax-related deadlines under the Robert T. Stafford Disaster Relief and Emergency Assistance Act. The laws also offered other forms of tax relief, such as hardship distributions from employer-sponsored retirement plans.
To address issues resulting from disaster-related legal changes, IRS issued press releases and public notices informing taxpayers of tax- relief options; postponed various filing and payment deadlines for individuals and businesses affected by disasters; ensured that sites offering in-person taxpayer assistance in Puerto Rico, Florida, and Texas were open and developed special products to support these sites in dealing with affected taxpayers; and adapted procedures to accommodate disaster-relief efforts. IRS officials also said they corresponded with taxpayers they thought were eligible for new disaster relief benefits as a result of legal changes put in place. The officials told us that as of May 26, 2018, the agency had assisted 37,000 taxpayers seeking live telephone assistance and worked or closed 6,196 amended returns and 8,847 correspondences related to Hurricanes Harvey, Irma, and Maria. Tax Cuts and Jobs Act. While many of the provisions included in the Tax Cuts and Jobs Act will not affect filing until the 2019 filing season, a few changes affected filing in 2018. For example, the threshold to claim the medical expense deduction was temporarily lowered, allowing individuals to claim deductions for medical expenses totaling more than 7.5 percent of their adjusted gross income for tax years 2016 and 2017. Also, provisions similar to those described above were implemented for certain qualified federally declared disasters that occurred in 2016. The law passed shortly before the start of the filing season and IRS had to recall, revise, and re-issue more than 100 products that had already been published. In addition, several provisions affecting business filers presented processing challenges during the 2018 filing season. 
For example, IRS made changes to its forms to address fiscal year filers whose earnings will be taxed at different rates for 2017 and 2018 (referred to as blended rate) and developed forms and instructions for filers whose returns involve the foreign earnings of foreign subsidiaries of U.S. companies. Officials told us they processed returns subject to the blended rate provision manually and held returns affected by the foreign earnings provision until they completed necessary programming changes for the systems to process them in accordance with the new law. As of May 18, 2018, the agency was holding 2,265 affected individual and business returns. IRS officials said they completed the programming required to process all of these returns automatically by July 2, 2018. However, depending on when IRS completes processing these returns, it may need to pay interest on some refunds. IRS officials said they do not expect many of the held returns affected by the foreign earnings provision to claim refunds. Extension of expired tax provisions. On February 9, 2018, after some taxpayers had already filed their 2017 taxes, Congress extended to 2017 a number of temporary tax provisions that expired at the end of 2016. These provisions include deductions for qualified tuition and related expenses and the ability to deduct premiums for mortgage insurance as interest. Testifying before Congress, the Acting Commissioner of IRS described the extensions as a major processing challenge and said this is the only time the agency has been required to implement retroactive tax extensions after the beginning of a filing season. To address the extensions, IRS officials told us they reprogrammed systems to accept taxpayer claims related to these retroactively extended provisions; recalled, revised, and re-released more than 50 already published products; and held 5,624 individual returns while necessary programming changes were made to ensure proper processing. 
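The blended rate provision for fiscal year corporate filers described earlier in this section follows a day-weighting computation under the Internal Revenue Code: tax is figured on full-year income under both the pre-2018 and post-2017 regimes, then prorated by the number of fiscal-year days falling before and on or after January 1, 2018. The sketch below is a simplification for illustration, assuming a flat 35 percent rate in place of the old graduated schedule; the function name and rates are assumptions, not IRS's actual processing logic:

```python
from datetime import date


def blended_tax(taxable_income, fy_start, fy_end,
                old_rate=0.35, new_rate=0.21):
    # Compute tax under each regime on full-year income, then weight
    # each result by the share of fiscal-year days in its period.
    cutover = date(2018, 1, 1)
    total_days = (fy_end - fy_start).days + 1
    days_old = max((cutover - fy_start).days, 0)  # days before Jan 1, 2018
    days_new = total_days - days_old              # days on/after Jan 1, 2018
    tax_old = taxable_income * old_rate
    tax_new = taxable_income * new_rate
    return tax_old * days_old / total_days + tax_new * days_new / total_days


# Fiscal year July 1, 2017 - June 30, 2018 (184 old-rate days, 181 new-rate
# days) on $1,000,000 of taxable income.
tax = blended_tax(1_000_000, date(2017, 7, 1), date(2018, 6, 30))
```

For the example fiscal year, the result falls between the $210,000 a full new-rate year would yield and the $350,000 a full old-rate year would yield, weighted slightly toward the old-rate period.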
Issues Hiring and Redistributing Work Responsibilities IRS faced challenges in two of its five paper processing centers related to hiring and redistributing work responsibilities. The center in Ogden, UT experienced issues related to changes in work assigned to the site while the center in Austin, TX experienced ongoing hiring difficulties. Despite these challenges, IRS officials reported that the agency was able to meet all of its target dates for processing returns and issuing refunds. Ogden. To realize cost savings from the decrease in paper filing as a result of increased electronic filing, IRS began to consolidate its paper processing centers in 2018. As part of this plan, IRS moved some individual paper return processing to its facility in Ogden. This facility had not processed individual returns since 2000 and IRS officials told us that the lack of recent experience with this kind of work caused processing to fall behind targets. For example, as of March 2, 2018, Ogden had missed IRS targets for return processing time by between 14 and 15 days, depending on the form type. Officials told us the agency had reintroduced Ogden to the work gradually, by assigning fewer returns to the site in the first year; nevertheless, the site still experienced delays. For example, as of March 2, Ogden had processed 10.6 percent of the 202,000 returns expected, while the processing centers in Fresno, CA and Kansas City, MO had processed 98.5 percent (723,000 out of 734,000) and 98.2 percent (545,000 out of 555,000) of their expected returns respectively on the same date. IRS minimized the effects of these delays on overall processing by transferring returns initially sent to Ogden to the Kansas City location, which enabled IRS to meet its overall processing goals. Later in the filing season, processing at Ogden had improved, but still had not reached IRS’s goal for the site. 
For example, as of May 11, 2018, Ogden was at approximately 73 percent of schedule, having processed 716,000 out of 977,000 scheduled returns. IRS officials said that responding to changes in work flows is a normal aspect of processing across all locations, but noted that the agency continued to monitor the situation in Ogden and learn from the experience to guide future consolidation efforts. Austin. This processing facility, slated for closure in 2024, also experienced processing delays. As we reported in 2017, and as IRS officials told us again this year, IRS was unable to hire enough personnel to process paper tax returns at this site, which may be due to low unemployment rates in the area. IRS officials told us Austin planned to hire 567 employees by early March to transfer data from paper returns to an electronic format, but had only been able to hire 142 people, or 25 percent of that target. IRS officials told us the position was perceived as undesirable in a low-unemployment environment. The officials said they had addressed the issue by (1) moving resources as needed within the service center and (2) transferring returns to the Kansas City facility for processing. IRS Identified More Potential Fraud and Noncompliance by Verifying Wage Information Than It Did at the Same Point in the 2017 Filing Season IRS identified more potential fraud and noncompliance through February 15, 2018, than it had by the same time last year. In its second year of receiving earlier W-2 data from SSA to match against returns, IRS identified a larger number of potentially fraudulent or noncompliant returns claiming the EITC or ACTC prior to issuing refunds—340,000 compared to 162,000 at the same point in 2017. IRS also reduced the percentage of returns for which it was unable to verify wage information to 13 percent, compared to 58 percent in 2017. 
IRS officials told us this was, in part, a result of receiving 224 million W-2s by February 15 compared to 214 million by the same time in 2017. Having more W-2 data available earlier also allowed IRS to better target its selection of returns for review, helping to reduce taxpayer burden and IRS workload. For example, IRS had excluded 10,000 returns from review as of February 15, 2018, compared to 3,000 during the same time in 2017. In addition, IRS improved its ability to identify potentially false and fraudulent returns claiming the EITC or ACTC—including those for which it did not have W-2 data at the time of identification—by developing two new filters that automated some aspects of the manual review process used in 2017. IRS developed the new filters based on cases of confirmed fraud identified through systemic verification in 2017 and selected returns with characteristics that are more likely to be fraudulent or noncompliant. The filters select returns for review among those reporting information that does not match corresponding W-2 data and that IRS could not verify because it did not have W-2 data at the time of selection. Last year, IRS identified 12,000 cases of confirmed fraud from the 162,000 cases it selected for review. IRS officials told us that they do not have final data at this time, but that they anticipate they will confirm more cases of fraud and noncompliance in 2018 as a result of these filters. Returns with refunds not claiming EITC or ACTC benefits are also subject to systemic verification as well as additional fraud filters. However, for returns not claiming these benefits, IRS does not hold refunds when it is unable to verify wages reported by the taxpayer unless the returns are selected by other fraud filters for review.
As we reported in January 2018, IRS cannot verify information reported for more than half of returns submitted early in the filing season prior to issuing refunds because it receives W-2 information throughout the filing season. In 2017 and 2018, IRS received and processed the majority of W-2s by mid- to late February. In addition, IRS verified most wage information on returns submitted in mid-February as being accurate. IRS verified that accurate wage information was reported on 77 percent of returns not claiming the EITC or ACTC submitted between February 9 and 15, 2018, representing $10.91 billion in refunds. However, IRS does not have data available early in the filing season that would help it better identify which returns are potentially fraudulent or noncompliant. As a result, IRS issues refunds for a large percentage of returns without the EITC or ACTC that cannot be verified against W-2 data prior to February 15. For example, among 2017 returns without EITC or ACTC, IRS was unable to verify 91 percent of returns submitted before January 25, 2018—representing $4.27 billion in refunds; and 60 percent of returns submitted prior to February 15—representing $29.27 billion in refunds. IRS has the authority to hold refunds for these returns (as it does for returns that do claim the EITC or ACTC) until any date deemed necessary to make inquiries, determinations, and assessments in conjunction with those determinations. However, IRS officials told us that IRS has not held those refunds because of the volume of existing cases, challenges of processing large numbers of refunds on a single day, and other costs to the agency, such as inquiries from taxpayers about their refunds. In January 2018, we recommended that IRS study the benefits and costs of the refund hold and consider modifying it based on the study results.
For example, IRS could hold refunds for taxpayers not claiming EITC or ACTC and release the refunds once it has the W-2 data available and has verified the wage information. IRS officials reiterated that the potential of verification to detect more fraud and noncompliance is limited by delays caused by filing extensions and use of paper W-2s—which are transcribed at SSA before being transmitted to IRS. For example, IRS had not received any paper W-2 data for tax year 2017 by the February 15 refund hold date. IRS is continuing to study systemic verification’s potential, and is working to identify additional fraud and noncompliance by beginning to match non-wage income reported by taxpayers against data reported on Forms 1099-MISC by companies or individuals that paid the taxpayer miscellaneous income. IRS Continued to Deactivate and Renew ITINs The Protecting Americans from Tax Hikes Act also contained a number of provisions relating to individual taxpayer identification numbers (ITIN). The provisions required IRS to deactivate (1) all ITINs issued prior to 2013 and (2) all ITINs not used at least once during the 3 most recent consecutive tax years. As of February 26, 2018, IRS said it had deactivated 14.7 million ITINs, approximately 12.4 million of those in 2017 and an additional 2.3 million in 2018. Following this initial round of deactivations, ITIN renewal requests have been significantly lower than IRS anticipated. IRS expected it would receive 1.3 million renewal applications by the end of 2018 for ITINs that expired in 2017. However, by April 21, 2018, IRS had only received 23 percent (297,825 of 1.3 million) of the expected renewals. IRS officials said they based their renewal projections on a computation assuming that all ITINs with middle digits 78 and 79—which were issued 16 or more years prior to their deactivation and were the first set of older ITINs to be deactivated—would be renewed. 
However, the actual renewal rate in 2017 was only 60 percent for these ITINs. IRS officials said the agency used actual renewal data to revise its renewal estimate for the remaining ITINs issued prior to 2013 and containing certain middle digits that will be deactivated. Based on these new estimates, IRS will accelerate the completion date for deactivation of older ITINs.

IRS Developed a Management Structure to Implement the Tax Cuts and Jobs Act and Address Associated Challenges and Took Steps to More Fully Involve Human Capital Decision Makers

IRS Developed a Management Structure to Implement the Tax Cuts and Jobs Act and Took Steps to More Fully Involve Its Human Capital Decision Makers

To address the changes included in the Tax Cuts and Jobs Act, in January 2018 IRS established the Tax Reform Implementation Office (TRIO), a central office that coordinates implementation efforts. IRS officials said that the 2017 tax law will affect all IRS divisions and responsibilities. Each of the 119 provisions in the Tax Cuts and Jobs Act that fall under IRS responsibility has been assigned to one of IRS’s four business divisions—Wage and Investment, Large Business and International, Small Business/Self-Employed, and Tax-Exempt and Government Entities—each of which will be responsible for planning and executing the assigned provisions. In addition to TRIO, IRS also established the Tax Reform Executive Steering Committee and the Tax Reform Implementation Council (TRIC), described below: Tax Reform Implementation Office (TRIO). TRIO principally consists of executive-level IRS employees and coordinates efforts by each business operating division to revise and develop forms, instructions, tools, and guidance and to execute programming changes, communications, and training initiatives required to implement the individual provisions of the Tax Cuts and Jobs Act.
The office is intended to monitor the implementation action plans of each business division and ensure risks associated with implementation efforts are captured and addressed. TRIO has developed an integrated project plan to track critical implementation activities identified by the business divisions and discussed by TRIC (described below). Personnel can access the project plan and update it with accomplishments and milestones. Tax Reform Executive Steering Committee. TRIO reports to the Executive Steering Committee, which includes IRS’s Acting Commissioner, Deputy Commissioners, Treasury officials, and heads of offices. The steering committee serves as a forum to provide leadership guidance, direction, and advice on implementation activities for the Tax Cuts and Jobs Act. Tax Reform Implementation Council (TRIC). TRIC consists of representatives from business divisions and functional units—such as Information Technology (IT) and Communication and Liaison—that are performing implementation activities. The group first met on February 8, 2018, and meets weekly to discuss activities, concerns, and needs that might involve other IRS divisions. The meetings are also a forum to discuss accomplishments and deadlines. Figure 5 illustrates TRIO’s role in coordinating the various changes IRS expects to make. To implement the Tax Cuts and Jobs Act, IRS’s Human Capital Office (HCO) estimated that the agency will need to hire and train staff to fill approximately 1,100 positions requiring a variety of competencies and provide additional training on tax law changes for current employees. HCO will be responsible for recruiting and hiring these new employees and ensuring they have the needed skills, and it will play a key role in training them. It is HCO’s mission to provide human capital strategies and tools for recruiting, hiring, developing, retaining, and transitioning a highly skilled and high-performing workforce to support IRS’s mission.
TRIO and other senior IRS officials acknowledged that HCO’s role in implementing the new tax law is as valuable as that of other supporting stakeholders, such as IT. Nevertheless, HCO did not initially have representation in TRIC, unlike IT and other essential operational support units. TRIC meetings provide a forum not only for the business operating divisions directly implementing the provisions of the Tax Cuts and Jobs Act to discuss and coordinate needs and activities, but for supporting stakeholders to understand the status of implementation efforts as well as future expectations and needs. HCO officials said that when the formation of TRIO was first announced, they contacted TRIO leadership to request that HCO have representation. However, they were told that the purpose of the group was to discuss the tax law itself, not hiring or other human resources matters affected by the law. In our discussions with IRS officials, they told us that HCO has an informal liaison to TRIO, participates in the executive steering committee, and has existing human resource partners in the business operating divisions, and that additional HCO representation in tax law implementation—including the weekly TRIC calls—was not necessary. However, a senior HCO official told us that it would be beneficial for HCO to participate in the weekly TRIC meetings to stay abreast of current developments and future plans and share relevant timelines and processes related to hiring and training. Participation will help HCO to manage its operations more strategically, for example, by planning for training required ahead of the 2019 filing season. Based on our discussions with IRS officials about HCO’s role in tax law implementation, in June 2018, HCO began participating in the weekly TRIC calls. HCO’s participation will likely help IRS make more informed decisions concerning implementation of major tax law changes.
It will also position HCO to proactively understand human capital needs and timelines across the agency and to hire and train personnel at the appropriate times. At the same time, IRS will also be better positioned to improve its management and strategy for executing implementation plans while also fulfilling the agency’s mission. IRS Identified the Scope, Nature, and Time Frame of the Tax Cuts and Jobs Act as Implementation Challenges IRS officials identified a number of challenges associated with implementing the Tax Cuts and Jobs Act: Scope of changes. To implement 119 provisions of the Tax Cuts and Jobs Act, IRS will need to (1) interpret the law; (2) create or revise nearly 500 tax forms, publications, and instructions; (3) publish guidance and additional materials; (4) reprogram 140 interrelated return processing systems; (5) hire additional staff and train its workforce to help taxpayers understand the law and how it applies to them; and (6) conduct extensive taxpayer outreach. IRS officials stated that these provisions will require extensive changes relevant to both individual and business filers and affect all areas of IRS. Complex and extensive nature of changes. According to IRS officials, many of the revisions are complex and interrelated and require central coordination and oversight. While IRS has to make changes to its products every year, many of the changes needed to implement the Tax Cuts and Jobs Act are more extensive than usual and affect some of the forms with which taxpayers are most familiar. For example, all Form 1040 products—the forms and instructions for individual tax return filing—will be changed in accordance with the law. One-year time frame. IRS officials told us that implementing the Tax Cuts and Jobs Act in 1 year will be challenging. Officials said the agency is using implementation of the Patient Protection and Affordable Care Act as a general guide for its current efforts, but noted this earlier legislation was less expansive. 
IRS was responsible for 47 provisions of the Patient Protection and Affordable Care Act and had multiple years to implement some of its provisions, including those officials identified at the time as the most challenging. Implementing individual provisions of the Tax Cuts and Jobs Act involves multiple, dependent actions. For example, IRS cannot determine the changes it will need to make to various tax forms until it has interpreted the law and cannot reprogram its return processing systems until those forms are changed. To complete necessary changes in time for the 2019 filing season, IRS has used overtime and compensatory hours. For example, according to IRS officials, as of May 26, 2018, IRS had used 1,749 overtime hours to make changes to forms and publications, between two and three times as many overtime hours as it did in the entirety of fiscal years 2016 or 2017. In addition, the agency delegated authority to approve requests for work to a larger group of managerial staff and temporarily reassigned existing staff to assist with time-sensitive changes to tax forms and publications. In March 2018, IRS also made a request for direct hiring authority, which would allow the agency to hire IT staff more quickly. While this authority could help the agency fill specific positions more quickly, IRS officials explained that these staff will require training on tax processing procedures. According to a senior IRS official, as of June 2018 the Office of Personnel Management had not yet authorized this request. IRS has taken a number of steps to implement time-sensitive provisions of the new law. IRS officials noted that while some provisions of the Tax Cuts and Jobs Act are retroactive or relevant to the 2018 filing season, most will not take effect until the 2019 filing season. As part of the planning process, IRS determined when various provisions of the law would become relevant and acted to release information on the provisions with the earliest relevance first.
For example, IRS released new withholding tables and associated guidance; revised the form and online withholding calculator that taxpayers use to provide information to employers about the amount of tax that employers should withhold from their wages; and provided guidance on the transition tax on untaxed foreign earnings of foreign subsidiaries of U.S. companies, a new section of the Tax Cuts and Jobs Act that changes how business income is calculated and tax is paid for the 2018 filing season. IRS is continuing to revise its forms and issue guidance in advance of the 2019 filing season. Agency Comments and Our Evaluation We provided a draft of this report to the Internal Revenue Service for review and comment. IRS provided written comments, which are reproduced in appendix I. In its written comments, IRS generally concurred with our findings and noted a concern regarding interpretation of correspondence overage data. IRS said that the overage rate we report is based upon the open inventory at the end of the fiscal year. We clarified the basis of the overage rate in our report. However, we believe the total that IRS cites in its letter could also be misinterpreted in that it does not represent the total overage inventory; rather it is a total for the last week of the fiscal year. IRS tracks the overage correspondence rate on a weekly basis, which can vary somewhat during the year given fluctuations in correspondence receipts and staff availability to respond, but is relatively consistent throughout the year. Therefore, the overage rate at the end of the fiscal year provides a basis for assessing IRS’s annual performance in responding to written correspondence. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Acting Commissioner of Internal Revenue, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix II. Appendix I: Comments from the Internal Revenue Service Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tom Gilbert (Assistant Director); Erin Saunders Rath (Analyst-in-Charge); Shea Bader; Jacqueline Chapin; Jehan Chase; Kirsten B. Lauber; Regina Morrison; Robert Robinson; and Sarah Wilson made significant contributions to this report.
Why GAO Did This Study During the tax filing season, generally from January to mid-April, IRS processes over 100 million individual tax returns and provides telephone, correspondence, online, and in-person service to tens of millions of taxpayers. In 2018, IRS had to begin taking steps to implement major tax law changes passed in what is commonly referred to as the Tax Cuts and Jobs Act that affect both individuals and businesses. GAO was asked to review IRS's performance during the 2018 filing season and its efforts to implement the Tax Cuts and Jobs Act. GAO assessed IRS's (1) performance providing service to taxpayers and processing individual tax returns and (2) early efforts to implement the Tax Cuts and Jobs Act. GAO analyzed IRS documents and data and interviewed IRS officials. What GAO Found The Internal Revenue Service (IRS) generally improved its customer service during the 2018 filing season compared to prior years and managed multiple return processing challenges. For the third year in a row, IRS improved its telephone service by answering 80 percent of calls seeking live assistance and reducing wait times to about 5 minutes, as of the end of the 2018 filing season. This compares to 37.5 percent of calls answered with an average wait time of about 23 minutes during the 2015 filing season. Taxpayer use of online services also increased, including irs.gov and its online account tool for taxpayers to view their balances due. However, answering taxpayer correspondence remains a challenge—IRS was late responding to about 37 percent of correspondence as of the end of the 2018 filing season compared to about 26 percent at the same time in 2017. In 2015, GAO recommended that the Department of the Treasury (Treasury) include timeliness in handling taxpayer correspondence as part of its performance goals, but as of June 2018 Treasury had not done so. 
Overall, despite multiple challenges including mid-filing season changes to tax law and a computer system failure, IRS met its processing targets for individual tax returns. In 2018, IRS began taking steps to implement significant tax law changes from Public Law 115-97—commonly referred to by the President and many administrative documents as the Tax Cuts and Jobs Act. To implement the changes, IRS established a centralized office to coordinate implementation across IRS offices and divisions. IRS officials cited the broad scope and complexity of the changes—which will require extensive changes to tax forms, publications, and computer systems—along with the 1-year time frame as key implementation challenges. Although IRS has taken steps to address these challenges, such as developing a project planning tool, GAO found that the new coordination office did not initially fully include the Human Capital Office (HCO), the division responsible for managing the agency's workforce. Based on GAO's discussions with IRS officials, representatives from HCO now attend weekly coordination meetings discussing and planning the tax law changes. Involving HCO in these discussions will better position IRS to hire new employees and train them and the existing workforce. It will also help HCO better understand training requirements and staffing needs ahead of the 2019 filing season. What GAO Recommends Because HCO is now attending the weekly meetings, GAO is not making a related recommendation. In addition, GAO believes that its 2015 recommendation to Treasury to include timeliness in handling correspondence as part of its performance goals, with which Treasury neither agreed nor disagreed, is still valid. IRS generally concurred with GAO's findings but noted concerns with interpreting the percentage of correspondence considered “overage” (more than 45 days old).
GAO clarified its report but notes that while the open inventory of overage correspondence at the end of the fiscal year is not representative of total overage items for the year, the overage rates are relatively consistent throughout the year.
Background According to NOAA documentation on domestic aquaculture production, shellfish aquaculture represents a large and growing segment of seafood production in the United States, with aquaculture operations present in all coastal regions of the United States (see table 1). The economic value of shellfish varies based on factors such as market, location, and species. For example, one species of clam, the geoduck—a large saltwater clam found in the Pacific Northwest—has sold for as much as $100 per pound in the Asian market, where it is valued as a luxury food. NOAA and scientific research have recognized the role that shellfish aquaculture can play in supporting healthy coastal ecosystems. For example, scientific research has shown that the filter feeding activity of oysters can help improve water clarity and quality by reducing concentrations of suspended materials such as algae. Additionally, research has demonstrated that oyster reefs can serve as natural breakwaters that may protect shorelines against damage from wind, waves, and flooding. In contrast, some effects of shellfish aquaculture are less well known or understood. For instance, there are knowledge gaps of the effects that aquaculture activities may have on submerged aquatic vegetation, according to NOAA reports. Geoduck clams are the world’s largest burrowing clam, generally weighing between 1 and 3 pounds, with a shell length that can exceed 7 inches. Geoducks can be found in the wild in the Pacific Northwest, and growers in Washington State have cultivated geoducks through aquaculture on a commercial scale since the 1990s. Washington State produced about 90 percent of farmed geoducks globally in 2013, according to a report by the University of Washington. In Asian markets, geoducks are sought-after in high-end seafood restaurants where they can be prepared for cooked or raw consumption. 
In general, commercial growers cultivate shellfish by two methods: on the bottom of coastal waters, or in the water column, which extends from the surface to the bottom of those waters. Commercial growers harvest oysters and clams grown on the bottom of waters by hand or by mechanical means such as dredging, raking, or other tilling activities. Commercial growers who cultivate shellfish within the water column generally grow them in racks or cages suspended in the water (see fig. 1). Growers use different methods of cultivation depending on the target commercial market, the environment for cultivation, and the need to protect the shellfish from predatory species such as fish or crabs. Shellfish aquaculture activities can be subject to various requirements at local, state, tribal, and federal government levels. For example, local authorities in the county, town, or other jurisdiction where shellfish activities are planned may require a shellfish grower to ensure compliance with local policies before commencing cultivation activities. In addition, some states have specific regulations that apply to shellfish aquaculture activities. These can include, for example, a certification that aquaculture activities meet state water quality standards, or a requirement that the activities are covered by an aquatic lease. Treaties grant certain tribes the rights to a portion of shellfish harvest in a particular area. At the federal level, a shellfish grower may need authorization from the Corps to undertake shellfish aquaculture activities. The Corps is responsible for ensuring compliance with Section 10 of the Rivers and Harbors Act of 1899, which requires authorization for structures in or work affecting navigable waters of the United States, or both, that could interfere with navigation. Structures used in shellfish aquaculture activities may include buoys, floats, racks, nets, and lines. 
The Corps is also responsible for ensuring compliance with section 404 of the Clean Water Act, which requires authorization for the discharge of dredged or fill material, or both, into waters of the United States. Shellfish aquaculture activities such as seeding, rearing, cultivating, transplanting, and harvesting shellfish may affect waters of the United States, and the Corps reviews these activities in accordance with applicable laws and regulations. Nineteen Corps districts have coastal waters within their geographic areas of responsibility and therefore may authorize shellfish aquaculture activities (see fig. 2). Under the direction of eight regional division offices and headquarters, the district offices are responsible for reviewing, authorizing, and ensuring appropriate levels of coordination for shellfish aquaculture activities in their districts. In authorizing shellfish activities, the Corps must implement various legal requirements, which may entail consulting or coordinating with other federal agencies, states, tribes, the public, and other parties. These legal requirements include: National Environmental Policy Act. Under the act, the Corps generally must evaluate the potential environmental effects of projects proposed for approval (e.g., by permit), such as shellfish aquaculture activities, by preparing either an environmental assessment or a more detailed environmental impact statement. Endangered Species Act. Under section 7 of this act, if a proposed Corps action may affect a listed species or designated critical habitat, formal consultation is required with the U.S. Fish and Wildlife Service or the National Marine Fisheries Service. The Corps may also undertake programmatic consultations with these agencies, which generally combine reviews for similar activities into one consultation. Magnuson-Stevens Fishery Conservation and Management Act. 
Under this law, the Corps must consult with the National Marine Fisheries Service if a proposed federal action may adversely affect essential fish habitat that a regional fisheries management council has identified. National Historic Preservation Act. Under section 106 of the act, the Corps must take into account the effects of shellfish aquaculture activities on historic properties and afford the Advisory Council on Historic Preservation a reasonable opportunity to comment on such activities. The Corps must also consult with the relevant state or tribal historic preservation officer, as appropriate. The Corps uses different types of general and individual permits to authorize a wide range of activities, including shellfish aquaculture activities, as shown in table 2. In some cases, if an entity’s shellfish aquaculture activities comply with the terms and conditions laid out in a general permit, then the entity may undertake the activities without written authorization from the Corps. In such instances, according to its permitting guidance, the Corps would consider those activities to be authorized under the specified general permit. In other cases, however, entities who wish to undertake shellfish aquaculture activities under a general permit may need to submit an application to the Corps for written authorization to conduct such activities. For example, some terms and conditions may require entities to notify the Corps if their proposed activities may affect areas inhabited by submerged aquatic vegetation or endangered species or their designated critical habitats. In such instances, entities must submit applications to the Corps with required information, including the location and technical information about the proposed activity. Based on Corps guidance, the agency then assesses the applicant’s proposed activities to determine whether they comply with all of the general permit’s terms and conditions. 
If the Corps verifies compliance, it issues a written authorization for the entity to undertake the proposed activities. In March 2007, the Corps developed a nationwide permit—Nationwide Permit 48—to help streamline the process for authorizing existing commercial shellfish aquaculture activities, effective for a 5-year period. In 2012, the Corps revised and reissued Nationwide Permit 48 to, among other things, authorize new activities and to clarify some reporting requirements. The Corps most recently reissued Nationwide Permit 48 in March 2017, which defined the activities that constitute new commercial aquaculture activities, among other revisions, and remains in effect until March 2022. Corps districts may also develop and use other types of programmatic and regional general permits to authorize shellfish aquaculture activities. Generally, entities that submit an application and receive authorization under a general permit need to resubmit their application upon expiration of their permit to re-seek authorization to continue their aquaculture activities.

The Corps Authorized Most of the 3,751 Applications Received for Shellfish Aquaculture Activities from 2012 through 2017 Using Various Types of Permits

Based on our analysis of data from the Corps' permitting database, we found that the Corps authorized most of the 3,751 shellfish aquaculture applications it received from 2012 through 2017 using various types of general and individual permits. Of the 19 Corps districts that have coastal waters within their geographic areas of responsibility, 17 Corps districts received shellfish aquaculture applications, with the Seattle District receiving the most applications and the New England District receiving the next highest number (see table 3).
The number of applications does not correspond to the level of shellfish activity in a particular district, however, as some activities may be authorized under a general permit without triggering the need for an entity to submit an application for Corps authorization, as previously noted. Of the 3,751 applications the Corps received from 2012 through 2017, the Corps authorized 3,281, or about 87 percent of the applications, according to our analysis of Corps data. Four applications (less than 1 percent) were denied, and the remaining 466 applications (about 12 percent) were withdrawn. Applications were denied or withdrawn for a variety of reasons. For example, Corps officials we interviewed said that the Corps would deny an application if the applicant was denied the necessary approvals from state or other relevant regulatory authorities. An application may have been withdrawn, according to the Corps officials, if the applicant decided to seek an individual rather than a general permit or did not provide sufficient information in its application for the Corps to determine that the applicant could meet the terms and conditions of the requested permit, among other reasons. According to Corps data, the applications the Corps authorized from 2012 through 2017 corresponded to 2,631 unique shellfish aquaculture projects. Almost half of these projects (49 percent) were located in the Seattle District, about 29 percent were located in the New England District, and the remaining 22 percent were spread across 15 other coastal districts. The majority of Corps districts (13 of 17) authorized shellfish aquaculture applications using Nationwide Permit 48, according to our analysis of Corps data. Specifically, nearly two-thirds of the applications (2,138 of 3,281) were authorized under Nationwide Permit 48, as shown in table 4. 
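As a quick arithmetic check, the application-outcome counts above can be verified to sum to the 3,751 applications received and to yield the reported shares. This is a minimal sketch; the counts are taken directly from the report's figures, and the report rounds to whole percentages (87 percent, 12 percent, less than 1 percent).

```python
# Outcome counts for the 3,751 shellfish aquaculture applications
# the Corps received from 2012 through 2017, per the report.
total = 3751
outcomes = {"authorized": 3281, "withdrawn": 466, "denied": 4}

# The three outcomes should account for every application received.
assert sum(outcomes.values()) == total

# Print each outcome's share of the total.
for label, count in outcomes.items():
    print(f"{label}: {count} ({count / total:.1%})")
```

Running this reproduces the reported breakdown: about 87 percent authorized, about 12 percent withdrawn, and less than 1 percent denied.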
Four districts did not authorize activity under Nationwide Permit 48, but instead used a different type of general permit to authorize shellfish aquaculture activity. For example, the New England District, which includes the states of Connecticut, Rhode Island, and Maine, authorized shellfish activity using state-specific general permits. The majority of districts (13 of 17) also authorized shellfish activities under individual permits, but overall individual permits represented about 3 percent (85 of 3,281) of authorized activity. While many applications were authorized under Nationwide Permit 48, we found that Corps districts added conditions to this or other general permits to account for state or regional environmental or other relevant regulatory concerns. For example, two districts we reviewed—Norfolk and Seattle—generally used Nationwide Permit 48 to authorize shellfish aquaculture activities in their districts, but added conditions to the nationwide permit to address concerns specific to their regions as follows:

In the Norfolk District, the Corps developed several regional conditions applicable to the Nationwide Permit 48 issued in March 2017. These regional conditions prohibit activity within submerged aquatic vegetation beds or saltmarshes and prohibit removing or damaging vegetation in these areas, among other things. Norfolk District officials said that these regional conditions align with requirements under Virginia state regulations. As long as shellfish aquaculture growers meet those requirements, according to these officials, growers may conduct their projects without obtaining a state permit or submitting an application to the Corps for authorization.
Because these growers do not submit applications to either the state of Virginia or the Corps for authorization for their activities, district officials said they do not know how much shellfish activity may be occurring under Nationwide Permit 48 in the district, but Virginia is among the largest shellfish-producing states.

In the Seattle District, the Corps also developed several regional conditions applicable to Nationwide Permit 48. For example, one regional condition prohibits harvesting clams using certain hydraulic harvesting equipment. Any entity seeking to undertake shellfish aquaculture activities in the Seattle District needs to submit an application to the Corps for authorization, district officials explained. According to the National Marine Fisheries Service, almost all locations for shellfish activity in Washington State are designated as critical habitat for one or more threatened or endangered species listed under the Endangered Species Act. The presence of listed species or their designated critical habitats is one trigger under nationwide permits, including Nationwide Permit 48, requiring entities to submit an application to the Corps for review and authorization for conducting those activities. In certain instances, Corps headquarters officials said that some districts may find that a nationwide permit, such as Nationwide Permit 48, does not address the activity or requirements in their districts. Corps officials said that in such cases a district may have a region-specific general permit that more closely follows state or local requirements. Two Corps districts we reviewed—New Orleans and Baltimore—generally used or have used regional permits to authorize shellfish aquaculture activities in their regions.
Specifically:

In the New Orleans District, when Nationwide Permit 48 was first issued in 2007, Corps officials said that the district was generally using a programmatic permit that incorporated existing Louisiana regulations on coastal development. The New Orleans District was generally using this programmatic permit to authorize shellfish aquaculture and other coastal activities in Louisiana. Among the conditions in the permit are prohibitions on structures in proximity to flood control and hurricane damage risk-reduction levees, and on activities that would impact barrier islands, bird rookeries, and coral reefs—coastal areas of Louisiana that are regarded by the state as environmentally sensitive. As a result, district officials said they continue to use their programmatic permit to allow the state of Louisiana a lead role in regulating coastal activities.

The Baltimore District used a regional permit to authorize shellfish aquaculture activities until August 2016. According to district officials, Maryland had few existing commercial shellfish aquaculture projects before 2010, and at that time the Corps restricted the use of Nationwide Permit 48 to existing shellfish aquaculture activities. Any new activities required an individual permit, which involved a more extensive review process. The state of Maryland began to promote shellfish aquaculture in 2010, and many new growers entered the industry, district officials said. In response, the Baltimore District created a regional permit for new shellfish aquaculture projects, which district officials said allowed for a more streamlined process than the process needed for an individual permit. The regional permit expired in August 2016; instead of updating it, the Baltimore District replaced it with Nationwide Permit 48.
The versions of Nationwide Permit 48 issued in March 2012 and March 2017 cover new as well as existing shellfish aquaculture activities, and district officials said that there was no longer a need for their regional permit and that they could use Nationwide Permit 48 upon the regional permit's expiration.

Applicants Across the Four Selected Districts Had Mixed Views on their Experiences in Seeking Authorization for their Shellfish Activities

Through our interviews with 15 permit applicants from the four districts we reviewed and with Corps district and headquarters officials, we found that applicants had mixed views on their experiences in seeking authorization for their various shellfish aquaculture activities. Overall, 10 of the 15 applicants across the four districts said they understood the application process. Several of these applicants said that their knowledge stemmed from previous experience seeking authorization from the Corps or from information provided by state or Corps officials. Similarly, 10 applicants from the four districts described the length of time the Corps took to authorize their activities as reasonable, with several applicants commenting that the Corps was efficient in reviewing and authorizing their application. For these applications, the length of time ranged from 1 day to about 4 months. In contrast, 11 permit applicants across the four districts cited facing one or more difficulties with various aspects of the application process. For example, 5 of the 15 applicants indicated they were unclear about what steps were involved in the application process, such as the information they needed to submit as part of the application or how to meet the requirements outlined in the permit terms and conditions. One applicant in the Seattle District said it was difficult to know how to address a condition in Nationwide Permit 48 that restricts shellfish activity in areas adjacent to potential spawning habitat for certain species of forage fish.
When he sought clarification from the Corps, he said, Corps officials could not specify how far away from spawning habitat his project should be located. Seattle District officials said the Corps has been reviewing how to consistently define adjacent spawning areas, among other requirements, but had not yet made a determination when this application was reviewed. Eight of the 15 permit applicants from three Corps districts expressed concern that they did not receive sufficient information about the status of their application after submitting it to the Corps for review. Two of these applicants said they contacted the Corps to get information on the status of their applications but that sometimes it was difficult to reach Corps officials. The applicants said their shellfish activities had time-sensitive needs and that not knowing the status or time frames associated with the permitting process was problematic. For example, one permit applicant in the New Orleans District said not knowing when permitted activity would be authorized jeopardized the ability to take advantage of the naturally occurring seasonal oyster spawn that was critical to the viability of the project. New Orleans District officials agreed that it may be difficult for applicants to quickly determine the status of their applications, as a phone call to the Corps is the only way to obtain such information. Officials from two districts we reviewed said their goal is generally to respond to inquiries within 2 days, but this is not always possible due to heavy workloads or staffing constraints. For example, in the New Orleans District, officials said the workload across their permitting program is high, with a typical project manager responsible for reviewing 35 to 40 permit applications at any one time. In addition, five permit applicants from three Corps districts said they believed that the amount of time it took for the Corps to authorize their shellfish aquaculture activities was unreasonable.
For these applications, the length of time ranged from 18 days to about 8 months. One applicant from the Seattle District who waited about 8 months to receive authorization for the application in 2012 said that he continued his shellfish operations while awaiting authorization, but was concerned that operating without the Corps' authorization put his operations at risk from potential legal challenges. Officials in the Seattle District said they have seen an increase in applications for shellfish aquaculture authorizations over the last several years, which has significantly increased their workload and, in some cases, affected their ability to issue authorizations in a timely manner. Corps officials from headquarters and the four districts said it is generally their goal to authorize applications within 60 days, but limited staffing, heavy workloads, and the need to coordinate or consult with other federal, state, or tribal agencies may prevent them from doing so. Corps officials from the four selected districts have taken some steps to address difficulties applicants have experienced with understanding permit terms and conditions. For example, officials in the Seattle and Baltimore Districts have worked to explain certain permit terms and conditions. In Seattle, district officials said they have held quarterly meetings since 2015 for interested applicants and other stakeholders to address concerns or clarify certain Nationwide Permit 48 conditions. Seattle District officials said that attendees generally provided positive feedback about these quarterly meetings and that they plan to continue holding such meetings to discuss permit conditions or other issues that may arise. Similarly, the Baltimore District has held aquaculture workshops on an as-needed basis for applicants and other stakeholders to clarify permit conditions.
For example, in September 2016, the Baltimore District held a workshop to explain a permit condition intended to prevent endangered sea turtles from entanglement in aquaculture gear. One applicant we interviewed said this workshop was helpful and provided a better understanding of permit conditions. Officials from the Baltimore District said that they plan to conduct additional aquaculture workshops in 2019 and will invite representatives from the Maryland Department of Natural Resources to participate. The four Corps districts have also taken some steps to address difficulties applicants have experienced with the time it takes to authorize shellfish aquaculture activities. For example, in 2017, the Seattle District developed an approach to expedite its application process for the Nationwide Permit 48 issued in March 2017. Specifically, for those applicants who had previously been authorized under Nationwide Permit 48 in 2012 and who did not anticipate changes to their activities for the 2017 permitting cycle, district officials said they could base their review on previously submitted documentation from the applicants, allowing them to more quickly reauthorize those activities. The five permit applicants we interviewed from the Seattle District said that the Corps' expedited process initiated in March 2017 improved the timeliness of receiving their authorizations. For instance, one applicant who waited about 8 months to receive his authorization in 2012 said the Corps issued his most recent authorization, in 2017, within 2 months. In addition, Corps officials from across the four districts said they have taken steps to reduce the time needed to review applications through efforts to more efficiently conduct reviews under the Endangered Species Act.
For example:

Corps officials from the Baltimore and Norfolk districts worked with the National Marine Fisheries Service in 2017 to develop a regional programmatic consultation to help streamline Endangered Species Act assessments of the potential impact that shellfish aquaculture activities may have on listed species or their designated critical habitats. Corps officials from the Baltimore District said the review process, developed in association with the programmatic consultation, decreased their review time from over 30 days to 1 to 2 days, which in turn has helped reduce the Corps' time frames for issuing authorizations.

In 2015, New Orleans District officials said they implemented a standardized process for evaluating applications for Endangered Species Act compliance. The district developed a standardized form, called the Standard Local Operating Procedure for Endangered Species in Louisiana, which district officials said helps to facilitate evaluations by allowing program managers to quickly assess whether or not an application requires further review and consultation, reducing the overall time to process shellfish aquaculture-related applications.

Corps officials from the Seattle District worked with the National Marine Fisheries Service and U.S. Fish and Wildlife Service to develop a programmatic consultation, issued in 2016. The programmatic consultation identified methods for carrying out shellfish aquaculture activities that would avoid adverse environmental effects on listed species and their critical habitats, and reduce water quality impacts. Corps officials from the Seattle District said that this programmatic consultation has resulted in a more efficient review process for applicants seeking authorization under Nationwide Permit 48 by reducing the amount of time needed to assess whether an applicant's proposed activities may have the potential to affect listed species or their critical habitats.
To further improve the application process, Corps headquarters officials said that they are initiating training in fiscal year 2019 through online modules that will cover various aspects of permitting, such as clarifying the elements entities need to include when submitting an application. Also, in October 2018, Corps headquarters launched a community of practice on shellfish aquaculture permitting, which officials said will allow project managers and others with an interest in shellfish aquaculture to share lessons learned and to collaborate on relevant issues in the future. A Corps official said the Corps plans to hold quarterly meetings for the shellfish aquaculture permitting community of practice going forward.

Agency Comments

We provided a draft of this report to the Department of Defense for review and comment. The department provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Army, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

This report describes, for 2012 through 2017, (1) the number and outcomes of the applications the Corps received for shellfish aquaculture activities and the types of permits the Corps used to authorize such activities, and (2) the experiences of permit applicants in selected districts in seeking Corps authorization for their shellfish aquaculture activities.
To conduct our work, we reviewed relevant federal laws, regulations, and Corps documents on permitting, and interviewed officials from Corps headquarters. We selected a non-generalizable sample of four Corps districts—Baltimore, New Orleans, Norfolk, and Seattle—for a closer examination of the nature of shellfish aquaculture activities and the types of permits used by districts to authorize such activity. We selected these districts based on several factors:

Geographic region. We selected at least one district from each of the Pacific, Atlantic, and Gulf coasts to cover any differences in shellfish activity by geographic location.

Commercial value of shellfish. The states in which the four districts reside—Washington (Seattle District), Maryland (Baltimore District), Virginia (Norfolk District), and Louisiana (New Orleans District)—account for more than 60 percent of the commercial shellfish sales in the United States as of 2013, the most recent data available as of December 2018.

Type and level of permitting activity authorized by the Corps. We also chose districts to represent different types of general and individual permits the Corps districts used to authorize shellfish aquaculture as well as the level of permitting activity. The four districts received more than half of the shellfish aquaculture applications authorized by the Corps during 2012 through 2017.

We conducted site visits from July 2017 to March 2018 to each of the four selected districts to observe aquaculture activities and learn about the types of permits the districts use to authorize shellfish aquaculture activities. We also interviewed stakeholders with a regulatory role in shellfish aquaculture and non-governmental organizations with an advocacy role, as follows:

Federal Officials. We interviewed officials from three regional offices of the National Marine Fisheries Service and U.S. Fish and Wildlife Service to understand how they work with the four Corps districts on shellfish aquaculture permitting. We gained their perspectives on how they coordinate with the Corps to meet various legal requirements, such as those under the National Environmental Policy Act, Endangered Species Act, and Magnuson-Stevens Fishery Conservation and Management Act.

State Officials. We interviewed state agency officials involved in permitting at the state level to learn about state permitting requirements and coordination undertaken with the Corps districts on various aspects of shellfish aquaculture permitting. Specifically, we interviewed officials from the Maryland Department of Natural Resources, Washington Department of Ecology, Virginia Marine Resources Commission, Louisiana Department of Natural Resources, and Louisiana Department of Wildlife and Fisheries.

Non-governmental Officials. We also interviewed non-governmental organizations with an advocacy role related to shellfish aquaculture or conservation to gain their perspectives on the Corps' permitting process. We interviewed officials from the Chesapeake Bay Foundation, The Nature Conservancy, Pacific Coast Shellfish Growers Association, East Coast Shellfish Growers Association, Oyster South, Center for Food Safety, the Coalition to Protect Puget Sound Habitat, the Coalition to Restore Coastal Louisiana, and the Coastal Protection and Restoration Authority of Louisiana. We selected these organizations because each had interacted with one or more of the four Corps districts we reviewed on shellfish aquaculture issues during the period of our review.

The information we obtained from officials from the four districts and stakeholders is not generalizable to other Corps districts or stakeholders but illustrates the variation in the Corps' shellfish aquaculture permitting at the district level.
To examine the number, outcomes, and types of permits the Corps used to authorize shellfish aquaculture activity, we obtained and analyzed data from the Corps' permitting database, the Operations and Maintenance Business Information Link Regulatory Module 2. Specifically, we analyzed nationwide data on shellfish aquaculture applications submitted to Corps district offices with a decision date from January 1, 2012, through October 26, 2017, which were the most recent data available at the time of our review. The information we analyzed from the database included applications for various types of shellfish aquaculture activities for which entities sought Corps authorization, including commercial operations, as well as shellfish aquaculture and oyster reef restoration activities. For all Corps districts, we analyzed the number of applications received, authorized, withdrawn, or denied, and under which type of permit an application was submitted to the Corps. We took steps to determine the reliability of the Corps' data, including comparing the data to the administrative files for three to five randomly selected applications from the four districts we reviewed. We also reviewed agency guidance on data entry and interviewed agency officials knowledgeable about the Corps' permitting data, including officials from the four districts and headquarters. Based on these steps, we found the data to be sufficiently reliable to provide nationwide and district-level summary information on applications, authorizations, and the types of permits the Corps used during the period of our review. To determine the experiences of permit applicants in selected districts in seeking Corps authorization for their shellfish aquaculture activities, we randomly selected 15 applications, each submitted by a different applicant to one of the four Corps districts during the period of our review.
We reviewed the materials included in the Corps' administrative files to determine the nature of activities being proposed, documentation of any interactions between the Corps and the applicants throughout the review process, and the time frames for the review, among other things. We conducted semi-structured interviews with each of the applicants to learn about their experiences with the application process, including their perspectives on the steps involved in submitting an application, the time frames for receiving authorization, their understanding of permit terms and conditions, and the nature of any interactions with the Corps, among other things. We also conducted semi-structured interviews with the Corps managers responsible for reviewing these applications to obtain their perspectives about their review process for the selected applications. We then analyzed and categorized the interview responses based on common themes that we identified across the interviews. The information we obtained from the permit applicants and Corps officials we interviewed is not generalizable to other applicants, but illustrates the types of experiences permit applicants in the four districts have had in seeking authorization for their shellfish aquaculture activities. In addition, we interviewed Corps officials in the four districts and headquarters and reviewed related documentation to identify any steps the Corps has taken to address difficulties raised by the permit applicants. We then requested and reviewed supporting documentation when officials identified examples of steps they have taken to help improve the application process. We conducted this performance audit from June 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Ginny Vanderlinde (Analyst in Charge), Justin Fisher, Melissa Greenaway, Rich Johnson, Ying Long, Danny Royer, Sheryl Stein, and Jina Yu made key contributions to this report. Mark Braza, Juan Garay, and Gina Hoover also made contributions to this report.
Why GAO Did This Study

Entities undertaking shellfish aquaculture activities (i.e., the breeding and harvesting of oysters, clams, and mussels) may need to submit an application to the Corps in certain circumstances for authorization to conduct these activities. The Corps authorizes such activities using various permits, as long as the activities comply with various environmental and other laws. GAO was asked to review the Corps' process for authorizing shellfish aquaculture activity in U.S. coastal waters. This report describes, for 2012 through 2017, (1) the number and outcomes of the applications the Corps received for shellfish aquaculture activities and the types of permits the Corps used to authorize such activities, and (2) the experiences of permit applicants in selected districts in seeking Corps' authorization for their shellfish aquaculture activities. GAO reviewed laws and permitting documents and analyzed data on the number, outcomes, and types of permits the Corps used for 2012 through 2017 from the Corps' permitting database and assessed its reliability. GAO also reviewed detailed information from a non-generalizable sample of 15 permit applications and interviewed the applicants and Corps officials from four Corps districts, selected to reflect variation in geographic location and shellfish activity; the information from the four districts is not generalizable to other Corps districts.

What GAO Found

The U.S. Army Corps of Engineers (Corps) authorized most (87 percent) of the 3,751 shellfish aquaculture applications it received from 2012 through 2017, according to Corps data. Of the 19 Corps districts that have coastal waters within their geographic areas of responsibility, 17 districts received and authorized applications. The majority of those districts (13 of 17) authorized applications using Nationwide Permit 48, a type of permit intended to streamline the authorization process for shellfish aquaculture activities.
Additionally, districts may add conditions to nationwide permits or develop region-specific permits to address state or regional environmental concerns. Of the four districts GAO reviewed in detail, two districts added regional conditions applicable to Nationwide Permit 48, such as prohibiting shellfish activity within submerged aquatic vegetation beds or saltmarshes. The 15 permit applicants from the four districts GAO reviewed had mixed views on their experiences with seeking authorization for their shellfish aquaculture activities. For example, 10 applicants across the four districts described the length of time to authorize their activities—ranging from 1 day to about 4 months—as reasonable, with several applicants indicating the Corps was efficient in reviewing their applications. In contrast, five applicants from three Corps districts said that the amount of time it took for the Corps to authorize their shellfish aquaculture activities—ranging from 18 days to about 8 months—was unreasonable. Corps officials from the four districts indicated they have taken some steps to help reduce authorization review time. For example, the four districts took steps to more efficiently conduct reviews under the Endangered Species Act. This has in turn helped reduce the Corps' time frames for issuing authorizations, according to Corps officials GAO interviewed. For instance, officials from one district said their review time declined from over 30 days to 1 to 2 days as a result of the change in the review process.
Background

The drug industry encompasses a variety of companies involved in the research, development, distribution, and payment for chemically synthesized and biologic drugs. For the purpose of our review, the drug industry includes pharmaceutical companies that traditionally concentrate on developing or manufacturing drugs derived from chemicals and biotechnology companies that develop or manufacture biologics—more complex drugs derived from living cells. The federal government plays a role in various aspects of the drug supply chain as well. To market drugs in the United States, drug companies must apply and receive approval from the FDA that their drugs are safe and effective. The federal government also supports R&D for new drugs, such as through grants by the National Institutes of Health (NIH), NSF, and other agencies, and through tax incentives administered by the IRS. In addition, mergers and acquisitions affecting the drug industry are subject to review by the federal government to ensure compliance with applicable antitrust laws.

Drug Research, Discovery, Development, and Approval Process

The process of bringing a new drug to the market is long and costly and involves multiple public and private entities that fund and perform R&D. (See fig. 1.) For a new drug, the entire drug discovery, development, and review process can take up to 15 years, often accompanied by high costs. The process consists of several main stages:

Basic research: This is research aimed at acquiring new knowledge or understanding without immediate commercial application or use. Basic research is often federally funded and conducted to better understand the workings of disease, which increases the potential of discovering and developing innovative drugs. 
Drug discovery: This is undertaken by numerous researchers from drug companies, academia, and government searching for and identifying promising chemical entities, or chemical and biological compounds, capable of curing or treating diseases.

Preclinical testing: During preclinical testing, compounds are tested in laboratories and in animals to predict whether a drug is likely to be safe and effective in humans. If the compound is found to be promising, a drug company may decide to test it as a new drug on humans and it proceeds to the clinical trials stage. Before doing so, the company must submit to FDA and have in effect an investigational new drug application that summarizes the data that have been collected on the compound and outlines plans for the clinical trials.

Clinical trials: Clinical trials test potential drugs in human volunteers to determine if they should be approved for wider use in the general population. An investigational new drug typically goes through three phases of clinical trials before it is submitted to FDA for marketing approval. Clinical trials proceed through Phases I, II, and III, beginning with testing in a small group of healthy volunteers and then moving on to testing in larger groups of patients whom the drug is intended to treat to assess the compound’s effectiveness, rate of adverse events, and uses in combination with other drugs.

FDA Review and Approval: To market a drug in the United States, drug companies submit their research in a new drug application (NDA) or biologic license application (BLA) to FDA, which then reviews and approves the drug for marketing if it is shown to be safe and effective for its intended use. An NDA is an application to market a new chemically synthesized drug—either an innovative drug or a variation of a previously marketed drug. A BLA is an application for a license to market a new biological product (complex drugs derived from living organisms). 
Companies may also submit a supplement to an already approved NDA or BLA—known as an efficacy supplement—to propose changes to the way an approved drug is marketed or used, such as adding or modifying an indication or claim, revising the dose or dose regimen, providing a new route of administration, or changing the marketing status from prescription to over-the-counter use. For the purposes of its review, FDA classifies certain NDAs as new molecular entities—products that contain active chemical substances that have not been approved by FDA previously—and certain BLAs as new therapeutic biologics. FDA generally considers drugs approved either as new molecular entities or new therapeutic biologics to be “novel” drugs—products that are often innovative and serve previously unmet medical needs or otherwise significantly help to advance patient care and public health.

Post-approval: After FDA has approved a drug for marketing, the drug company may begin marketing and large-scale manufacturing of the drug. FDA also continuously monitors the safety of the drug which includes, amongst other activities, oversight of postmarket clinical studies that it can require or request companies to complete (known as phase IV clinical trials). Drug companies may also undertake these studies independently to identify modifications to the drug such as new delivery mechanisms or additional indications for use. The company may then submit a new application or supplement application with new clinical data to FDA to market the modification as a new drug, or market it for the new use.

Patent and Market Exclusivity and Other Incentives for Drug Development

Patents and market exclusivity periods are two ways brand-name drug companies may recoup their R&D investments by limiting competition for specified periods of time. 
Typically, early in the R&D process, companies developing a new brand-name drug apply for a patent on the active ingredient and may additionally apply for patents on other aspects of the drug, such as the method of use, from the U.S. Patent and Trademark Office. Once a patent is granted, other drug companies are excluded from making, using, or selling the patented aspect of the drug during the term of the patent, which generally expires after 20 years from filing. In addition, federal law authorizes certain periods of exclusive marketing rights, or market exclusivity, for new FDA-approved drugs, during which time FDA generally cannot approve a similar competing version of the drug for marketing. These exclusivities are independent of the rights granted under patent and can relate to chemical entities never approved before by FDA (5 years of exclusivity); new biologics (12 years); approval of a supplement for a new condition or use or other change to a previously approved chemically synthesized drug based on new clinical studies (3 years); and orphan drugs—drugs designated to treat rare diseases or conditions (7 years); among others. Patent protection and market exclusivity are independent of one another and can run concurrently or not. When brand-name drug products’ patents expire and exclusivity periods end, similar versions of the drug product that have been approved by FDA may enter the market. These are referred to as generics for chemically synthesized drugs and biosimilars for biologics. The Drug Price Competition and Patent Term Restoration Act of 1984—commonly known as the Hatch-Waxman Amendments—facilitated earlier, and less costly, market entry of generic drugs. A generic drug must generally be demonstrated to be equivalent to the brand-name drug product in active ingredient, dosage form, safety, strength, route of administration, quality, performance characteristics, and intended use. 
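The interplay described above, a patent term that generally runs 20 years from filing and an exclusivity period that runs from FDA approval, each independent of the other, can be sketched as a simple date calculation. This is an illustrative simplification only: the dates are hypothetical, and real determinations involve many additional rules (patent term restoration, multiple patents per drug, and so on).

```python
from datetime import date

def earliest_competitor_approval(patent_expiry, fda_approval, exclusivity_years):
    """Simplified sketch: a similar competing version generally cannot be
    approved until both the patent term and the market exclusivity period
    (which run independently and may or may not overlap) have lapsed."""
    exclusivity_end = fda_approval.replace(year=fda_approval.year + exclusivity_years)
    return max(patent_expiry, exclusivity_end)

# Hypothetical new chemical entity: patent expires mid-2030; FDA approval in
# 2018 starts a 5-year exclusivity period, so the patent is the binding limit.
print(earliest_competitor_approval(date(2030, 6, 1), date(2018, 3, 15), 5))

# With a biologic's 12-year exclusivity, exclusivity can outlast the patent.
print(earliest_competitor_approval(date(2025, 1, 1), date(2018, 3, 15), 12))
```

In the first case the patent term controls; in the second, the 12-year exclusivity period ends later than the patent and becomes the binding constraint.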
For biologics, the Biologics Price Competition and Innovation Act of 2009 provided an abbreviated pathway for companies to obtain approval of “biosimilar” and “interchangeable” biological products. A biosimilar must be demonstrated to be highly similar to an already approved biological product and to have no clinically meaningful differences in terms of safety and effectiveness from the reference product. See table 1 for a description of drug application types. In addition to incentivizing drug development through patent and market exclusivity, the federal government supports new drug research both directly, through grants from—and intramural research by—agencies such as NIH and indirectly through tax incentives for companies that develop new drugs. Specifically, the Internal Revenue Code includes incentives for research-related spending in three ways: through two income tax credits—the credit for clinical testing expenses for certain drugs for rare diseases (known as the orphan drug credit) and the credit for increasing research activities (known as the research credit)—and through special methods for treatment and reporting of research and experimental expenditures, including current-year deduction to arrive at net income. In general, the credit incentives are available to companies with qualified research spending in the United States. Companies include businesses organized as corporations or non-corporate businesses such as partnerships. These provisions are described below: Orphan drug credit: Companies may claim the orphan drug credit for half the “qualified clinical testing expenses” for drugs intended to treat rare diseases. Expenditures that give rise to the orphan drug credit may include expenses related to testing outside the United States. A company may claim foreign clinical testing expenses if there is an insufficient testing population in the United States to test the safety and efficacy of the drug. 
The orphan drug credit is nonrefundable; that is, while the credit can be used to reduce a company’s income tax liability generally, the credit cannot be used to generate a refund if the business has no tax liability or fully used if the credit would reduce tax liability below zero. The credit is also a component of and subject to the limitations of the general business credit. Research credit: Companies may claim a research credit for qualified research expenditures they undertake in a given year that exceed a threshold or base amount. This incremental design of the credit is intended to create an incentive for companies to do more research than they otherwise would. Qualified research expenses are certain expenses for qualified research incurred by the taxpayer during the taxable year in carrying on a trade or business. Qualified research is research that is undertaken for the purpose of discovering information that is technological in nature and the application of which is intended to be useful in the development of a new or improved business component of the taxpayer. In general, substantially all the activities that constitute a process of experimentation relating to new or improved functions, performance, or reliability or quality are qualified research. The rate of credit can be 14 or 20 percent. Like the orphan drug credit, the research credit is nonrefundable and is a component of, and subject to, the limitations of the general business credit. Deductions of qualified research expenses: If elected, the tax code allows businesses to currently deduct “research or experimental expenditures” from gross income in the tax year they are incurred rather than depreciate (or amortize) the assets the R&D created over time. Research and experimental expenditures include all costs incident to research, including research conducted outside the United States. 
Since “qualified research expenses” and “qualified clinical testing expenses” are a particular subset of research and experimental expenditures, expenditures that can give rise to either the research or orphan drug tax credits can be deducted in the year that they occur. However, these deductions must be reduced by the amount of tax credits claimed in order to prevent expenses from both generating a tax credit and being deducted from income.

Drug Distribution, Payment, and Pricing

The distribution of, and payment for, prescription drugs involve interactions and negotiated transactions among multiple commercial entities along the supply chain from the drug manufacturer to the consumer (see fig. 2). Brand-name and generic drug manufacturers typically sell their drugs to drug wholesalers, who in turn sell the drugs to retail pharmacies or to health care providers (such as hospitals, clinics, and physicians). Pharmacies or providers dispense or administer prescription drugs to consumers. Most consumers purchasing drugs pay a portion of the drug’s price in the form of a copayment or coinsurance, with the specifics of this cost sharing dictated by the consumers’ insurance plan. Insurance plans often use pharmacy benefit managers (PBMs) to help them manage their prescription drug benefits, including negotiating prices with manufacturers, processing claims, and negotiating with retail pharmacies to assemble networks where the beneficiaries can fill prescriptions. PBMs negotiate with manufacturers for rebates on behalf of the insurance plan based on market share, volume, and formulary placement. PBMs also contract with pharmacies; contract terms and conditions may include specifics about negotiated reimbursement rates (how much the pharmacy will be paid for dispensed drugs) and payment terms. Health care providers may also negotiate with insurers for the drugs they administer. 
The price that payers, PBMs, and ultimately consumers pay for prescription drugs depends in part on the amount of competition and the purchasers’ negotiating power. The negotiating power is influenced by the ability to choose from competing drugs and the volume of drug purchased. According to economic experts, the usual mechanisms that enforce market discipline may not work in the same way in the health care market as they do in other markets. In most markets—automobiles, for example—consumers are expected to be conscious of the price of goods. If a company raises the price of its goods, consumers would likely purchase fewer goods, causing the company’s revenues to decline. However, in the health care market, the purchase of goods and services is largely influenced by health care providers, who may not be well-informed about, or incentivized to consider, the prices involved. In the case of drugs, some experts argue that marketing and advertising may further distort provider decision making. In addition, if the patients’ medical bills are largely paid by insurance plans (other than copayment or coinsurance costs), then patients’ demand may not be significantly influenced by changes in price to the extent that it might be in other markets where the consumers see and pay the bill themselves. Certain payment policies may also limit the negotiating power of insurers. For example, Medicare Part D is required to cover all drugs in six protected classes, which some experts argue reduces the negotiating power of its contractors (known as plan sponsors). In addition, some brand-name drug companies are providing coupons to consumers to mitigate patient drug costs when a company’s drugs are not covered by payer formularies or require higher patient costs than preferred drugs. 
Some research and experts we interviewed have noted that this practice erodes the negotiating power of insurers and the cost management utility of formularies, which may result in lower prices for the patient using the coupon but higher prices overall. In addition, patients and providers in many cases may not have clear information about the benefit relative to cost of one drug over another drug or treatment.

Consolidation and the Antitrust Review Process

Experts have said that consolidation as a result of mergers and acquisitions is one of multiple factors that could influence competition. Fewer companies producing and marketing drugs can lead to greater market dominance by certain companies and less competition. The Federal Trade Commission (FTC) and the U.S. Department of Justice (DOJ) enforce federal antitrust laws that prohibit activities, such as price fixing and mergers and acquisitions where the effect may be substantially to lessen competition or tend to create a monopoly. Drug companies are subject to these antitrust laws. Companies are required to notify FTC and DOJ of certain pending mergers, also known as the premerger notification program. As part of the premerger review process, these agencies can approve mergers contingent on company divestiture of assets, including those related to products in development—a process known as a negotiated merger remedy. These agreements are subject to public notice and comment and result in an enforceable order. The goal of a merger remedy is to preserve or restore competition in the relevant markets. Although FTC and DOJ each have authority and responsibilities under the antitrust laws, FTC typically examines proposed drug industry mergers. In addition, FTC has authority to investigate and take action against unfair methods of competition in or affecting commerce, as well as mergers and acquisitions that may substantially lessen competition or tend to create a monopoly, including in the drug industry. 
Drug Industry Profit Margins and Merger and Acquisition Deal Values Increased, and the Industry Underwent Structural Changes

Among the worldwide drug companies included in the data we reviewed, reported pharmaceutical and biotechnology revenues and profit margins for most companies grew from 2006 through 2015. The number of mergers and acquisitions among companies in the industry generally held steady from 2006 through 2015, but merger and acquisition deal values increased. Market concentration varied by the specific market level considered. Industry experts we interviewed noted that market pressures have driven structural changes in the industry.

Company-Reported Revenues and Profit Margins Grew from 2006 through 2015

According to the data we reviewed, between 2006 and 2015 estimated aggregate worldwide pharmaceutical and biotechnology sales revenue for drug companies grew from $534 billion to $775 billion in real 2015 dollars (about 45 percent), with most of the growth occurring between 2006 and 2011. The largest 25 of these companies (by 2015 pharmaceutical and biotechnology revenue) saw their aggregate sales revenue increase from $448 billion in 2006 to $569 billion in 2015, or about 27 percent. Aggregate sales revenue for all other drug companies in our data grew more sharply, from $86 billion in 2006 to $206 billion in 2015—an increase of about 140 percent (see fig. 3). Drug companies’ average profit margins also grew from 2006 to 2015, though the trends differed for the largest 25 companies compared to the remaining companies in our data. Overall, about 67 percent of companies saw their profit margins increase between 2006 and 2015. While there was some fluctuation over time, the average profit margin was 17.1 percent in 2015 for all drug companies; profit margins were higher for the largest 25 companies (20.1 percent in 2015) than for all others (8.6 percent in 2015; see fig. 4). 
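The growth figures above follow from simple percent-change arithmetic on the reported real-dollar revenues, which can be sketched as:

```python
def pct_change(start, end):
    """Percent change from a starting value to an ending value."""
    return 100.0 * (end - start) / start

# Aggregate worldwide sales revenue, in billions of real 2015 dollars
print(round(pct_change(534, 775)))   # all drug companies: about 45 percent
print(round(pct_change(448, 569)))   # largest 25 companies: about 27 percent
print(round(pct_change(86, 206)))    # all other companies: about 140 percent
```

The sharper growth among the smaller companies reflects their much smaller 2006 base: a $120 billion increase on an $86 billion base yields roughly 140 percent growth, versus 27 percent for the largest 25 companies.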
To better place large drug companies’ profit margins into context, we conducted a similar examination of profit margins for large companies in other industries, specifically software companies and the largest 500 companies (by 2015 total worldwide revenue as reported in Bloomberg) representing a wide range of industries. We included the software industry separately because, like the drug industry, it has been cited as having high R&D investment and low production and distribution costs, though caution should be taken in making this comparison. Among the largest 25 software companies (by 2015 software revenue), the average profit margin began at 21.7 percent in 2006 and remained relatively stable through 2014, before decreasing to 13.4 percent in 2015 (see fig. 5). As a broader comparison, the average profit margin among the largest 500 companies was consistently lower than the average among the largest 25 drug companies and software companies. Among the largest 500 companies, the average profit margin decreased from 8.9 percent in 2006 to 6.7 percent in 2015.

The Number of Mergers and Acquisitions Generally Held Steady from 2006 through 2015, but the Values Fluctuated

The annual number of mergers and acquisitions involving drug companies generally held steady between 2006 and 2015, with some fluctuations in intervening years, based on our review of Bloomberg data. Overall, the number of transactions generally held steady, with 312 in 2006 and 302 transactions in 2015 (see fig. 6). The number of mergers and acquisitions involving one of the largest 25 companies (by 2015 pharmaceutical and biotechnology revenue) increased from 29 transactions in 2006 to 61 transactions in 2015. In contrast, the number of transactions in our data for the smaller drug companies decreased from 283 transactions in 2006 to 241 transactions in 2015. See appendix II for additional information on merger and acquisition activity of 10 large companies in the drug industry as of 2014. 
While the number of transactions generally held steady between 2006 and 2015, the total value of transactions completed over this period fluctuated considerably. These fluctuations were driven by a small number of high value transactions, which tended to occur among the largest 25 companies (see fig. 7). For example, in 2009, there were three transactions each valued above $20 billion in real dollars, all of which were conducted by companies in the largest 25: Pfizer Inc. acquired Wyeth LLC for about $71 billion; Merck & Co Inc. acquired Schering-Plough Corp. for about $56 billion; and Roche Holding AG acquired Genentech Inc. for about $48 billion. In 2015, about half of the total merger and acquisition transaction value came from five transactions each valued over $10 billion in real dollars, including one very large transaction by Allergan for about $72 billion. The other four transactions also involved companies among the largest 25. Much as the total value of mergers and acquisitions fluctuated considerably from year to year, median disclosed transaction values generally increased between 2006 and 2015, with considerable fluctuation among years.

Concentration in the Drug Industry Varied by the Level of the Industry Considered

For the overall drug industry, the share of total sales accounted for by the 10 largest companies—a measure of concentration—declined between 2007 and 2014, the years for which public data were available from QuintilesIMS. The largest 10 companies (by 2014 pharmaceutical revenue) had 48.9 percent of the drug industry’s sales revenue in 2007; by 2014, their share of the industry sales revenue declined to 38.2 percent. Concentration, which can be measured by share of sales, provides a basic indication of the competitiveness of companies in an industry or specified market level within an industry. Competition in the drug industry generally is examined at the level where products are viewed as substitutes, according to FTC officials. 
Substitutes can be products that are the same molecular entity or, in some cases, different molecular entities that treat the same condition. At levels narrower than the entire industry, such as drugs within the same therapeutic class or of the same molecular entity (levels that are more relevant to competition), concentration in shares of sales can be higher than in the overall industry. For example, EvaluatePharma reported that the three largest companies in the anti-diabetics market accounted for 67.5 percent of the sales in that market in 2014. Similarly, the three largest companies in the anti-rheumatics market accounted for 56.8 percent of the sales in that market in 2014, and the three largest companies in the anti-virals market accounted for 72.4 percent of the sales in that market, with the leading anti-viral manufacturer accounting for over half (52.8 percent) of worldwide anti-viral sales. Concentration can also vary for drugs of the same molecular entity, as some generic drugs may have different numbers of manufacturers than others. For example, as of 2017, 14 companies have approved abbreviated new drug applications (ANDA) for lisinopril, a drug for hypertension—that is, 14 companies have generic versions of the drug approved for manufacture. By comparison, only one company has an approved ANDA for efavirenz, a drug used to treat HIV infection. Greater numbers of generic manufacturers generally reduce concentration, as generic manufacturers compete with one another in addition to brand-name manufacturers. More broadly, one recent study found that of the novel drugs approved in tablet or capsule formulation since the 1984 Hatch-Waxman Act and eligible for generic competition, more than one-third had three or fewer generic approvals. 
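The share-of-sales measure discussed above is often summarized as a concentration ratio: the combined market share of the n largest sellers. A minimal sketch follows; the individual market shares are hypothetical, chosen only so that the three-firm ratio matches the 67.5 percent reported for the anti-diabetics market.

```python
def concentration_ratio(shares, n):
    """CR(n): combined market share (percent) of the n largest sellers."""
    return sum(sorted(shares, reverse=True)[:n])

# Hypothetical shares (percent of sales) for one therapeutic class
shares = [30.0, 22.5, 15.0, 10.0, 8.0, 7.5, 7.0]
print(concentration_ratio(shares, 3))   # 67.5
print(concentration_ratio(shares, 1))   # 30.0
```

A higher ratio indicates that fewer sellers account for most sales in that market level, which is why concentration can look modest industry-wide yet high within a single therapeutic class or molecular entity.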
Industry Experts Noted Market Pressures Have Driven Structural Changes in the Industry, Such as in the Types of Acquisitions and Increased Specialization in Therapeutic Areas

Experts we interviewed noted that market pressures such as rising R&D costs, fewer drugs in the R&D pipeline, and the growth in sales of generic drugs have driven various structural changes in the drug industry, such as in the types of acquisitions being sought. Not all companies respond to those pressures in identical ways. For example, some experts said that some companies that traditionally manufactured brand-name drugs are expanding into the manufacturing of generic drugs. These brand-name companies may acquire a generics manufacturer to adjust the portfolio of drugs they manufacture or gain access to a generics business. Similarly, some traditionally generic manufacturers are expanding into brand-name manufacturing to acquire product lines with more generous profit margins. For both brand-name and generic manufacturers, expanding the size of their drug portfolio may improve their bargaining position with PBMs, according to two economists we interviewed. Experts also said that traditionally large companies are increasingly relying on mergers and acquisitions to obtain access to new research and are conducting less of their own research in-house. In addition, experts told us that investment in the development of traditional chemically synthesized drugs has produced increasingly lower financial returns, resulting in some traditional pharmaceutical companies turning to invest more in the development of more complicated and costly biologics. Many experts highlighted the proliferation of biotechnology companies as large pharmaceutical companies seek to acquire promising new research developments. Many experts told us that market pressures have also driven some drug companies to move towards specialization in certain therapeutic areas, including through mergers and acquisitions. 
As one example, GlaxoSmithKline acquired most of Novartis’s vaccine business in 2015, bolstering its own line of vaccines and helping to raise its share of sales of the worldwide vaccine market. Simultaneously, Novartis acquired GlaxoSmithKline’s oncology business, enabling both companies to shed one line of business and focus on the newly acquired therapeutic areas. Experts again noted that one reason companies may be specializing through mergers and acquisitions is the increasing cost of R&D—acquiring promising new or developed research or product lines helps companies mitigate R&D investment risk. Acquiring existing lines of business from competitors within a therapeutic area may also help a company increase its presence in a particular therapeutic area. Another widely cited factor influencing structural changes in U.S. industries—including the drug industry—involves tax-influenced mergers, called corporate inversions. An inversion is a type of merger where a U.S. corporation merges with or acquires a company located in a foreign jurisdiction—often a lower-tax country—and reorganizes so the resulting parent corporation is located in the foreign country. This can reduce a corporation’s overall tax liability—often by reducing its U.S. tax liability. While taxes are one of many factors that may influence trends in mergers and acquisitions as discussed above, the incentive for drug companies to reduce tax burdens through inversions can be significant. In 2016, the Treasury Department issued new regulations to curb inversions.

Pharmaceutical Company-Reported Research and Development Spending Grew Slightly, while Biologics and Orphan Drugs Were a Greater Share of New Drug Approvals

Pharmaceutical company-reported R&D spending grew slightly from 2008 through 2014, while federally funded spending decreased slightly over the period. 
Industry spending focused on drug development rather than earlier-stage research, whereas direct federal spending, such as through NIH grants, funded a greater amount of basic research. Claims for the orphan drug credit, one of several federal tax incentives encouraging drug development, increased sharply from 2005 through 2014. Biologics and orphan drugs accounted for an increasing share of new drug approvals from 2005 through 2016. Studies we reviewed and experts we interviewed suggested that potential revenues, costs, and policy incentives influenced brand-name drug company R&D investment decisions.

Pharmaceutical Company-Reported Research and Development Spending Increased Slightly, While Federally Funded Spending Decreased Slightly, from 2008 through 2014

Our analysis of industry survey data from NSF indicates that worldwide R&D spending by U.S.-owned pharmaceutical companies and U.S.-based R&D by foreign companies increased slightly (8 percent) in real dollars from $82 billion in 2008 to $89 billion in 2014, the years for which comparable data were available (see fig. 8). According to NSF survey data, the share of this spending that pharmaceutical companies paid others to perform also increased over the period. Estimates of worldwide R&D expenditures as a percentage share of total worldwide sales averaged 13 percent and ranged from 11.5 to 14.2 percent over the period 2008 to 2014. This amount, according to estimates from QuintilesIMS, is larger than the 7.6 percent of total pharmaceutical sales revenue that the industry spent on marketing and promotion in 2014; however, due to differences in the sources’ methodologies and data, publicly reported figures are not necessarily comparable. 
The NSF Business Research, Development, and Innovation Survey data indicated worldwide R&D spending for respondent biotechnology companies was $9.2 billion in 2009, dropped to $2.7 billion in 2010, rose to $6.7 billion in 2011, then decreased to $1.7 billion in 2013, the years for which worldwide data were available. The percentage of biotechnology company-reported R&D to worldwide biotechnology sales ranged widely from 43 percent in 2011 to 7 percent in 2013. Pharmaceutical companies reported spending a greater share of sales on R&D than comparably large, R&D-intensive industries and all aggregated manufacturing and non-manufacturing industries, according to comparable Business Research, Development, and Innovation Survey data (see table 2). For example, in 2014, self-reported R&D expenditures as a percentage of total sales were higher for pharmaceutical companies than for other comparably large, R&D-intensive sectors such as semiconductor and other electronic components, software publishers, and computer system design services. Direct federal spending for biomedical research, primarily funded through NIH, decreased 3.8 percent in real dollars from $27 billion in fiscal year 2008 to $26 billion in fiscal year 2014, after a peak of $32 billion in 2010, according to our analysis of federal survey data from NSF. NIH was the primary federal source for biomedical research and accounted for $26 billion of spending in 2008 and $25 billion in 2014. According to federal officials we interviewed, other agencies that fund biomedical research that could be relevant to drug R&D were the Department of Defense and the NSF. In addition, state and local governments, foundations, charities, and venture capital also funded biomedical R&D, according to studies and experts we interviewed. Estimates of this spending are much smaller than those for industry and federal agencies. 
In 2015, National Health Expenditure estimates show that state and local governments spent $6.7 billion on research and non-industry private funders spent $5.3 billion. Pharmaceutical Company-Reported Spending Focused on Drug Development and Federal Spending Focused on Basic Research Pharmaceutical company spending from 2008 through 2014 focused on drug development, while federal spending focused on earlier-stage basic research. For example, in 2014 pharmaceutical companies reported allocating 13 percent of total reported domestic R&D spending on basic research, 21 percent on applied research, and 66 percent on development (see fig. 9). By comparison, federal spending consistently funded a greater amount of basic research, according to our analysis of data from NSF’s Survey of Federal Funds for Research and Development. Studies show that basic research often supplies the innovation upon which the industry develops drugs. For example, as shown in figure 10 below, NIH obligated 54 percent, or $13.6 billion of its total $25 billion of drug-related spending, for basic research in fiscal year 2014. This is more than twice as much as the $6.3 billion that NSF data show pharmaceutical companies reported spending domestically for basic research that year. NIH also funded applied research that includes more targeted research and activities aimed at translating basic research into new treatments for patients. For example, NIH supports clinical research through the National Center for Advancing Translational Sciences and several other NIH Institutes and Centers. This includes supporting pre-clinical and early-stage clinical trials; promoting and initiating collaborations and partnerships among industry, academia, and other stakeholder communities, such as patient advocacy groups, to address research barriers; and facilitating data sharing, according to agency officials.
In accordance with the definition of “development” provided by NSF for the Survey of Federal Funds for Research and Development, NIH classifies R&D activities as “research.” Therefore, NIH does not report any of its activities as strictly drug development, according to agency officials. Studies and experts we interviewed suggested that the relative roles of R&D funders and performers are evolving. For example, some experts noted that there is less distinction between public and private investment in R&D than in the past because publicly funded research institutions, such as universities, are frequently involved in financial relationships with industry for commercial development. Some industry experts also noted NIH’s role in fostering these collaborations. As previously noted, there has been a proliferation of smaller, biotechnology-focused companies and greater use of acquisition and licensing agreements by larger, traditional pharmaceutical and biotechnology companies to build their earlier-stage product pipelines rather than conducting early research in-house. Experts suggested that this trend is a response to the increasing complexity and cost of R&D concurrent with the advent of biotechnology and waves of patent and exclusivity expirations for large companies. In addition, traditional pharmaceutical companies also performed less R&D internally than in the past, according to NSF data. Worldwide R&D spending paid for and performed by pharmaceutical companies decreased in real dollars from $61.7 billion in 2008 to $58.2 billion in 2014 and as a share of total worldwide R&D spending. Conversely, the share of the worldwide pharmaceutical R&D spending that was paid for by the company and performed by others, such as through purchased R&D services, increased from 25 percent in 2008 to 35 percent in 2014. 
Federal Tax Provisions Encourage Drug R&D, with Claims for the Orphan Drug Credit Increasing Sharply Similar to the R&D spending trend identified above from the NSF data, various IRS tax data consistently indicate that drug R&D activities did not change significantly—with the exception of the orphan drug credit, which over time increased sharply. Inflation-adjusted claims by all industries for the orphan drug credit increased five-fold between 2005 and 2014, from about $280 million to about $1.5 billion (see fig. 11). Claims for the other tax credit that incentivizes drug development—the research credit—were more stable than the orphan drug credit between 2005 and 2014. As shown below in figure 12, IRS estimates of research credit claims for pharmaceutical-related corporations reached a high of $1.5 billion in 2007, but then fell to about $1.2 billion in 2014, a level close to the beginning of the period. This may be due in part to the fact that we were unable to obtain a specific estimate of the research credits claimed by biotechnology companies. By comparison, research credit claims grew for all industries over the period, particularly from 2012 to 2014. According to IRS data, between 2005 and 2014 the pharmaceutical manufacturing industry spent, on average, about $22.5 billion per year (in real dollars) in qualified research spending that factored into the calculation of the research credit (see fig. 13). Spending peaked in 2007 at $25.5 billion and then generally declined from 2007 to 2014. This amount of spending—reported on tax returns as meeting the requirements of qualified research spending as noted above—is less than half of the research spending reported by NSF’s Business Research, Development, and Innovation Survey data. These research spending differences can reflect both differences in the definitions of research spending in each data source and in the specific industry definitions used in the different data sources.
The ability of companies to deduct research expenditures in the year they are incurred simplifies tax accounting for research spending and reduces the after-tax cost of research investments. The amount of research spending deducted by large pharmaceutical corporations that submitted an IRS form M-3 was largely consistent between 2010 and 2013, the years for which data were available (see table 3). Specifically, research expenditure deductions in real dollars increased to $30.7 billion in 2013 after a period low of $24.9 billion in 2012. The table also shows that the amounts shown as research expense on the financial statements of the same corporations were slightly higher than the amount deducted on tax returns in each year. Novel Drugs Consistently Accounted for About Thirteen Percent of New Drugs Approved in the United States from 2005 through 2016, and Biologics and Orphan Drugs Each Grew as a Share of Approvals The number of approvals for drugs FDA considered novel increased from 20 in 2005 to 45 in 2015 but declined to 22 approvals in 2016, according to FDA data and reports (see fig. 14). Novel drugs accounted for between 8 and 18 percent of all drug approvals each year and averaged 13 percent over the period. The remaining majority of drug approvals each year included those not considered novel because they had chemical substances that were previously approved by FDA or were modifications to existing drugs. Biologics and orphan drugs each represented an increasing share of all drug approvals from 2005 through 2016. As shown in figure 15, biologics grew from 8 percent of all drug approvals in 2005 to 17 percent in 2016. Biologics also represented an increasing share of the subset of all approvals that were considered novel drugs—from 10 percent of novel drugs approved in 2005 to 32 percent in 2016.
Orphan-designated drugs as a share of all drug approvals grew even more dramatically, from 5 percent of all drug approvals in 2005 to 21 percent in 2016 (see fig. 15). Orphan drugs as a share of novel drug approvals ranged from 22 percent in 2007 to 42 percent in 2015. We also examined drug approval trends by product category. The product categories with the largest numbers of drug approvals fluctuated over time, but oncology drugs were among the most frequently approved in all but 2 years from 2005 through 2016. Of the 263 drugs approved by FDA in 2016, the most common product categories were oncology (55 approvals) and metabolism and endocrinology (38 approvals). For the 22 novel drug approvals in 2016, the most common product categories were oncology (5 approvals) and neurology (4 approvals). Studies and Experts Suggest Potential Revenues, Costs, and Policy Incentives Influenced Drug Industry Research and Development Investment Decisions Studies and industry experts we interviewed, including economists and industry association officials, suggested several drivers for drug company R&D investment decisions. These investment choices were influenced by revenue, cost, and regulatory and other policy incentives: Potential revenues: High revenue potential, typically associated with a large potential number of patients or the potential for high drug prices, is an important incentive for R&D investment, according to experts and some research. Studies show that potential market size, measured by revenue, is a determinant of R&D investment and market entry for both brand-name and generic drug companies. Companies also seek to maximize potential revenues by investing in the development of drugs that can command high prices, and drugs that address unmet medical needs or differentiate them from competitors. This includes investment in drugs for niche markets that may have limited competition, such as orphan drugs.
Experts also noted that some companies invest to extend patent protection or exclusivity periods for existing drugs as a means to extend revenue generation by delaying or limiting the effect of generic competition—sometimes referred to as “evergreening” or “patent hopping.” Cost reduction: Drug development costs, particularly for novel drugs, are increasing and companies have sought various ways to reduce their costs or limit risk. Experts we interviewed suggested that drug companies have attempted to reduce costs by focusing on drugs for which clinical trials are perceived to be less costly and drugs perceived as more likely to receive FDA approval; by modifying existing drugs rather than developing novel drugs; by outsourcing clinical trials; and by acquiring R&D projects already underway. Policy incentives: Regulatory and other policy incentives often influence potential revenues and risks and, in turn, R&D investment, according to experts. For example, exclusivity periods and patent protection, expedited review programs, and tax incentives were cited as influencing R&D investment. The supply of new science from federally funded research may also influence company investment decisions. Expectations about payer reimbursement could also influence potential pricing and investment decisions, according to some experts. For example, one expert noted that payers typically do not resist high prices for oncology drugs. These drivers may also explain the observed brand-name drug approval trends for biologics, orphan drugs, and drugs for certain disease areas. For example: Biologics: Some experts noted that recent technological advances have spurred opportunity and investment in new biologics. The longer period of FDA market exclusivity for biologics relative to traditional chemically synthesized drugs may also be attractive to drug developers.
In addition, there are currently few biosimilar drugs available to compete for market share once BLA exclusivity expires. Though FDA had approved seven biosimilars for marketing between 2010—the year the approval pathway for biosimilar biological products was established—and September 2017, and was reviewing additional applications, some experts suggest that the added cost and difficulty in developing biosimilars may hinder entry of biologics’ competitors relative to the entry seen for traditional generics. Orphan drugs: In addition to the exclusivity and orphan drug credit incentives to develop orphan drugs, an industry expert we interviewed also suggested that it is easier to get FDA approval for orphan drugs, and another suggested that it is less costly to develop them. In addition, orphan drugs can often garner high prices compared to non-orphan drugs, according to an industry report. Disease areas: Certain drug classes or disease areas, such as drugs for oncology or multiple sclerosis drugs, can garner higher prices and, in turn, more R&D investment because they often have fewer competitors, are often administered by providers who are insensitive to price, or are perceived as particularly life-saving, according to some experts we interviewed. In addition, some experts suggested that NIH investment in oncology research and gains in personalized medicine have resulted in many more research opportunities in which companies can invest. For example, many new oncology drugs are approved for treatment of tumors with specific genetic markers, and research suggests these drugs are more likely to succeed in clinical trials and face a less-elastic demand curve that, in turn, can facilitate higher pricing.
According to several experts we interviewed, a company’s R&D focus on fewer therapeutic areas of more profitable drugs or niche markets may come at the expense of drug development in less lucrative disease areas—those that affect many patients but in which drugs are more costly to bring to market or have existing generic competition—for example, cardiovascular disease. According to a study of drug development pipeline data, the number of new drugs in all phases of clinical development to treat cardiovascular disease, a leading cause of death in the United States, declined from 1990 to 2012, whereas the number of new cancer drugs increased over the period. Research Suggests Market Concentration Affects Drug Prices, and Mergers May Affect Drug Company Innovation Research we examined in our literature review suggests that the level of competition in a relevant market influences drug prices. Competition also matters for innovation. Certain empirical economic studies suggest that mergers among brand-name drug companies can negatively impact companies’ innovation post-merger. Research Finds High Market Concentration Is Associated with Higher Drug Prices The relationship between competition and drug price is well documented in the drug industry, and industry experts and available research point out that competition dynamics differ for brand-name and generic drugs. Brand name companies producing drugs under patent or exclusivity protection have monopoly pricing power unless alternative drugs that treat the same condition are available. For brand-name products that face competition from such therapeutic alternatives, companies compete on price, differentiation from competitors, or both. We and others have reported that brand-name drug companies consider the availability and price of therapeutic alternatives along with potential market size, the perceived value of the drug relative to competitors, and other factors when determining the price for a new drug. 
Conversely, generic drugs compete on price with the brand-name or other generic manufacturers of the same drug. As we have reported, and as experts we have interviewed agreed, generic drug companies compete primarily on price. Based on our literature review, we did not identify any empirical studies that examined the impact of drug industry concentration changes from mergers and acquisitions on drug prices post-merger. However, empirical studies we reviewed suggest that less competition—that is, a more highly concentrated market—is associated with higher drug prices, particularly for generic drugs. The following summarizes studies we reviewed on the effect of generic and brand-name competition: Generic competition: Most notably, once brand-name drugs lose patent and marketing exclusivity and generic versions of drugs enter the market, drug prices fall and continue to decline as additional generic manufacturers enter. The price-moderating effect of generic competition is well documented by FDA, FTC, the IMS Institute for Healthcare Information, and other research. FDA found that for drugs sold from 1999 through 2004, the first generic competitor on average priced the drug only slightly below the brand-name price, but the second generic competitor reduced the drug price by nearly half. For drugs that attracted nine or more generic manufacturers, the average generic price fell 80 percent or more. The IMS Institute for Healthcare Information reported similar findings in 2016 based on its review of generics that entered the market between 2002 and 2014. The introduction of generics reduced the price of those drugs by 51 percent in the first year and 57 percent in the second year, with price reductions driven, in part, by the increasing number of competitors.
In addition, a 2017 study of 1,120 drugs available as generics between 2008 and 2013 determined that drugs with less market competition, measured by higher concentration, had higher price increases over the period compared to drugs in the cohort with the lowest concentration. Brand-name competition: For brand-name drugs, studies show that the presence of therapeutic alternatives in the market reduces the launch price—the price the company sets for a new drug. For example, an often-cited 1998 study of launch prices for 130 new molecular entities showed that a greater number of brand-name therapeutic alternatives was associated with substantially lower launch prices for new brand-name drugs compared to their predecessors. More recently, there are examples of therapeutic alternatives creating market pressure on, and thus reducing prices of, brand-name drugs, such as multiple brand-name hepatitis C therapies that became available between 2013 and 2014. Research has also found that some brand-name drug companies are able to maintain or even raise prices for their drugs—despite competition from therapeutic or generic alternatives—for various reasons, such as product differentiation or brand loyalty stemming from marketing or prescribing patterns. For example, brand-name companies may actually increase prices for some of their drugs to capture the price-insensitive segment of the market. Research also suggests that the extent of price reductions resulting from the entry of generic drugs into a market can differ by the characteristics of the drug and may be less dramatic for biosimilar drugs than traditional generic drugs. For example, the 2016 IMS report noted that price reductions under these circumstances occurred faster for oral drugs than for injectable drugs, which often attract fewer generic competitors. 
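Several of the studies summarized above gauge competition by market concentration. A standard concentration measure in this literature is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares; the report does not name the specific measure each study used, so the following is only an illustrative sketch with invented market shares.

```python
# Illustrative sketch: computing the Herfindahl-Hirschman Index (HHI)
# from firm-level sales. Shares are invented, not drawn from the
# report's data.

def hhi(sales):
    """Return the HHI for a market given each firm's sales.

    Shares are expressed in percentage points, so the index ranges
    from near 0 (atomistic competition) to 10,000 (monopoly).
    """
    total = sum(sales)
    return sum((100 * s / total) ** 2 for s in sales)

# A market split evenly among ten generic manufacturers is relatively
# unconcentrated ...
print(hhi([10] * 10))        # 1000.0
# ... while one dominated by a single firm is highly concentrated.
print(hhi([85, 5, 5, 5]))    # 7300.0
```

Under the DOJ/FTC Horizontal Merger Guidelines, markets with an HHI above 2,500 are generally considered highly concentrated, which is why generic drug markets with only one or two manufacturers score so high on this measure.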
Another 2017 study examining the state of generic competition found that injectables and drugs with other formulations, such as topical or inhaled drugs, were more likely than oral drugs to have only one or two manufacturers. Certain literature we reviewed and experts we interviewed suggested that biosimilars will moderate prices for biologic drugs, but not to the same extent as traditional generics do because they are more costly to manufacture and may be less consistently substituted for the brand-name drug; however, more time and research will be needed to understand the effects given the small number of biosimilars on the market. Studies Find Competition Matters for Innovation, and Some Suggest a Negative Impact of Mergers on Drug Company Innovation Competition is also relevant to innovation, according to economic studies we examined. As noted, brand-name drug companies compete to develop new products and differentiate their products from therapeutic alternatives. The analysis of how competition affects innovation is a fact-specific process. There is empirical evidence suggesting that, in certain circumstances, the incentive to invest in R&D could be enhanced with more competitors. For example, a 2014 study examining multiple manufacturing and non-manufacturing industries demonstrated a positive relationship between competition and innovation (measured by patents), productivity, and R&D expenditures. While drug innovation comes from multiple sources and increasingly from smaller innovative biotechnology companies, the industry relies on large drug companies to invest in the expensive clinical trials needed to develop and bring new innovations to market. We also identified several merger retrospective studies. These studies suggest that there are varied impacts of drug company mergers and acquisitions on innovation, including both inputs (e.g., R&D spending) and outputs (e.g., patents and new drug approvals).
A 2009 study of 27 large, brand-name drug company mergers found that the mergers had a statistically significant negative impact on company R&D spending and patent issuance in the third year post-merger compared to non-merging companies. The authors concluded that the findings contradict the idea that mergers deliver advances in innovation that could outweigh possible anticompetitive risks. A 2007 study of 165 large mergers between 1988 and 2000 suggested that large companies sought to merge in response to patent expiration or product pipeline gaps, and small companies sought to merge as a response to financial trouble. When controlling for companies’ propensity to merge, small merging companies—defined as companies valued at less than $1 billion—grew more slowly in R&D spending, sales, and R&D employees post-merger compared to similar non-merging companies. However, the study did not find these effects to last beyond one year and did not find differences in these growth rates between large merging companies and non-merging companies. Overall, the authors concluded that while merger in the drug industry is a response to being in trouble for both large and small companies, there is no evidence that it is a solution. Another 2009 study examined the number of approvals for new molecular entities—innovative drugs—as a means to examine outputs rather than only R&D spending. The study suggests that while mergers and acquisitions may help small companies, they are not an effective way for larger companies to increase output of new molecular entities. For example, for a sample of 30 mergers and acquisitions with 10 years of data before and after the merger, the study found that for large companies the number of new molecular entities did not increase and may actually have declined slightly following merger or acquisition. Smaller companies, however, experienced an increase in new molecular entities after merger or acquisition.
Other studies suggest mergers and acquisitions may have a positive impact on innovation using certain measures. For example, a 2006 study of 160 acquisitions involving drug companies between 1994 and 2001 estimated that companies with declining R&D pipelines and sales were more likely to engage in acquisition and that outsourcing R&D through acquisitions was a successful strategy to stabilize declines in drug R&D pipelines. This study estimated that 71 percent of acquiring companies either maintained or improved the health of their research pipelines after merger. Agency Comments We provided a draft of this report to the Department of Health and Human Services, FTC, IRS, and NSF for review. These agencies provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, relevant agencies, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact John E. Dicken at (202) 512-7114 or [email protected] or Oliver Richard at (202) 512-8424 or [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix III.
Appendix I: Scope and Methodology This appendix provides further details on our scope and methodology in addressing each of our three reporting objectives, which are to describe: (1) how the financial performance and structure of the drug industry have changed over time; (2) how reported research and development spending and new drug approvals have changed; and (3) what is known about the potential effects of consolidation on drug prices and new drug development. In addition, the appendix describes how we selected officials to interview and the steps we took to assure the reliability of the data we analyzed. How the Financial Performance and Structure of the Drug Industry Have Changed Over Time Analysis of Sales Revenue and Profit Margins To describe reported pharmaceutical and biotechnology sales revenue and profit margins, we used the Bloomberg Terminal to identify pharmaceutical and biotechnology companies that were still active as of the time of our review. Bloomberg uses a proprietary hierarchical classification system (the Bloomberg Industry Classification System) to categorize companies into different primary industries. We used the Bloomberg Terminal’s company classification browser to obtain an initial set of companies that currently have reported pharmaceutical or biotechnology revenue. We restricted the drug companies in our review to those that were categorized under the “Pharmaceutical & Biotechnology” Bloomberg Industry Classification System (BICS) level 2 category, which indicated that Bloomberg characterizes the company as being primarily a pharmaceutical or biotechnology company. Using this list, we downloaded each company’s reported pharmaceutical and biotechnology sales revenue, total sales revenue, profit margin, return on assets, and return on equity for each company’s fiscal years 2006 through 2015, which were the most current data available.
To provide a comparison, we followed the same procedure to obtain data for software companies over the same period. We selected software companies as a comparison because they have high research and development (R&D) and low manufacturing costs similar to drug companies. Sales revenues were adjusted to reflect real 2015 U.S. dollars using the gross domestic product price index. When examining sales revenues, profit margins, return on assets, and return on equity, analyses were limited to the subset of companies with complete data over the 10-year period for the variables included in the analysis. We could not determine how many companies existed throughout the review period but had no data available for any of the variables we examined. Profit margin, return on assets, and return on equity were each weighted by the company’s industry-specific sales revenue (pharmaceutical and biotechnology or software) prior to averages being computed. To identify the “largest 25” companies for analyses, we first restricted data to companies that had data for the variables being examined for 2006 through 2015, then identified the 25 drug companies with the largest pharmaceutical and biotechnology revenue in 2015. This provided a consistent cohort of large companies to examine longitudinally for each analysis. We also examined profit margins for the largest 500 companies by total worldwide 2015 sales revenue. We obtained a list of the largest 500 companies in 2015 from the Bloomberg Terminal that were still active during our review. Using this list, we downloaded each company’s BICS level 2 category; total sales revenue; pharmaceutical, biotechnology, and software revenues; and profit margins for each company for fiscal years 2006 through 2015. We removed any companies primarily classified by Bloomberg under one of those industries since we had analyzed these separately.
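The revenue weighting described above amounts to a weighted average, in which each company's margin counts in proportion to its industry-specific sales. A minimal sketch follows; the margins and revenues are invented for illustration, not drawn from Bloomberg data.

```python
# Minimal sketch of revenue-weighted averaging: each company's profit
# margin is weighted by its industry-specific sales revenue, so large
# companies dominate the industry average. Figures are hypothetical.

def weighted_average(values, weights):
    """Average `values` weighted by `weights` (e.g., margins by revenue)."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

margins = [20.0, 15.0, 5.0]    # percent, invented
revenues = [50.0, 30.0, 2.0]   # $ billions, invented
print(round(weighted_average(margins, revenues), 1))  # 17.8
```

Note that the small, low-margin company barely moves the result; an unweighted mean of the same margins would be 13.3 percent.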
For the remaining companies in our largest 500, we subtracted any reported pharmaceutical, biotechnology, and software revenues from their total sales revenues since some companies may have reported such revenues despite not being classified primarily as one of these types of companies. We then weighted each of the remaining companies’ profit margins by their remaining total sales revenue prior to calculating an average. This weighting differed slightly from the industry-specific sales weighting used in the earlier analyses of drug and software companies’ profit margins. For the software industry, the Congressional Budget Office only indicated that it had high R&D and low manufacturing costs similar to the drug industry; it did not suggest the same for other lines of business that software companies might additionally be involved in. Because we had no reason to isolate industry-specific revenues for our remaining largest 500 companies, we weighted their profit margins by their total sales revenues. As with the prior profit margin analyses, analysis of the largest 500 sales-weighted profit margins was limited to companies with data available for each of company fiscal years 2006 through 2015. Analysis of Mergers and Acquisitions For analyses of mergers and acquisitions, we again relied on data from the Bloomberg Terminal. We restricted our search to mergers and acquisitions that were completed from January 1, 2006, to December 31, 2015, and which featured a drug company on both sides of the transaction (e.g., as the acquirer and as the acquired company in the case of acquisition of a full company). The “largest 25” companies were determined by their 2015 pharmaceutical and biotechnology sales revenue only—because not every company could be expected to have a merger or acquisition transaction in every year, we did not make this a requirement to be included in the merger and acquisition analyses.
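Several of the analyses in this appendix convert nominal amounts to real 2015 U.S. dollars using the gross domestic product price index. A minimal sketch of that rescaling follows; the index values are hypothetical placeholders, not actual Bureau of Economic Analysis data.

```python
# Hedged sketch of converting nominal dollars to real 2015 dollars
# with the GDP price index. Index values below are illustrative
# placeholders (base year 2015 = 100), not actual BEA data.

GDP_PRICE_INDEX = {2006: 90.0, 2015: 100.0}

def to_real_2015_dollars(nominal, year):
    """Rescale a nominal amount from `year` into real 2015 dollars."""
    return nominal * GDP_PRICE_INDEX[2015] / GDP_PRICE_INDEX[year]

# A $9.0 billion transaction completed in 2006 becomes $10.0 billion
# in real 2015 dollars under these illustrative index values.
print(to_real_2015_dollars(9.0, 2006))  # 10.0
```

The same conversion applies to the sales revenues, R&D expenditures, federal obligations, and tax credit claims reported throughout, which is what makes amounts from different years comparable.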
We used what Bloomberg reported to be the completed transaction values in our analyses, and we adjusted the values to consistently reflect real 2015 dollars. Many companies were not included in analyses due to incomplete data; therefore, the results of our analyses of these data do not reflect the entire industry. Bloomberg obtains much of its information from public filings, which provide companies considerable leeway in deciding what to report and how. For mergers and acquisitions, approximately 40 to 50 percent of the completed transactions in Bloomberg’s data between 2006 and 2015 did not have disclosed transaction values. Bloomberg officials told us that transaction values are often missing for private companies. Analysis of Concentration To examine overall industry concentration we used pharmaceutical industry and company-specific sales data from QuintilesIMS from 2007 through 2014, the years for which data were publicly available. We also examined publicly available industry reports and generic drug approvals data for discussion of concentration across different therapeutic areas. Our findings on industry concentration and the variation of concentration across therapeutic classes are limited to these examples. How Reported Research and Development Spending and New Drug Approvals Have Changed Analysis of Research and Development Spending To examine how reported R&D spending changed over time, we analyzed data from the Business Research, Development and Innovation Survey maintained by the National Science Foundation’s (NSF) National Center for Science and Engineering Statistics for years 2008 through 2014, the most recent years for which data were consistently available. The Business Research, Development and Innovation Survey data are collected annually from a probability sample of for-profit companies with a U.S.
presence, which are classified in select manufacturing and nonmanufacturing industries based on their North American Industry Classification System (NAICS) code. We analyzed aggregate company-reported worldwide R&D expenditures and worldwide sales for respondent companies designated with NAICS code 3254 for pharmaceuticals and medicines. We also examined pharmaceutical company-reported domestic R&D expenditures by character of work—basic research, applied research, or development—as defined by NSF, as well as worldwide and domestic R&D expenditure by performer (whether R&D was paid for and performed by the company, or paid for by the company to be performed by others). We also examined worldwide expenditures and sales for companies designated as biotechnology research and development companies (NAICS 541711); however, estimates were not available for 2008 or 2014 and were less reliable in the years between. We therefore reported biotechnology expenditures and sales separately from pharmaceutical companies and limited the majority of our analysis to pharmaceutical companies. For comparison, we also examined worldwide R&D expenditure and sales for comparably large industries with high R&D intensity as well as all manufacturing and all non-manufacturing industries. All spending and sales data were adjusted to real 2015 U.S. dollars using the gross domestic product price index. We also examined the Business Research, Development and Innovation Survey sample selection and sampling error information for each year of the survey. Finally, we compared worldwide and domestic R&D expenditure and sales trends to spending and sales reported by Pharmaceutical Research and Manufacturers of America (PhRMA)—a national trade association.
To examine federal spending trends, we analyzed publicly available data from NSF’s National Center for Science and Engineering Statistics’ Survey of Federal Funds for Research and Development on obligations for research in biomedical related fields made by federal agencies identified as funding drug-related research between fiscal years 2008 and 2014, years consistent with available industry data from NSF’s Business Research, Development, and Innovation Survey. Data represent federal agency obligations for basic and applied research in the fields of biological sciences, medical sciences, and other life sciences as reported by federal agencies. Obligations were adjusted to real fiscal year 2015 U.S. dollars using the gross domestic product price index. We identified agencies that fund drug-related research based on interviews with officials from the National Institutes of Health (NIH), NSF, and other industry experts. The Survey of Federal Funds for Research and Development is a census of federal agencies that conduct R&D, and provides data on obligations by agency and field of science rather than by specific industry or use. Our estimates of federal spending may be imprecise because the data preclude us from pinpointing spending specific to drug R&D projects, and because the type of research that federal agencies typically fund often has an impact on many different research areas that may not be specific to drugs. We also reviewed budget documents from NIH and reviewed select studies for spending estimates by non-federal or industry sources. In addition, we obtained estimates of R&D spending by state and local governments and non-industry private funders for 2015 from National Health Expenditure account estimates. These estimates include spending for all biomedical research by these categories and thus also likely overestimate spending specific to drug development. 
Analysis of Tax Incentives

To identify tax provisions that provide incentives for drug research and development, we reviewed reports by the Joint Committee on Taxation and the Congressional Research Service. We obtained and analyzed aggregate tax return data from the Internal Revenue Service (IRS) Statistics of Income division for the orphan drug credit and research credit claimed by relevant industries and all returns (all industries) for years 2005 to 2014, the latest ten years for which data were available. Specifically, we analyzed claims from companies with IRS Principal Business Activity codes for pharmaceutical manufacturing, drug wholesalers, and scientific research. IRS’s industry codes are based on NAICS definitions, and corporations are instructed to report the industry code from which they derive the highest percentage of their total receipts. These data are reviewed by Statistics of Income division staff for accuracy. The scientific research industry category includes corporations conducting biotechnology research and development, but also includes firms conducting research in nanotechnology and physical, engineering, and life sciences. As a result, we chose not to report research credits claimed by corporations in the broader scientific research industry category as being related to drug development, but we do report orphan drug credits claimed by corporations in this industry category. We also obtained and examined reported qualified research expenses for pharmaceutical manufacturing companies for years 2005 to 2014. IRS’s Statistics of Income division produces estimates based on a representative stratified sample of corporate returns. IRS provided additional information on the corporations that reported claiming the orphan drug and research credits; in both cases a high percentage of the claims came from large corporations that are included in the stratified sample with certainty.
As a result, we concluded that the estimated credit totals are reliable given that the estimates are largely based on returns that were certain to be included in the sample. The amount of research and orphan drug credits claimed represents claims rather than amounts utilized due to limitations of the general business credit. Reported estimates therefore may reflect the upper bounds of what was utilized from claimed amounts. IRS also provided additional data on total deductions claimed for qualified research expenditures and amounts reported on financial statements from Form M-3 for 2010 to 2013. These data were limited to large corporations that filed Form M-3, which is required for corporations with $10 million or more of assets. All claims were adjusted to 2015 U.S. dollars using the gross domestic product price index.

Analysis of Drug Approvals

To examine trends in new drug approvals, we obtained and analyzed data from the Food and Drug Administration (FDA) for new drug applications (NDA) and biologic license applications (BLA) and NDA- and BLA-efficacy supplements approved by FDA’s Center for Drug Evaluation and Research between 2005 and 2016, the most recent years of available data at the time of our review. We determined which drugs FDA considered novel drugs by reviewing publicly available reports and resolving any discrepancies with agency officials. We analyzed these data to determine the type of drugs FDA approved, such as the product category and whether the drug was designated an orphan drug. Finally, we interviewed agency and industry experts and reviewed relevant academic, government, and industry literature on R&D investment trends and reasons for such trends.
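The estimation approach described in the tax analysis above, totals built from a stratified sample in which the largest corporations are included with certainty, can be expressed as a weighted sum. This is a hedged sketch of the general technique, not IRS’s actual estimation code; all figures and weights are hypothetical.

```python
# Sketch of a population total estimated from a stratified sample in which
# the largest filers are selected with certainty (sampling weight 1) and
# smaller filers are sampled and weighted up. All figures are hypothetical.

def estimated_total(sample):
    """sample: iterable of (claim_amount, sampling_weight) pairs."""
    return sum(amount * weight for amount, weight in sample)

certainty_stratum = [(5_000_000, 1.0), (3_200_000, 1.0)]  # large corporations
sampled_stratum = [(40_000, 25.0), (55_000, 25.0)]        # weight = 1 / selection probability

total_claims = estimated_total(certainty_stratum + sampled_stratum)
```

When most of the claimed dollars sit in the certainty stratum, as described above, sampling error affects only the small weighted remainder, which is why the estimated totals can be treated as reliable.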
What Is Known about the Potential Effects of Consolidation on Drug Prices and New Drug Development

Literature Search on Consolidation Impacts

To determine what is known about the impact of drug industry consolidation on drug price and drug development, we reviewed studies obtained from a literature search. To identify relevant publications, we used a number of bibliographic databases, including ProQuest, Scopus, PubMed, National Technical Information Service, Lexis, Social Science Research Network, and the National Bureau of Economic Research. We reviewed the following document types: scholarly peer-reviewed material, government reports, working papers, and policy research organization publications published in U.S. publications from 2005 forward. We concluded our searches in August 2017. To the resulting list of publications, we added articles identified in our own background research and articles suggested by industry experts, including certain heavily cited papers published prior to 2005. From the revised list, we selected publications that empirically evaluated the effect of drug industry consolidation (mergers and acquisitions) on drug price or innovation (new drug development or R&D spending). We also selected publications that included empirical analyses of drug industry or subindustry concentration or competition and drug price or drug development. Finally, we reviewed the data sources and methodology used to support the assertions of each publication and included those that met our methodological criteria. See the bibliography at the end of this report for the 22 publications included in our review.
Interviews

To inform our understanding of the drug industry for all three objectives, including structural changes that have taken place, reasons for consolidation trends, drivers of drug company R&D investment trends, and any impacts of consolidation on drug price or innovation, we interviewed drug industry experts, including three drug trade associations, four advocacy organizations, two financial ratings agencies, and officials from the FDA, IRS, NSF, Federal Trade Commission (FTC), and NIH. We selected these experts to obtain a variety of industry perspectives. We also interviewed seven academic economic experts about economic factors influencing consolidation and other structural changes, R&D investments, and potential consolidation impacts. We selected these economic experts based on citations in our literature review and suggestions from FDA and FTC officials.

Data Reliability

To ensure that the data used to produce this report were sufficiently reliable, we took several steps. We performed data reliability checks on the data we obtained from the Bloomberg Terminal, such as comparing select companies’ financial data to company annual reports, checking for outliers, and discussing reliability issues with Bloomberg representatives. We did not independently verify the accuracy or completeness of the information reported by the companies. We verified the reliability of NSF’s Business Research, Development and Innovation Survey data used in this report by reviewing relevant documentation, including relative standard errors for specific measures, and by interviewing agency officials who were knowledgeable about the data. We also interviewed knowledgeable NSF officials regarding the reliability of reported Federal Funds for Research and Development survey data and compared reported obligations to NIH budget documents.
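One of the reliability checks mentioned above, checking for outliers, can be approximated with a simple interquartile-range screen. This is only an illustrative sketch of one common approach, not GAO’s actual procedure, and the revenue figures below are hypothetical.

```python
import statistics

def flag_outliers(values, k=3.0):
    """Flag values lying more than k interquartile ranges beyond the quartiles."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical annual revenue figures (billions); the last value is suspect.
revenues = [534, 560, 612, 655, 700, 775, 9_999]
suspect = flag_outliers(revenues)  # values flagged for manual review
```

Flagged values would then be compared against an independent source, such as the company annual reports described above, rather than discarded automatically.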
To verify the reliability of aggregate tax return information, we reviewed relative standard errors for reported measures and interviewed knowledgeable agency officials. We verified the reliability of FDA-provided information by cross-referencing it against other published FDA sources and by interviewing knowledgeable agency officials. After taking these steps, we determined the data were sufficiently reliable for the purposes of our reporting objectives. We conducted this performance audit from April 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives.

Appendix II: Mergers and Acquisitions of Ten Large Drug Companies from 2006 through 2015

The following table reflects mergers and acquisition transactions from 2006 through 2015 for 10 large drug companies, as measured by their 2014 pharmaceutical and biotechnology revenue. Transactions reflect those reported in Bloomberg that were completed from January 1, 2006, through December 31, 2015, and had values of at least $500 million in real 2015 dollars.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contact named above, Robert Copeland, Assistant Director; Yesook Merrill, Assistant Director; Rebecca Abela, Analyst-in-Charge; Reed Meyer; Brandon Nakawaki; Edward Nannenhorn; Laurie Pachter; and Matthew Rabe made key contributions to this report. Also contributing were George Bogart, Muriel Brown, Sandra George, Sarah Gilliland, and Giselle Hicks.
Bibliography of Research Articles Used in GAO Literature Review

We reviewed literature to identify what is known about the impact of drug industry consolidation on drug price and drug development. We included publications that empirically evaluated the effect of drug industry consolidation (mergers and acquisitions) on drug price, of which we did not identify any publications. We also reviewed publications that included empirical analyses of the impact of concentration or competition on drug price.

Berndt, Ernst R., and Murray L. Aitken, Brand Loyalty, Generic Entry and Price Competition in Pharmaceuticals in the Quarter Century after the 1984 Waxman-Hatch Legislation, National Bureau of Economic Research Working Paper 16431 (October 2010).

Berndt, Ernst R., and Rena M. Conti, Specialty Drug Prices and Utilization After Loss of U.S. Patent Exclusivity, 2001-2007, National Bureau of Economic Research Working Paper 20016 (March 2014).

Berndt, Ernst R., Rena M. Conti, and Stephen J. Murphy, The Landscape of US Generic Prescription Drug Markets, 2004-2016, National Bureau of Economic Research Working Paper 23640 (July 2017).

Dave, Chintan V., Aaron S. Kesselheim, Erin R. Fox, Peihua Qiu, and Abraham Hartzema. “High Generic Drug Prices and Market Competition: A Retrospective Cohort Study.” Annals of Internal Medicine, vol. 167, no. 3 (2017): 145-151.

Department of Health and Human Services. U.S. Food and Drug Administration. “Generic Competition and Drug Prices.” 2015. Accessed July 31, 2017. https://www.fda.gov/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDER/ucm129385.htm

Grabowski, Henry G., David B. Ridley, and Kevin A. Schulman. “Entry and Competition in Generic Biologics.” Managerial and Decision Economics, vol. 28, no. 4/5 (2007): 439-451.

Iacocca, Kathleen, James Sawhill, and Yao Zhao. “Why Brand Drugs Priced Higher Than Generic Equivalents.” International Journal of Pharmaceutical and Healthcare Marketing, vol. 9, no. 1 (2015): 3-19.
IMS Institute for Healthcare Informatics. Price Declines After Branded Medicines Lose Exclusivity in the U.S. (Parsippany, N.J.: IMS Institute for Healthcare Informatics, 2016).

Lu, Z. John, and William S. Comanor. “Strategic Pricing of New Pharmaceuticals.” The Review of Economics and Statistics, vol. 80, no. 1 (1998): 108-118.

Olson, Luke M., and Brett W. Wendling, Working Paper No. 317: The Effect of Generic Drug Competition on Generic Drug Prices During the Hatch-Waxman 180-Day Exclusivity Period, Bureau of Economics, Federal Trade Commission (Washington, D.C.: April 2013).

Regan, Tracy L. “Generic Entry, Price Competition, and Market Segmentation in the Prescription Drug Market.” International Journal of Industrial Organization, vol. 26, no. 4 (2008): 930-948.

Richard, Oliver, and Larry Van Horn. “Persistence in Prescriptions of Branded Drugs.” International Journal of Industrial Organization, vol. 22, no. 4 (2004): 523-540.

Tenn, Steven, and Brett W. Wendling. “Entry Threats and Pricing in the Generic Drug Industry.” The Review of Economics and Statistics, vol. 96, no. 2 (2014): 214-228.

We also reviewed publications that empirically evaluated the effect of drug industry consolidation on innovation—including new drug development or R&D spending—as well as publications on the impact of concentration or competition on innovation.

Banerjee, Tannista, and Arnab Nayak. “Comparing Domestic and Cross-Border Mergers and Acquisitions in the Pharmaceutical Industry.” Atlantic Economic Journal, vol. 43, no. 4 (2015): 489-499.

Comanor, William S., and F.M. Scherer. “Mergers and Innovation in the Pharmaceutical Industry.” Journal of Health Economics, vol. 32 (2013): 106-113.

Danzon, Patricia M., Andrew Epstein, and Sean Nicholson. “Mergers and Acquisitions in the Pharmaceutical and Biotech Industries.” Managerial and Decision Economics, vol. 28, no. 4/5 (2007): 307-328.

Higgins, Matthew J., and Daniel Rodriguez.
“The Outsourcing of R&D Through Acquisitions in the Pharmaceutical Industry.” Journal of Financial Economics, vol. 80 (2006): 351-383.

Getz, Kenneth A., Rachael Zuckerman, Joseph A. DiMasi, and Kenneth I. Kaitin. “Drug Development Portfolio and Spending Practices After Mergers and Acquisitions.” Drug Information Journal, vol. 43, no. 4 (2009): 493-500.

Grabowski, Henry, and Margaret Kyle. “Mergers and Alliances in Pharmaceuticals: Effects on Innovation and R&D Productivity,” in The Economics of Corporate Governance and Mergers. Northampton, M.A.: Edward Elgar Publishing, Inc., 2008.

Munos, Bernard. “Lessons from 60 Years of Pharmaceutical Innovation.” Nature Reviews Drug Discovery, vol. 8 (2009): 959-968.

Ornaghi, Carmine. “Mergers and Innovation in Big Pharma.” International Journal of Industrial Organization, vol. 27, no. 1 (2009): 70-79.

Thakor, Richard T., and Andrew W. Lo. Competition and R&D Financing Decisions: Theory and Evidence from the Biopharmaceutical Industry, National Bureau of Economic Research Working Paper 20903 (September 2015).

Related GAO Products

Investigational New Drugs: FDA Has Taken Steps to Improve the Expanded Access Program but Should Further Clarify How Adverse Events Data Are Used. GAO-17-564. Washington, D.C.: July 11, 2017.

Generic Drug User Fees: Application Review Times Declined, but FDA Should Develop a Plan for Administering Its Unobligated User Fees. GAO-17-452. Washington, D.C.: May 25, 2017.

Physician-Administered Drugs: Comparison of Payer Payment Methodologies. GAO-16-780R. Washington, D.C.: August 1, 2016.

Generic Drugs Under Medicare: Part D Generic Drug Prices Declined Overall, but Some Had Extraordinary Price Increases. GAO-16-706. Washington, D.C.: August 12, 2016.

Medicare Part B: Data on Coupon Discounts Needed to Evaluate Methodology for Setting Drug Payment Rates. GAO-16-643. Washington, D.C.: July 27, 2016.

Drug Shortages: Certain Factors Are Strongly Associated with This Persistent Public Health Challenge.
GAO-16-595. Washington, D.C.: July 7, 2016.

Medicare Part B: CMS Should Take Additional Steps to Verify Accuracy of Data Used to Set Payment Rates for Drugs. GAO-16-594. Washington, D.C.: July 1, 2016.

Corporate Income Tax: Most Large Profitable U.S. Corporations Paid Tax but Effective Tax Rates Differed Significantly from the Statutory Rate. GAO-16-363. Washington, D.C.: March 17, 2016.

Drug Safety: FDA Expedites Many Applications, But Data for Postapproval Oversight Need Improvement. GAO-16-192. Washington, D.C.: December 15, 2015.

Medicare Part B: Expenditures for New Drugs Concentrated among a Few Drugs, and Most Were Costly for Beneficiaries. GAO-16-12. Washington, D.C.: October 23, 2015.

Prescription Drugs: Comparison of DOD, Medicaid, and Medicare Part D Retail Reimbursement Prices. GAO-14-578. Washington, D.C.: June 30, 2014.

Drug Shortages: Public Health Threat Continues, Despite Efforts to Help Ensure Product Availability. GAO-14-194. Washington, D.C.: February 10, 2014.

Corporate Tax Expenditures: Evaluations of Tax Deferrals and Graduated Tax Rates. GAO-13-789. Washington, D.C.: September 16, 2013.

Prescription Drugs: Comparison of DOD and VA Direct Purchase Prices. GAO-13-358. Washington, D.C.: April 19, 2013.

Medicare Part D Coverage Gap: Discount Program Effects and Brand-Name Drug Price Trends. GAO-12-914. Washington, D.C.: September 28, 2012.

International Taxation: Information on Foreign-Owned but Essentially U.S.-Based Corporate Groups Is Limited. GAO-12-794. Washington, D.C.: July 16, 2012.

Prescription Drugs: FDA Has Met Performance Goals for Reviewing Applications. GAO-12-500. Washington, D.C.: March 30, 2012.

Drug Pricing: Research on Savings from Generic Drug Use. GAO-12-371R. Washington, D.C.: January 31, 2012.

Prescription Drugs: Trends in Usual and Customary Prices for Commonly Used Drugs. GAO-11-306R. Washington, D.C.: February 10, 2011.
Brand-Name Prescription Drug Pricing: Lack of Therapeutically Equivalent Drugs and Limited Competition May Contribute to Extraordinary Price Increases. GAO-10-201. Washington, D.C.: December 22, 2009.

Tax Policy: The Research Tax Credit’s Design and Administration Can Be Improved. GAO-10-136. Washington, D.C.: November 6, 2009.

Prescription Drugs: Improvements Needed in FDA’s Oversight of Direct-to-Consumer Advertising. GAO-07-54. Washington, D.C.: November 16, 2006.

New Drug Development: Science, Business, Regulatory, and Intellectual Property Issues Cited as Hampering Drug Development Efforts. GAO-07-49. Washington, D.C.: November 17, 2006.
Why GAO Did This Study

Retail prescription drug expenditures were estimated to account for about 12 percent of total personal health care service spending in the United States in 2015, up from about 7 percent through the 1990s. Much of this growth was driven by use of expensive brand-name drugs, but price increases have been reported for some generic drugs as well. Prior GAO reports have identified multiple reasons for drug price increases, including limited competition. Experts have questioned whether consolidation among drug companies could reduce competition and R&D investment in new drugs. GAO was asked to examine changes in the drug industry. This report describes: (1) how the financial performance and structure of the industry have changed over time, (2) how reported R&D spending and new drug approvals have changed, and (3) what is known about the potential effects of consolidation on drug prices and new drug development. GAO analyzed Bloomberg drug industry financial data for 2006 through 2015, and examined select publicly available estimates of company market shares for 2014 and market shares for certain therapeutic classes for 2016. GAO also analyzed estimates of company self-reported R&D spending and federal funding for biomedical R&D data, aggregate tax credit claims data, and drug approval data for the same approximate time period. All data were the most current available. In addition, GAO also reviewed published research and interviewed federal agency officials, economists, and representatives from industry and advocacy groups.

What GAO Found

GAO's analysis of revenue, profit margin, and merger and acquisition deals within the worldwide drug industry from 2006 through 2015 identified key trends:

Estimated pharmaceutical and biotechnology sales revenue increased from $534 billion to $775 billion in 2015 dollars.

About 67 percent of all drug companies saw an increase in their annual average profit margins from 2006 to 2015.
Among the largest 25 companies, annual average profit margin fluctuated between 15 and 20 percent. For comparison, the annual average profit margin across non-drug companies among the largest 500 globally fluctuated between 4 and 9 percent.

The number of reported mergers and acquisitions generally held steady during this period, but the median disclosed deal value increased.

The largest 10 companies had about 38 percent of the drug industry's sales revenue in 2014. However, concentration was higher for narrower markets, such as for certain drugs in the same therapeutic class. In addition, experts noted that market pressures such as rising research and development (R&D) costs, fewer drugs in development, and competition from generic drugs have driven structural changes in the industry, such as increased use of acquisition by large drug companies to obtain access to new research.

From 2008 through 2014, worldwide company-reported R&D spending, most of which went to drug development (rather than research), increased slightly from $82 billion to $89 billion in 2015 dollars. During the same period, federal spending, which funded a greater amount of basic research relative to industry, remained stable at around $28 billion. In addition to grants, several federal tax provisions provided incentives for industry R&D spending, including the orphan drug credit, available for companies developing drugs intended to treat rare diseases, which increased more than five-fold from 2005 through 2014. Pertaining to drug approvals, the total number of new drugs approved for marketing in the United States fluctuated between 2005 and 2016, ranging from 179 to 263 drug approvals annually. Novel drugs—innovative products that serve previously unmet medical need or help advance patient care—accounted for about 13 percent of all approvals each year.
Biologics—drugs derived from living rather than chemical sources—and orphan drugs accounted for growing shares of drug approvals, reflecting market and policy incentives to invest in these areas, according to experts GAO interviewed. Research GAO reviewed indicates that fewer competitors in the drug industry are associated with higher prices, particularly for generic drugs. Research also suggests that drug company mergers can have varied impacts on innovation as measured by R&D spending, patent approvals, and drug approvals. Certain merger retrospective studies have found a negative impact on innovation. The Department of Health and Human Services, Federal Trade Commission, Internal Revenue Service, and National Science Foundation provided technical comments on a draft of this report, which we incorporated as appropriate.
Background

Medicaid Section 1115 Demonstrations

Nearly three-quarters of states (37 as of November 2016) have CMS-approved Medicaid section 1115 demonstrations, which allow states to test new approaches to coverage and to improve quality and access or generate savings or efficiencies. CMS has approved demonstrations for a wide variety of purposes. For example, under demonstrations, states have extended coverage to populations or for services not otherwise eligible for Medicaid, made payments to providers to incentivize delivery system improvements, and, more recently, expanded Medicaid to certain low-income adults by using Medicaid funds to purchase private health insurance coverage. While state demonstrations vary in size and scope, many are comprehensive in nature, affecting multiple aspects of states’ Medicaid programs simultaneously. For example, Kansas’s demonstration, approved in 2012, significantly expands the use of managed care to deliver physical, behavioral, and long-term care services to almost all the state’s Medicaid populations, care that for some populations was previously provided on a fee-for-service basis. The demonstration also established a funding pool of up to $344 million to provide payments to hospitals to finance uncompensated care. Kansas’s demonstration expenditures accounted for about 94 percent of the state’s total Medicaid expenditures in fiscal year 2015. In fiscal year 2015, federal spending under demonstrations represented a third of all Medicaid spending nationwide. In 10 states, federal spending on demonstrations represented 75 percent or more of all federal spending on Medicaid. (See fig. 1.) Demonstrations are typically approved by CMS for an initial 5-year period (referred to as a demonstration cycle), but some states have operated portions of their Medicaid programs under a demonstration for decades. This can be achieved through a series of renewals approved by CMS, generally occurring every 3 to 5 years.
What a state is testing and implementing under its demonstration can change from one cycle to the next. States often make changes to their demonstrations, either through the renewal process or by requesting an amendment during the demonstration cycle. These changes can be relatively small or can be significant and can represent testing of a new approach for the state. For example, at renewal a state could request approval to expand coverage to a new population or add requirements that beneficiaries share in the cost of care by paying a monthly premium.

CMS Oversight of State-Led Evaluations

CMS has long required states to conduct evaluations of section 1115 demonstrations. CMS oversees the evaluations and can influence them at several key points during the demonstration process.

Application review and approval: When a state applies for a demonstration, CMS reviews the state’s application, which describes the goals and objectives of the demonstration and what the demonstration will test, among other things. As part of the review and approval process, CMS negotiates with the state on the STCs, including evaluation requirements. These requirements might include, for example, reporting timeframes and broad standards for the evaluation, such as standards around the independence of the evaluator and acceptable evaluation methods.

Evaluation design phase: After a demonstration is approved, states are required to submit an evaluation design to CMS for review and approval. The evaluation design must discuss, among other things, the hypotheses that will be tested, the data that will be used, and how the effects of the demonstration will be isolated from other changes occurring in the state. During review of the design, CMS can seek adjustments such as requiring the state to address certain objectives or using particular performance measures.
Demonstration renewal: In the event that a state wishes to renew its demonstration, it must generally submit an application to CMS at least 1 year before the demonstration is scheduled to expire. The application must include, among other things, a report presenting the evaluation’s findings to date, referred to as an interim evaluation report. CMS can use the information from the interim evaluation report to negotiate changes in the STCs for the evaluation of the next demonstration cycle. If CMS renews the demonstration, the evaluation process starts over with the state submitting a new evaluation design that reflects changes in what is being tested in the new cycle.

Demonstration end: CMS requires states to submit a final evaluation report for review and approval generally after the end of the demonstration, at which time the agency can work with the state to, for example, add clarity and disclose the limitations of the evaluation before the final evaluation report is made public.

Within the framework that CMS has established for state-led evaluations, states design evaluations to the specifics of their demonstrations. As the size and scope of demonstrations varies considerably across states, so, too, can evaluations vary in their breadth and complexity. State-led evaluations may assess the effects of several different policies, each with its own set of hypotheses—predictions of the effects of the policy—and methods. For example, a state could evaluate the effects of moving to a managed care delivery model for providing managed long-term services and supports (referred to as MLTSS), implementing provider payment pools aimed at delivery system reform, and expanding coverage to a new population all within the same demonstration. Each of those three elements would have its own hypotheses and methods and may have varying timeframes for the number of years of experience needed to be able to effectively measure the effects of what is being tested.
Federal Evaluations

CMS has the authority to initiate its own federal evaluations of section 1115 demonstrations, and states must fully cooperate with any such evaluations. Between 2014 and 2016, CMS initiated three federal evaluations that were ongoing as of November 2017. The first evaluation, initiated in 2014, is a large, multi-state evaluation examining four broad demonstration types in several states. (See table 1.) According to CMS, it selected these demonstration types—which together account for tens of billions of dollars in federal and state Medicaid spending—because they included policies that the agency considered priority areas for evaluation. CMS awarded a contract to an evaluation organization to implement the 5-year study. According to CMS, the estimated total cost of this evaluation for the 5-year life of the contract is $8.3 million. The evaluation was designed to produce three sets of results: a series of reports providing contextual information about the demonstrations being evaluated, referred to as rapid cycle reports; interim evaluation reports featuring early results of more in-depth analysis; and final evaluation reports. CMS contracted with another evaluation organization to conduct two federal evaluations examining demonstrations in single states—Indiana and Montana—over 4 years. As of September 2017, the estimated cost of this contract, inclusive of all options, was $8.2 million. In total, spending for Indiana’s and Montana’s demonstrations was about $2 billion in fiscal year 2015, including $1.6 billion in federal spending.

Indiana: CMS initiated this evaluation in 2015. CMS officials told us they started this evaluation to better understand how policies in Indiana’s demonstration, many of which were unprecedented, were affecting beneficiaries.
These policies included, for example, charging monthly contributions for most newly eligible adults with incomes from 0 to 138 percent of the federal poverty level; imposing a lock-out period of 6 months for nonpayment of premiums for most people with incomes above the federal poverty level; and charging co-payments above statutory levels for non-urgent use of emergency room services. The federal evaluation is aimed at estimating the effects of Indiana’s demonstration on health insurance coverage and access to and use of care, and documenting beneficiary understanding of enrollment, disenrollment, and copayment policies, among other things.

Montana: CMS initiated this evaluation in 2016. CMS officials told us they started this evaluation to provide a point of comparison to Indiana’s demonstration, as Montana was implementing policies similar to Indiana’s but with some variations. For example, under Montana’s demonstration, the state charges premiums to most newly eligible adults with incomes between 51 and 138 percent of the federal poverty level, and disenrolls beneficiaries with incomes above the federal poverty level for nonpayment of premiums, with reenrollment when overdue premiums are paid. Similar to the federal evaluation of Indiana’s demonstration, the evaluation of Montana’s demonstration is aimed at estimating the effects of the demonstration on insurance coverage and access to and use of care, and documenting beneficiary understanding of and experience with premiums, copayments, enrollment, and disenrollment, among other things.

Limitations in State-Led Evaluations Hindered Their Usefulness and May Not Be Fully Addressed by CMS Improvements

State-led evaluations of demonstrations in selected states often had significant methodological weaknesses and gaps in results that affected their usefulness for federal decision-making.
Though CMS has been taking steps since 2014 to improve the quality of these evaluations, the agency has not established written procedures to help implement some of these improvements.

State-Led Evaluations in Selected States Often Had Significant Limitations That Affected Their Usefulness in Informing Federal Decision-Making

The state-led evaluations we reviewed in our selected states often had methodological limitations that affected what could be concluded about the demonstrations' effects. CMS hired a contractor to review state evaluation designs and reports, and that contractor identified a number of methodological concerns with the evaluations in our selected states. For example, CMS's contractor raised concerns about the comparison groups, or lack thereof, used to isolate and measure the effects of the demonstrations in the Arkansas, California, Indiana, and Maryland evaluations. The contractor also raised concerns with the sufficiency of sample sizes and survey response rates for beneficiary surveys in Indiana. These surveys were key methods for assessing the effect of demonstrations on access, beneficiary understanding, and perceptions of affordability. Finally, the contractor raised concerns with the analysis of the effects of the demonstration on cost in Arkansas, California, and Maryland.

Officials in several states told us that some of the methodological limitations in their evaluations were difficult to control. For example, officials in two states told us that isolating the effects of the demonstration was difficult given other changes happening in the state's health care system at the same time. Some state officials also noted that state resources, including both funding and staff capacity, present challenges in completing robust evaluations.

In one selected state, the demonstration included a Delivery System Reform Incentive Payment (DSRIP) program, with approved funding up to about $690 million.
Under the demonstration STCs, the state was required to evaluate whether the seven hospitals participating in the DSRIP were able to show improvements on certain outcome measures related to improving quality of care, improving population health and access to care, and reducing the per capita costs of health care. However, the evaluation report, submitted by the state 5 years after approval of the DSRIP program, provided only descriptive or summary information about the number and types of projects implemented by the hospitals receiving payments and did not provide any data measuring, or conclusions about, the effects of those payments.

Arkansas: Under its demonstration, the state was testing the effects of using Medicaid funds to provide premium assistance for the more than 200,000 beneficiaries newly eligible under PPACA to purchase private insurance offered through the state's health insurance exchange. The state's evaluation was designed to assess whether beneficiaries would have equal or better access to care and equal or better outcomes than they would have had in the Medicaid fee-for-service system. The evaluation was also aimed at examining continuity of coverage for beneficiaries, as the expansion population was anticipated to have frequent income fluctuations leading to changes in eligibility and gaps in coverage. However, evaluation results submitted over two and a half years into the demonstration—the only results submitted for the state's first cycle—were limited to data only from the first year of the demonstration and did not provide data on continuity of coverage. Achieving continuity of coverage was part of the state's rationale for using an alternative approach to Medicaid expansion.

Arizona: Among other things, Arizona's demonstration includes MLTSS, including for the particularly complex populations of adults who have intellectual and developmental disabilities and for children with disabilities.
As part of its evaluation, the state was assessing whether the quality of and access to care, as well as quality of life, would improve during the demonstration period for long-term care beneficiaries enrolled in MLTSS. However, evaluation results submitted in October 2016—the only results submitted for the state's most recently completed demonstration cycle—lacked data on key measures of access, such as hospital readmission rates, and on quality of life, such as beneficiaries' satisfaction with their health plan, provider, and case manager.

A key contributor to the gaps in the information included in the state-led evaluations we reviewed was that CMS historically had not required the states to submit final, comprehensive evaluation results at the end of each demonstration cycle. As a result, for our selected states, including those discussed above, CMS had received only interim evaluation reports that were generally based on more limited data from the early years of the demonstration cycle and did not include all of the analyses planned. Though CMS had required final evaluation reports in the demonstration STCs, the due dates for those reports were tied to the expiration of the demonstrations or, in one case, CMS did not enforce the specified due date. Under such conditions, due dates for final evaluation reports were effectively pushed out when the demonstrations were renewed, and they could be pushed out for multiple cycles. CMS officials acknowledged that the lack of data in the interim evaluation reports from the more mature years of the demonstration affected the conclusions that could be drawn from them. We found that due dates for final evaluation reports were pushed out upon renewal in all seven of our selected states that had completed a demonstration cycle, leading to a gap in evaluation reporting of up to 6 or 7 years for several states.
In Maryland, for example, CMS approved the demonstration to run from 2013 to 2016 with a final evaluation report due 120 days after the expiration of the demonstration. In 2016, CMS extended the demonstration, pushing the deadline for the final evaluation report to 18 months following the end of the new cycle, or June 2023. At that time, it will be 7 years since the interim evaluation report was submitted. (See figure 2.)

The limitations in state-led evaluations—including methodological weaknesses and gaps in results—have, in part, hindered CMS's use of them to inform its policy decisions. CMS officials told us that, historically, state-led evaluations have generally provided descriptive information but lacked evidence on outcomes and impacts. As a result, officials noted that they consider the data reported in the evaluations but, generally, state-led evaluations have not been particularly informative to their policy decisions. CMS officials told us that there have been cases where data, but not the conclusions, from state-led evaluations have informed their thinking on certain policy changes. For example, CMS officials said that data reported in early evaluations of DSRIP programs helped them in considering whether and how the agency should modify the basic policy structure of these programs.

State officials had mixed perspectives on whether state-led evaluations influenced CMS decision-making around renewing their demonstrations. Officials in one state told us that while CMS reviewed their interim evaluation results, the results did not appear to influence the negotiations around the demonstration renewal. In contrast, officials from another state told us that discussion of interim evaluation results and limitations was a significant part of negotiations in 2016 regarding whether CMS would be willing to reauthorize funding for certain programs, including a new DSRIP investment and broader delivery system reforms the state was trying to implement.
Officials in several states told us that there was value to state-led evaluations and in the federal-state partnership in designing the evaluations.

CMS Is Taking Steps to Improve the Quality of State-Led Evaluations, but Lacks Written Procedures to Ensure That All Evaluations Will Be Subject to New Requirements

CMS has implemented several procedures since 2014 aimed at improving the quality of state-led evaluations. CMS officials told us that these changes were part of CMS placing increased focus on monitoring and evaluation, which also resulted in CMS establishing a new office in 2015 that is responsible for these activities. One of the key changes CMS began implementing in 2014 was to set more explicit requirements for evaluations in the STCs, including requirements to improve the evaluation methodologies. According to CMS officials, the agency realized that one reason why state-led evaluations had generally lacked rigor and been of limited usefulness was that CMS had not been setting clear expectations for evaluations in the STCs. The officials said that CMS began strengthening evaluation requirements starting in 2014 with demonstrations implementing approaches in CMS's high priority policy areas. In our review of the STCs for current demonstration cycles in our seven selected states that had completed a demonstration cycle, all of which were approved in 2014 or later, we found evidence of CMS's efforts. Specifically, we found an increased focus on the use of independent evaluators and more explicit expectations for rigor in the design and conduct of evaluations:

Consistent requirements for independent evaluators. The STCs for the most recently approved cycle of demonstrations in all seven states required the state to use an independent evaluator to conduct the evaluation.
In some cases, the STCs also required that the evaluation design discuss the process to acquire the independent evaluator, including describing the contractor's qualifications and how the state will assure no conflict of interest. These requirements were new in most states.

More explicit expectations for rigor. In four of the seven states we reviewed, the STCs for the most recently approved cycle of states' demonstrations included new, explicit language requiring state evaluations to meet the prevailing standards of scientific and academic rigor. These included standards for the evaluation design and conduct as well as the interpretation and reporting of findings. Some states' STCs further specified the characteristics of rigor that CMS expected, including using the best available data, discussing the generalizability of results, and using controls and adjustments for and reporting the limitations of data and their effects on results. According to CMS, in the past, states have not always discussed methodological limitations in their evaluation reports.

In addition to strengthening evaluation requirements, CMS has also taken steps since 2014 to enhance its oversight during the design and early stages of state-led evaluations, and, according to officials, some of these steps are likely to improve the usefulness of evaluations. Specifically, CMS has provided technical assistance to help states design their evaluations, sometimes leveraging expertise from other parts of HHS, including the HHS Office of the Assistant Secretary for Planning and Evaluation and the Center for Medicare & Medicaid Innovation, as well as outside contractors. For example, officials stated that the agency assists states in developing relevant and standardized measures and provides assistance to help address states' data limitations. Officials said this has resulted in more robust evaluation designs with increased potential to isolate outcomes and impacts.
CMS has also used contractors to help in its review of state evaluation designs, including sampling designs, and evaluation reports. Since 2014, one contractor has provided over 30 assessments of evaluation designs and findings in at least 11 states. According to officials, this has increased CMS's capacity to identify methodological weaknesses and negotiate changes with states to improve the usefulness of evaluations. For example, CMS's contractor reviewed four draft survey instruments that Indiana planned to use in its evaluation, providing comments on the sampling frames and the structure and organization of survey questions. In response to the contractor's feedback, Indiana made changes to the surveys to gather more reliable information and improve their readability.

Finally, CMS has begun making changes to how it sets due dates for final evaluation reports. CMS officials told us that in spring 2017, CMS began requiring states to submit a comprehensive evaluation report for demonstrations in its high priority policy areas for evaluation at the end of each demonstration cycle, rather than after the expiration of the demonstration. CMS's recent demonstration renewals in Florida and Missouri—approved in August and September of 2017, respectively—required a final, summative evaluation report at the end of the demonstration cycle, consistent with the policy. In October 2017, CMS officials stated that the agency was expanding this policy and was now planning to require final reports at the end of each cycle for all demonstrations, as they are approved or renewed. However, CMS had not established written procedures for implementing this new policy.

It is too soon to assess the effectiveness of CMS's recent efforts to strengthen state-led evaluations. CMS has been implementing the strategies on a rolling basis as states apply for demonstration renewals and new demonstrations.
If implemented and enforced consistently, CMS's efforts to improve the quality of state-led evaluations have the potential to result in more conclusive evaluations. Further, these efforts and CMS's plan to require final reports after each demonstration cycle are consistent with evaluation guidance from the American Evaluation Association, which recommends that federal agencies conduct evaluations of public programs and policies throughout the programs' life cycles, not just at their end, and that agencies use evaluations to improve programs and assess their effectiveness. Federal internal control standards also state that management should implement control activities through policies. However, CMS does not have written procedures for implementing its planned policy, such as procedures for ensuring that the requirement is included in the STCs for all demonstrations, despite unique negotiations with each state, and that those requirements are consistently enforced. As a result, some state-led evaluations could continue to produce only limited, interim findings that leave critical questions about the effects of these demonstrations on beneficiaries and costs unanswered.

CMS oversight of state-led evaluations may see further changes, as CMS officials told us that their oversight procedures are still evolving. For example, CMS officials told us that as of October 2017 the agency plans to begin to make distinctions in the level of evaluation required across demonstrations. They said that they are considering, for example, whether longstanding and largely unchanged components of a demonstration, and approaches previously tested by a number of other states without concern, require the same level of evaluation as testing a new approach to Medicaid expansion.
Officials said that they plan to include language in demonstration STCs, as the agency did in the recent renewals for Florida and Missouri, instructing the state to consider those factors as the state designs its evaluation. Specifically, in the evaluation design submitted for CMS approval, the state should include in its discussion of limitations whether the demonstration is long-standing, noncomplex, has previously been rigorously evaluated and found to be successful, or is otherwise considered to be successful without issues or concerns. CMS officials said that the expected level of rigor for the evaluation could be balanced against such factors.

The implications of limiting evaluation requirements for certain types of demonstration approaches would depend on CMS's definitions of what is, for example, noncomplex or has previously been rigorously evaluated. As of October 2017, CMS had not established specific criteria for determining when a demonstration component would require less rigorous evaluation. Agency officials told us they were planning to develop such criteria after concluding a pilot of alternative criteria and expectations in certain demonstrations related to providing services for family planning and former foster care children. They said that when these pilots have concluded they will evaluate the results. It is unclear how these narrowly scoped demonstrations—scoped for a particular type of service or population—can be used to inform criteria for comprehensive demonstrations that can affect a state's entire Medicaid population and all services. Further, though CMS has begun indicating to states, including those with comprehensive demonstrations, that the agency may allow less rigorous evaluations for certain types of demonstration approaches, CMS has not established timeframes for issuing the criteria defining those conditions.
Federal standards for internal control stress that management should implement control activities through policy and should internally and externally communicate necessary information to achieve the agency's objectives. If CMS does not establish clear criteria for components of demonstrations that require limited evaluation, characteristics such as "long-standing" or "noncomplex" could be broadly interpreted. This could result in demonstrations that receive significant amounts of federal funds and affect many beneficiaries not being thoroughly evaluated. Written criteria could also reduce the potential for inconsistencies in the level of evaluation required across demonstrations.

Ongoing Federal Evaluations Led by CMS Have Been Limited by Data Challenges and It Is Uncertain When Results Will Be Available

Data and other challenges have significantly limited the scope and progress of CMS's large, multi-state evaluation and the agency's evaluation of Indiana's demonstration. Further, CMS has not released available evaluation results from the multi-state evaluation nor set timeframes for making these and future federal evaluation findings public.

Data Challenges Have Limited the Scope and Progress of Federal Evaluations

CMS encountered numerous data challenges in its multi-state evaluation that significantly reduced the scope of the analyses planned. These data challenges included limitations in the quality of CMS data and delays obtaining data directly from states. These limitations caused CMS to narrow the evaluation's scope, often by reducing the number of state demonstrations evaluated or limiting what was being examined. All four demonstration types targeted in the multi-state evaluation—which reflect CMS's high priority policy areas—were affected by these challenges. In the most extreme case, data limitations reduced the scope of the MLTSS evaluation to two states out of the more than 20 states operating such programs.
As a result, the evaluation findings will not be generalizable to all MLTSS programs. (See table 2.) The data challenges were in addition to other challenges that affected the evaluation. For example, there were difficulties in trying to isolate demonstration effects in the context of rapidly changing health systems, and some recent demonstrations had not been in operation long enough to allow CMS to appropriately assess longer-term effects.

Many of the data challenges CMS encountered in the multi-state evaluation reflect long-standing concerns with the lack of accurate, complete, and timely Medicaid data. Specifically, we and others have found that data states are required to submit to CMS have, at times, been incomplete or have not been reported at all, particularly managed care encounter data. Complicating the availability of these data is CMS's ongoing transition to a new data system, the Transformed Medicaid Statistical Information System (T-MSIS), which is CMS's primary effort to improve Medicaid expenditure and utilization data. States' transitions to T-MSIS, however, have introduced substantial delays in state data submissions. For example, by 2015, a large number of states had stopped submitting data through the legacy information system until they established T-MSIS submissions, which meant CMS had to obtain data directly from individual states for the multi-state evaluation.

New data challenges have also emerged as states under demonstrations have enrolled newly eligible beneficiaries in health insurance exchange coverage. The lack of accessible data on beneficiaries enrolled in plans offered through the exchange resulted in delays in obtaining data for Arkansas for the multi-state evaluation. In the past, we have made recommendations to CMS to take action to improve the data available for Medicaid program oversight, including improvements to T-MSIS.
As with the multi-state evaluation, data challenges, particularly obtaining needed data from the state, also proved to be a significant hurdle in CMS's evaluation of Indiana's demonstration. CMS initiated its federal evaluation of Indiana's demonstration in 2015 to understand how the approaches being tested in Indiana's demonstration affected beneficiaries (see sidebar). However, in 2016, Indiana raised concerns about sharing enrollee data with CMS's evaluation contractors. Specifically, in a letter to CMS, the state cited concerns about the controls that CMS had in place to ensure that its contractors would protect enrollee information consistent with state and federal privacy protections. Despite assurances by CMS, CMS's contractor and the state were not able to execute a data use agreement. This effectively halted the evaluation's progress, because the data use agreement was necessary for the contractor to access state enrollment data that drove a number of planned evaluation activities, including a key beneficiary survey. In October 2017, CMS officials told us that they were continuing to work with the state and anticipated that a data use agreement would be executed and the federal evaluation of Indiana's demonstration would proceed. They did not have timeframes for when the agreement would be reached.

Despite the data challenges and delays, CMS's evaluations of Medicaid demonstrations, as planned, are likely to provide new information on the effects of demonstrations in different states to inform policy decisions. The multi-state evaluation, for example, is expected to provide information on whether living in a state that collects monthly contributions from beneficiaries affects the likelihood of beneficiaries enrolling in Medicaid and how per-beneficiary spending differs between premium assistance demonstration states and states that have implemented more traditional Medicaid expansions.
CMS officials emphasized that federal evaluations allow for cross-state analyses that can be used to validate the findings of related studies and also to identify which findings are generalizable to other states and populations.

CMS Has Not Released Rapid Cycle Reports and It Is Uncertain When Final Evaluation Results Will Be Available

CMS has yet to make initial reports from the multi-state evaluation publicly available, limiting the potential use of those findings by states and other federal policymakers. As of October 2017, CMS's contractor had produced 15 rapid cycle reports on states' progress in implementing demonstrations in the high priority policy areas. These reports provide information on states' implementation of their demonstrations and variations in design, and provide details that can help with the interpretation of evaluation results, inform federal policymaking, and provide lessons learned to states and other stakeholders. The reports also describe policy and other challenges states encountered in implementing their programs, which could be useful to other states interested in replicating these models. (See table 3.)

However, despite having received some of these reports from its contractor in 2015, CMS had not released these findings as of October 2017. CMS officials said that the reports were still under agency review and acknowledged that since some of the rapid cycle reports were almost 2 years old, CMS's contractor was reviewing and updating the information in them. CMS officials noted that the rapid cycle reports had provided useful information and had influenced ongoing work with states designing related demonstrations. For example, according to officials, findings from the rapid cycle reports played a part in how the agency structured the latest DSRIP demonstrations.
They also said that rapid cycle reports on beneficiary engagement have shed light on the effectiveness of different beneficiary education strategies, such as what approaches are more successful in capturing beneficiaries' attention and what strategies are easiest for states to implement. In October 2017, CMS officials stated that they had recently decided to make the rapid cycle reports public, although the agency's clearance process for the reports was still being decided and the officials did not have timeframes for the reports' release.

It is also uncertain when CMS will make interim and final evaluation reports from the multi-state evaluation public. By September 2017, CMS's contractor for the multi-state evaluation had produced three interim evaluation reports covering the four demonstration types. CMS officials regard these as draft interim evaluation reports, and, as of October 2017, said they were under agency review and would not be publicly released. CMS expects the contractor to submit final interim evaluation reports, which are anticipated to include some additional information beyond the draft reports, by September 2018, about 1 year later than when the final interim evaluation reports were originally due. CMS officials said that the agency planned to release the final interim evaluation reports, although there was no specific timetable for this.

Timeframes for the completion and release of final evaluation results are even more uncertain, both because of the delays in the evaluation's progress and because CMS has no standard policy on timeframes for releasing evaluation results. It is also uncertain when evaluation results will be available and made public for CMS's evaluations of the Indiana and Montana demonstrations. Two years after the approval of the contract for the Indiana evaluation, CMS's contractor had produced an evaluation design but no evaluation findings.
CMS did not post the evaluation design on its website until November 2017, according to officials, about 1 year after it was originally submitted. As discussed above, the lack of findings is due to the contractor and state not having negotiated a data use agreement. To the extent that Indiana's evaluation moves forward and evaluation reports are produced, CMS officials said the agency plans to release the final evaluation report but did not indicate whether interim findings, available a year earlier, would be released. With regard to the Montana evaluation, CMS expects to receive the interim evaluation report by September 2018 and the final evaluation report by September 2019. How soon these findings would be publicly available, however, is difficult to estimate, as CMS officials told us the agency must review them before making them publicly available and does not have timeframes for this review.

The lack of a standard policy for the public release of findings from federal evaluations of Medicaid demonstrations is inconsistent with recommendations of the American Evaluation Association. The Association recommends that evaluation findings related to public accountability be disseminated to the public, and that evaluation results be made available in a timely manner and be easily accessible through the internet. For state-led evaluations, CMS must post on its website, or provide a link to the state's website, all evaluation materials, including research and data collection, for the purposes of sharing findings with the public within 30 days of receiving the materials. CMS has not established a comparable policy for the release of findings from federal evaluations of demonstrations.
CMS officials stated that federal evaluations provide a unique cross-state perspective that states typically do not have the capacity to provide in their own state-led evaluations; however, if these reports are not made public in a timely fashion, opportunities may be missed to inform federal and state policymakers and other stakeholders on the effects of Medicaid demonstrations.

Conclusions

Section 1115 demonstrations have long been an important tool for providing states with the flexibility to test new approaches to providing and financing Medicaid coverage. Given the potential effects on millions of beneficiaries and significant federal investment in these demonstrations—over $100 billion in 2015—it is critical that they be evaluated. Evaluating Medicaid demonstrations is complex, both within a single state and across states. These programs are dynamic, and there are many factors affecting outcomes, making it challenging to isolate the effects of policy changes implemented under a demonstration. Further, persistent challenges with Medicaid data that we have highlighted over the years add to the complexity of evaluating demonstrations. Despite these challenges, targeted and well-designed evaluations offer the potential to identify policies that improve outcomes for beneficiaries and reduce costs to Medicaid. With the growing complexity of Medicaid programs and limited resources, that information could prove key in helping to sustain the program.

CMS's approach to overseeing state-led evaluations in the past has resulted in limited information about the effects of demonstrations, leaving gaps in evidence about policies that might improve state Medicaid programs. CMS's efforts since 2014 to improve the usefulness of evaluations in informing state and federal Medicaid policy decisions have promise.
If CMS consistently sets and enforces clear expectations and provides support for rigorous and timely state-led evaluations for all demonstrations as planned, those evaluations could yield more useful information within the next several years. However, CMS has not established written procedures for requiring final, comprehensive evaluation reports at the end of each cycle for all demonstrations, a key step in improving the usefulness of state-led evaluations. Further, CMS is planning to allow less rigorous evaluations for some demonstrations but has not yet established specific criteria for doing so.

Federal evaluations led by CMS also show promise. The evaluations currently underway—despite challenges that caused delays and reduced scope—are likely to provide a cross-state look at the effects of policies that are of great interest to CMS, Congress, and other states. However, CMS has not yet made potentially useful rapid cycle reports public and has no established policy for making future evaluation reports public. By not making the results of the federal evaluations public in a timely manner, CMS is missing an opportunity to inform important policy discussions happening at the state and federal levels.

Recommendations for Executive Action

We are making the following three recommendations to CMS:

The Administrator of CMS should establish written procedures for implementing the agency's policy that requires all states to submit a final evaluation report after the end of each demonstration cycle, regardless of renewal status. (Recommendation 1)

The Administrator of CMS should issue written criteria for when CMS will allow limited evaluation of a demonstration or a portion of a demonstration, including defining conditions, such as what it means for a demonstration to be longstanding or noncomplex, as applicable.
(Recommendation 2)

The Administrator of CMS should establish and implement a policy for publicly releasing findings from federal evaluations of demonstrations, including findings from rapid cycle, interim, and final reports; and this policy should include standards for timely release. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for review and comment. HHS concurred with all three recommendations. Regarding our first recommendation that CMS establish written procedures for implementing its policy requiring states to submit final evaluation reports after the end of each demonstration cycle, HHS said that it is in the process of developing such written procedures. HHS said that it is currently making this a requirement through the STCs for each demonstration as demonstrations are approved or renewed. Regarding our second recommendation that CMS issue written criteria for when the agency will allow states to limit evaluations of their demonstrations, HHS said it is in the process of testing such criteria, and that once it has experience with the criteria, it will develop written guidance. Regarding our third recommendation that CMS establish and implement a policy for publicly releasing findings from federal evaluations of demonstrations, HHS said that CMS is in the process of establishing such a policy. HHS added that CMS plans to have all finalized federal rapid cycle reports and final interim evaluation reports publicly available in the near future. HHS also provided technical comments, which we incorporated as appropriate. HHS's comments are reproduced in appendix II.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties.
The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Characteristics of Selected States’ 1115 Demonstrations

The Medicaid section 1115 demonstrations (referred to as demonstrations) in our eight selected states varied in terms of the number of years the demonstrations had been in effect and cost, among other things. For example, three of the more mature demonstrations—those in Maryland, Massachusetts, and New York—had been in place for two decades. Demonstrations in Arkansas and Kansas represented more recent approvals, both approved in 2013. (See table 4.) With regard to cost, all of the selected states were among the top 15 states in terms of amount of spending under demonstrations. Together, spending under demonstrations in our selected states accounted for about 47 percent of all spending under demonstrations in fiscal year 2015.

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Susan Barnidge (Assistant Director), Linda McIver (Analyst-in-Charge), John Lalomio, Hannah Locke, and Corissa Kiyan-Fukumoto made key contributions to this report. Also contributing were Laurie Pachter and Emily Wilson.
Why GAO Did This Study

Demonstrations—which represented roughly a third of the more than $300 billion in federal Medicaid spending in 2015—are a powerful tool to test new approaches to providing coverage and delivering Medicaid services that could reduce costs and improve beneficiaries' outcomes. Evaluations are essential to determining whether demonstrations are having their intended effects. States are required to evaluate their demonstrations, and CMS can initiate its own federal evaluations of demonstrations.

GAO was asked to examine evaluations of demonstrations, including how the results have been used to inform Medicaid policy. This report examines (1) state-led evaluations and (2) federal evaluations. GAO reviewed evaluation documentation for eight states with high demonstration expenditures that varied in the number of years their demonstrations had been in effect and by geography. GAO also reviewed documentation for the ongoing federal evaluations and interviewed state and federal Medicaid officials. GAO assessed evaluation practices against federal standards for internal control and leading evaluation guidelines.

What GAO Found

Under section 1115 of the Social Security Act, the Secretary of Health and Human Services (HHS) may approve Medicaid demonstrations to allow states to test new approaches to providing coverage and delivering services that can transform large portions of states' programs. However, GAO found that selected states' evaluations of these demonstrations often had significant limitations that affected their usefulness in informing policy decisions. The limitations included gaps in reported evaluation results for important parts of the demonstrations. (See table.) These gaps resulted, in part, from HHS's Centers for Medicare & Medicaid Services (CMS) requiring final, comprehensive evaluation reports after the expiration of the demonstrations rather than at the end of each 3- to 5-year demonstration cycle.
CMS has taken a number of steps since 2014 to improve the quality of state-led evaluations, and in October 2017, officials stated that the agency planned to require final reports at the end of each demonstration cycle for all demonstrations. However, the agency has not established written procedures for implementing such requirements, which could allow for gaps to continue. CMS also plans to allow states to conduct less rigorous evaluations for certain types of demonstrations but has not established criteria defining under what conditions limited evaluations would be allowed.

Federal evaluations led by CMS have also been limited due to data challenges that have affected the progress and scope of the work. For example, delays obtaining data directly from states, among other things, led CMS to considerably reduce the scope of a large, multi-state evaluation, which was initiated in 2014 to examine the impact of state demonstrations in four policy areas deemed to be federal priorities. Though CMS has made progress in obtaining needed data, it is uncertain when results from the multi-state and other federal evaluations will be available to policymakers because CMS has no policy for making results public. By not making these results public in a timely manner, CMS is missing an opportunity to inform important federal and state policy discussions.

What GAO Recommends

GAO recommends that CMS: (1) establish written procedures for requiring final evaluation reports at the end of each demonstration cycle, (2) issue criteria for when it will allow limited evaluations of demonstrations, and (3) establish a policy for publicly releasing findings from federal evaluations of demonstrations. HHS concurred with these recommendations.
GAO-18-427
Background

Prior to the President’s March 2017 executive order for comprehensive government reorganization, in January 2017, the President ordered a federal hiring freeze—providing exemptions for federal employees with national security or public safety responsibilities. The January 2017 presidential memo also directed OMB, in consultation with the Office of Personnel Management (OPM), to recommend a long-term plan to reduce the size of the federal workforce through attrition. OMB’s April 2017 guidance to agencies on their reform plans lifted the federal hiring freeze. Figure 1 below shows a timeline for proposed reform development and implementation.

According to OMB’s April 2017 guidance, the agency reform plans were intended to accomplish several objectives, including creating a lean, accountable, more efficient government; focusing on efficiency and effectiveness and delivering programs of highest needs to citizens; and aligning the federal workforce to meet the needs of today and the future, among other things. Each agency’s proposed reform plan was to include proposals to improve efficiency, effectiveness, and accountability in four categories: (1) eliminate activities; (2) restructure and merge activities; (3) improve organizational efficiency and effectiveness; and (4) workforce management. To support these proposed reforms, OMB asked agencies to conduct an analysis, among other things, to consider whether there was a unique federal role or whether some or all services, activities, or functions could be better performed by another entity, such as a state, local, or tribal government or the private sector. Additionally, according to OMB’s April 2017 guidance, the draft agency proposed reform plan should be aligned with the agency strategic plan. Agency strategic plans were to be released with the President’s fiscal year 2019 budget.
The final reforms included in the fiscal year 2019 budget also were to be reflected in the agencies’ human capital operating plans and information technology strategic plans, based on OMB guidance we reviewed. In March 2018, OMB released the President’s Management Agenda (PMA), which provided updated information on the status of government reorganization efforts and is connected with these reform efforts. The PMA also identified a set of cross-agency priority (CAP) goals, required under the GPRA Modernization Act of 2010 (GPRAMA), to target those areas where multiple agencies must collaborate to effect change and report progress in a manner the public can easily track.

In addition to the agency reform proposals, OMB was also required by the March 2017 executive order to develop a comprehensive government-wide reform plan, including both legislative proposals and administrative actions based on agency reform plans, OMB-coordinated crosscutting proposals, and public input. According to a document provided by OMB staff, OMB solicited public comments beginning in April 2017 through June 2017 to inform the development of the government-wide reform plan. OMB staff told us they provided these comments to the appropriate agencies. The March 2018 PMA stated that, in the months ahead, the administration plans to share additional reorganization proposals designed to refocus programs around current and future needs.

According to OMB guidance, once the government-wide reform proposals are finalized, it will, in coordination with the President’s Management Council, establish a way to track the progress of the reforms. To track progress of the reforms, OMB’s guidance stated that it will leverage the federal performance planning and reporting framework originally put into place by the Government Performance and Results Act of 1993 (GPRA) and significantly enhanced by GPRAMA, through the use of CAP goals, agency priority goals, and Performance.gov.
Key Questions to Assess Agency Reforms

Given the potential benefits and challenges of developing and implementing agency reform efforts, Congress and the executive branch need the tools and information to help evaluate agencies’ reform proposals and ensure they are effectively implemented. Congress’s role in reviewing agency proposed reforms will be critical to the success of making significant changes in how the government operates. To assist Congress in its oversight role, we organized our prior work and leading practices into the following four broad categories that can help the Congress assess proposed reforms. Figure 2 describes the four broad categories; the relevant sub-categories of questions and selected key questions are discussed in more detail below.

Goals and Outcomes of Reforms

Lessons learned from prior federal reform and reorganization efforts suggest that reforming government is an immensely complex activity that requires agreement on both the goals to be achieved and the means for achieving them. Because many current federal programs and policies were designed decades ago to respond to trends and challenges that existed at the time of their creation, it makes sense to periodically conduct fundamental reviews of major programs and policy areas to ensure they continue to meet current goals and emerging trends. It is also important to determine the appropriate level of government, or the roles of the non-profit or private sectors, in achieving these goals. Our prior work shows that establishing a mission-driven strategy and identifying specific desired outcomes to guide that strategy are critical to achieving intended results. In other words, what is the agency trying to achieve with its reforms?
Determining the Appropriate Role of the Federal Government

It is important for agencies to reexamine the role of the federal government in carrying out specific missions and programs, policies, and activities by reviewing their continued relevance and determining whether the federal government is best suited to provide that service or if it can be provided by some other level of government or sector more efficiently or effectively. Another key aspect of shifting federal activities to other levels of government is how well the federal government fully considered the potential effects reforms might have on state and local governments, especially from a budgetary and fiscal standpoint. For example, how should the federal government act directly, or in partnership with another level of government or a non-profit organization, to achieve the identified outcomes? Defining the appropriate federal role also involves examining the federal government’s relationships with key state, local, non-profit, and private sector partners. For example, agencies should assess whether there are alternatives for managing their programs effectively across intergovernmental and organizational boundaries, as well as which level of government has the capacity to deliver on the nation’s needs and priorities today and in the future.

How well have the proposed reforms indicated the likely result of the elimination, merging, or restructuring of activities with other levels of government or sectors?

To what extent have the proposed reforms included consideration of other levels of government’s or sectors’ ability or likelihood to invest their own resources to address the underlying challenges?

To what extent have the proposed reforms included goals to transfer a particular responsibility to another level of government—such as state or local government—or sector, and has the agency made the case that such a transfer could improve the overall accomplishment of public purpose?
To what extent have the proposed reforms considered whether a new mechanism is needed to integrate and coordinate programs between levels of government? If so, what statutory or regulatory changes would be needed to support such a transfer in responsibilities and to address concerns such as cost-sharing or funding?

To what extent has the agency identified any risks of using contractors to perform agency activities, and if so, has it developed appropriate risk mitigating strategies?

Establishing Goals and Outcomes

When considering government reforms, our prior work has identified useful principles, such as designing proposed reforms to achieve specific, identifiable goals that encourage decision makers to reach a shared understanding of the purpose of the reforms. Agreement on specific goals can help decision makers determine what problems genuinely need to be fixed, how to balance differing objectives, and what steps need to be taken to create not just short-term advantages but long-term gains. Part of determining if agencies have successfully identified the goals of their proposed reforms is to determine whether the agency has built a business case analysis that presents facts and supporting details among competing alternatives.

To what extent has the agency established clear outcome-oriented goals and performance measures for the proposed reforms?

To what extent has the agency shown that the proposed reforms align with the agency’s mission and strategic plan?

To what extent has the agency considered and resolved any agency crosscutting or government-wide issues in developing its proposed reforms? For example, what are the implications of proposed reforms on other agencies?

To what extent has the agency considered the likely costs and benefits of the proposed reforms? If so, what are they?

To what extent has the agency considered how the upfront costs of the proposed reforms would be funded?
To what extent has the agency included both short-term and long-term efficiency initiatives in the proposed reforms?

Process for Developing Reforms

Successful reforms require an integrated approach that involves employees and key stakeholders and is built on the use of data and evidence. Reforms should also address agency management challenges, such as those we have identified as fragmented, duplicative, or overlapping; those in our high-risk program; or those identified by agency Inspectors General.

Involving Employees and Key Stakeholders

Our prior work has shown that it is important for agencies to directly and continuously involve their employees, the Congress, and other key stakeholders—such as other federal partners, state and local governments, and members of the public—in the development of any major reforms. Involving employees, customers, and other stakeholders helps facilitate the development of reform goals and objectives, as well as incorporating insights from a frontline perspective, and increases customer acceptance of any changes. We have also identified leading practices for open innovation strategies, defined as the use of activities and technologies to harness ideas, expertise, and resources of those outside an organization to address an issue or achieve specific goals.

How and to what extent has the agency consulted with the Congress, and other key stakeholders, to develop its proposed reforms?

How and to what extent has the agency engaged employees and employee unions in developing the reforms (e.g., through surveys, focus groups) to gain their ownership for the proposed changes?

How and to what extent has the agency involved other stakeholders, as well as its customers and other agencies serving similar customers or supporting similar goals, in the development of the proposed reforms to ensure the reflection of their views?

How and to what extent has the agency considered the views of state and local governments that would be affected by the proposed reforms?
How and to what extent have agencies gathered the views of the public and incorporated these views in the proposed reforms?

Is there a two-way continuing communications strategy that listens and responds to concerns of employees regarding the effects of potential reforms?

How will the agency publicize its reform goals and timeline, and report on its related progress?

Using Data and Evidence

We have reported that agencies are better equipped to address management and performance challenges when managers effectively use data and evidence, such as from program evaluations and performance data that provide information on how well a program or agency is achieving its goals. When reforming a given program, the use of data and evidence is critical, from setting program priorities and allocating resources to taking corrective action to solve performance problems and ultimately improve results. We have also stated that full and effective implementation of GPRAMA could facilitate efforts to reform the federal government and make it more efficient, effective, and accountable. GPRAMA also provides important tools that can help decision makers address challenges facing the federal government.

What data and evidence has the agency used to develop and justify its proposed reforms?

How has the agency determined that the evidence contained sufficiently reliable data to support a business case or cost-benefit analysis of the reforms?

How, if at all, were the results of the agency’s strategic review process used to help guide the proposed reforms?

How, if at all, were the results of the agency’s enterprise risk management process used to help guide the proposed reforms?

Addressing Fragmentation, Overlap, and Duplication

In our prior work, we have identified areas where agencies may be able to achieve greater efficiency or effectiveness by reducing or better managing programmatic fragmentation, overlap, and duplication.
For additional details on assessing areas of fragmentation, overlap, and duplication, see our evaluation and management guide.

To what extent has the agency addressed areas of fragmentation, overlap, and duplication—including the ones we identified—in developing its reform proposals?

To what extent have the agency reform proposals helped to reduce or better manage the identified areas of fragmentation, overlap, or duplication?

To what extent has the agency identified cost savings or efficiencies that could result from reducing or better managing areas of fragmentation, overlap, and duplication?

Addressing High-Risk Areas and Longstanding Management Challenges

Reforms improving the effectiveness and responsiveness of the federal government often require addressing longstanding weaknesses in how some federal programs and agencies operate. For example, agency reforms provide an opportunity to address the high-risk areas and government-wide challenges we have called attention to that are vulnerable to fraud, waste, abuse, and mismanagement, or are in need of transformation.

What management challenges and weaknesses are the reform efforts designed to address?

How specifically has the agency considered high-risk issues, agency Inspector General’s major management challenges, and other external and internal reviews in developing its reform efforts?

Have the agency’s efforts to address those challenges been consistent with the proven approach GAO has found to resolve high-risk issues? Agencies can show progress by addressing GAO’s five criteria for removal from the High-Risk List: leadership commitment, capacity, action plan, monitoring, and demonstrated progress. The five criteria form a road map for efforts to improve and ultimately address high-risk issues.
How has the agency identified and addressed critical management challenges in areas such as information technology, cybersecurity, acquisition management, and financial management that can assist in the reform process?

How does the agency plan to monitor the effects proposed reforms will have on high-risk areas?

Has the agency addressed ways to decrease the risk of fraud, waste, and abuse of programs as part of its proposed reforms?

In addition, agencies should also draw upon our past recommendations, including GAO priority open recommendations and those from their own Inspectors General, to address management challenges.

How have findings and open recommendations from GAO and the agency Inspectors General been addressed in the proposed reforms?

How has the agency addressed GAO’s priority open recommendations, which are those that warrant priority attention from heads of key departments and agencies?

Implementing the Reforms

Our prior work on organizational transformations shows that incorporating change management practices improves the likelihood of successful reforms. Moreover, it is also important to recognize agency cultural factors that can either help or inhibit reform efforts and how change management strategies may address these potential issues. We have also reported that organizational transformations, such as reforms, should be led by a dedicated team of high-performing leaders within the agency. Finally, our prior work also shows that fully implementing major transformations can span several years and must be carefully and closely managed.

Leadership Focus and Attention

Has the agency designated a leader or leaders to be responsible for the implementation of the proposed reforms?

Has agency leadership defined and articulated a succinct and compelling reason for the reforms (i.e., a case for change)?

How will the agency hold the leader or leaders accountable for successful implementation of the reforms?
Has the agency established a dedicated implementation team that has the capacity, including staffing, resources, and change management, to manage the reform process?

Managing and Monitoring

How has the agency ensured its continued delivery of services during reform implementation?

What implementation goals and timeline have been set to build momentum and show progress for the reforms? In other words, has the agency developed an implementation plan with key milestones and deliverables to track implementation progress?

Has the agency ensured transparency over the progress of its reform efforts through web-based reporting on key milestones?

Has the agency put processes in place to collect the needed data and evidence that will effectively measure the reforms’ outcome-oriented goals?

How is the agency planning to measure customer satisfaction with the changes resulting from its reforms?

Strategically Managing the Federal Workforce

In its April 2017 reform guidance, OMB also required agencies to develop a long-term workforce reduction plan and a plan to maximize employee performance. Specifically, OMB required agencies to develop proposals intended to improve performance, increase accountability, and reduce the size and costs of the federal workforce. Our prior work has found that at the heart of any serious change management initiative are the people—because people define the organization’s culture, drive its performance, and embody its knowledge base. Experience shows that failure to adequately address—or often even consider—a wide variety of people and cultural issues can lead to unsuccessful change.

Employee Engagement

Research on both private- and public-sector organizations has found that increased levels of engagement—generally defined as the sense of purpose and commitment employees feel toward their employer and its mission—can lead to better organizational performance.
Additionally, we found that agencies can sustain or increase their levels of employee engagement and morale, even as employees weather difficult external circumstances. In a previous review of trends in federal employee engagement, we identified six key drivers of engagement, shown in figure 3 below, based on our analysis of selected questions in the Federal Employee Viewpoint Survey (FEVS).

What do FEVS results show for the agency’s current employee engagement status, both overall and disaggregated to lower organizational levels?

How does the agency plan to sustain and strengthen employee engagement during and after the reforms?

How specifically is the agency planning to manage diversity and ensure an inclusive work environment in its reforms, or as it considers workforce reductions?

Strategic Workforce Planning

Strategic workforce planning should precede any staff realignments or downsizing, so that changed staff levels do not inadvertently produce skills gaps or other adverse effects that could result in increased use of overtime and contracting.

To what extent has the agency conducted strategic workforce planning to determine whether it will have the needed resources and capacity, including the skills and competencies, in place for the proposed reforms or reorganization?

How has the agency assessed the effects of the proposed agency reforms on the current and future workforce, and what does that assessment show?

To what extent does the agency track the number and cost of contractors supporting its agency mission and the functions those contractors are performing?

How has the agency ensured that actions planned to maintain productivity and service levels do not cost more than the savings generated by reducing the workforce?

What succession planning has the agency developed and implemented for leadership and other key positions in areas critical to reforms and mission accomplishment?
To what extent have the reforms included important practices for effective recruitment and hiring, such as customized strategies to recruit highly specialized and hard-to-fill positions?

What employment- and mission-related data has the agency identified to monitor progress of reform efforts and to ensure no adverse impact on agency mission, and how is it using that data?

Workforce Reduction Strategies

Before implementing workforce reduction strategies, it is critical that agencies carefully consider how to strategically downsize the workforce and maintain the staff resources to carry out their missions. Agencies should consider long-term staffing plans and associated personnel costs, organizational design and position structures, and the appropriateness of backfilling positions as they become vacant.

To what extent has the agency considered skills gaps, mission shortfalls, increased contracting and spending, and challenges in aligning the workforce with agency needs prior to implementing workforce reduction strategies?

In situations when “early outs” and “buyouts” are proposed, to what extent has the agency linked proposed early outs and buyouts to specific organizational objectives, including the agency’s future operational, restructuring, downsizing, or other reform goals?

Employee Performance Management

Performance management systems are used to plan work and set individual employee performance expectations, monitor performance, develop capacities to perform, and rate and incentivize individual performance. In addition, performance management systems can help the organization manage employees on a daily basis and help to ensure that individual employees understand the “line of sight” between their performance and organizational results. Effective performance management systems provide supervisors and employees with the tools they need to improve performance.

To what extent has the agency aligned its employee performance management system with its planned reform goals?
How has the agency included accountability for proposed change implementation in the performance expectations and assessments of leadership and staff at all levels?

As part of the proposed reform development process, to what extent has the agency assessed its performance management system to ensure it creates incentives for and rewards top performers, while ensuring it deals with poor performers?

To what extent has the agency taken action to address employees with unacceptable performance and increase the use of alternative dispute resolution to address workplace disputes that involve disciplinary or adverse actions?

Agency Comments and Our Evaluation

We provided a draft of this report to the Director of the Office of Management and Budget for review and comment. OMB staff provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the Office of Management and Budget, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact J. Christopher Mihm at (202) 512-6806 or [email protected] or Robert Goldenkoff at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III.

Appendix I: Related GAO Products

Organizational Transformation and Streamlining Government

GAO, Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms, GAO-12-1022 (Washington, D.C.: Sep. 27, 2012).
GAO, Streamlining Government: Questions to Consider When Evaluating Proposals to Consolidate Physical Infrastructure and Management Functions, GAO-12-542 (Washington, D.C.: May 23, 2012).

GAO, Government Efficiency and Effectiveness: Opportunities for Improvement and Considerations for Restructuring, GAO-12-454T (Washington, D.C.: Mar. 21, 2012).

GAO, Streamlining Government: Key Practices from Select Efficiency Initiatives Should Be Shared Governmentwide, GAO-11-908 (Washington, D.C.: Sep. 30, 2011).

GAO, Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations, GAO-03-669 (Washington, D.C.: Jul. 2, 2003).

GAO, A Call For Stewardship: Enhancing the Federal Government's Ability to Address Key Fiscal and Other 21st Century Challenges, GAO-08-93SP (Washington, D.C.: Dec. 17, 2007).

GAO, 21st Century Challenges: Reexamining the Base of the Federal Government, GAO-05-325SP (Washington, D.C.: Feb. 1, 2005).

GAO, Regulatory Programs: Balancing Federal and State Responsibilities for Standard Setting and Implementation, GAO-02-495 (Washington, D.C.: Mar. 20, 2002).

Fragmentation, Duplication, and Overlap

GAO, 2018 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits, GAO-18-371SP (Washington, D.C.: Apr. 26, 2018).

GAO, Fragmentation, Overlap, and Duplication: An Evaluation and Management Guide, GAO-15-49SP (Washington, D.C.: Apr. 14, 2015).

High-Risk and Major Management Challenges

GAO, High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others, GAO-17-317 (Washington, D.C.: Feb. 15, 2017).

GAO, Managing for Results: Selected Agencies’ Experiences in Implementing Strategic Reviews, GAO-17-740R (Washington, D.C.: Sep. 7, 2017).

GAO, Enterprise Risk Management: Selected Agencies' Experiences Illustrate Good Practices in Managing Risk, GAO-17-63 (Washington, D.C.: Dec. 1, 2016).
GAO, Managing for Results: Practices for Effective Agency Strategic Reviews, GAO-15-602 (Washington, D.C.: Jul. 29, 2015). Contracting and National Security Acquisitions GAO, Federal Procurement: Smarter Buying Initiatives Can Achieve Additional Savings, but Improved Oversight and Accountability Needed, GAO-17-164 (Washington, D.C.: Oct. 26, 2016). GAO, Framework for Assessing the Acquisition Function At Federal Agencies, GAO-05-218 (Washington, D.C.: Sep. 1, 2005). GAO, Improper Payments: Strategy and Additional Actions Needed to Help Ensure Agencies Use the Do Not Pay Working System as Intended, GAO-17-15 (Washington, D.C.: Oct. 14, 2016). GAO, Financial Management Systems: Experience with Prior Migration and Modernization Efforts Provides Lessons Learned for New Approach, GAO-10-808 (Washington, D.C.: Sep. 8, 2010). GAO, Financial Management Systems: Additional Efforts Needed to Address Key Causes of Modernization Failures, GAO-06-184 (Washington, D.C.: Mar. 15, 2006). GAO, Executive Guide: Creating Value Through World-class Financial Management (Supersedes AIMD-99-45), AIMD-00-134 (Washington, D.C.: Apr. 1, 2000). GAO, Information Technology: Further Implementation of FITARA Related Recommendations Is Needed to Better Manage Acquisitions and Operations, GAO-18-234T (Washington, D.C.: Nov. 15, 2017). GAO, Information Technology: Opportunities for Improving Acquisitions and Operations, Highlights of a Forum Convened by the Comptroller General of the United States, GAO-17-251SP (Washington, D.C.: Apr. 11, 2017). GAO, Cybersecurity: Federal Efforts Are Under Way That May Address Workforce Challenges, GAO-17-533T (Washington, D.C.: Apr. 4, 2017). GAO, IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps, GAO-17-8 (Washington, D.C.: Nov. 30, 2016). GAO, Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority, GAO-16-686 (Washington, D.C.: Aug. 
26, 2016). GAO, Digital Service Programs: Assessing Results and Coordinating with Chief Information Officers Can Improve Delivery of Federal Projects, GAO-16-602 (Washington, D.C.: Aug. 15, 2016). GAO, Information Technology Reform: Billions of Dollars in Savings Have Been Realized, but Agencies Need to Complete Reinvestment Plans, GAO-15-617 (Washington, D.C.: Sept. 15, 2015). Strategically Managing the Federal Workforce GAO, Federal Workforce: Additional Analysis and Sharing of Promising Practices Could Improve Employee Engagement and Performance, GAO-15-585 (Washington, D.C.: Jul. 14, 2015). GAO, Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps, GAO-15-223 (Washington, D.C.: Jan. 30, 2015). GAO, Federal Workforce: Improved Supervision and Better Use of Probationary Periods Are Needed to Address Substandard Employee Performance, GAO-15-191 (Washington, D.C.: Feb. 6, 2015). GAO, Results-Oriented Management: OPM Needs to Do More to Ensure Meaningful Distinctions Are Made in SES Ratings and Performance Awards, GAO-15-189 (Washington, D.C.: Jan. 22, 2015). GAO, Human Capital: Strategies to Help Agencies Meet Their Missions in an Era of Highly Constrained Resources, GAO-14-168 (Washington, D.C.: May 7, 2014). GAO, Human Capital: Agencies Are Using Buyouts and Early Outs with Increasing Frequency to Help Reshape Their Workforces, GAO-06-324 (Washington, D.C.: Mar. 31, 2006). GAO, Issues Related to Poor Performers in the Federal Workplace, GAO-05-812R (Washington, D.C.: June 29, 2005). GAO, Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government (Supersedes GAO-03-893G), GAO-04-546G (Washington, D.C.: Mar. 1, 2004). GAO, Human Capital: Key Principles for Effective Strategic Workforce Planning, GAO-04-39 (Washington, D.C.: Dec. 11, 2003). 
GAO, Results-Oriented Culture: Creating a Clear Linkage between Individual Performance and Organizational Success, GAO-03-488 (Washington, D.C.: Mar. 14, 2003). GAO, Federal Downsizing: Effective Buyout Practices and Their Use in FY 1997, GGD-97-124 (Washington, D.C.: Jun. 30, 1997). GAO, Performance Management: How Well Is the Government Dealing With Poor Performers?, GGD-91-7 (Washington, D.C.: Oct. 2, 1990). GAO, Recent Government-Wide Hiring Freezes Prove Ineffective in Managing Federal Employment, FPCD-82-21 (Washington, D.C.: Mar. 10, 1982). GAO, Key Issues: Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information - High Risk Issue, accessed April 24, 2018, https://www.gao.gov/key_issues/ensuring_security_federal_information_systems/issue_summary. GAO, Key Issues, Duplication and Cost Savings, Action Tracker, https://www.gao.gov/duplication/overview#t=1, accessed April 24, 2018, an online tool for monitoring the progress federal agencies and Congress have made in addressing the actions identified in GAO's annual Duplication and Cost Savings reports. Appendix II: Subject Matter Specialists Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the above contact, Sarah E. Veale, Assistant Director, Thomas Gilbert, Assistant Director, and Carole J. Cimitile, Analyst-in-Charge, supervised the development of this report. Layla Y. Moughari, Steven Putansu, and Robert Robinson made significant contributions to this report. Kayla Robinson provided legal counsel.
Why GAO Did This Study On March 13, 2017, the President issued an executive order requiring a comprehensive reorganization of executive branch agencies. In April 2017, the Office of Management and Budget (OMB) provided guidance to federal agencies for developing their reform and workforce reduction proposals. Past proposals to reform and reorganize government have not always come to fruition and can take years to implement fully. GAO's prior work has shown that successful reforms or transformations depend upon following change management practices, such as agreement on reform goals, and the involvement of the Congress, federal employees, and other key stakeholders. This report identifies the key questions that Congress, OMB, and agencies can use to assess the development and implementation of agency reforms. To meet this objective, GAO reviewed its prior work and leading practices on organizational transformations; collaboration; government streamlining and efficiency; fragmentation, overlap, and duplication; high-risk; and on other agency longstanding management challenges. GAO also identified subject matter specialists knowledgeable about issues related to government reform and strategic human capital management who reviewed and commented on GAO's draft questions. GAO is not making recommendations to OMB in this report. OMB staff provided technical comments, which we incorporated as appropriate.
GAO-18-527
Background When DCTAG was created, there was no income eligibility requirement. However, in 2007, federal law limited eligibility to students from families with annual taxable incomes less than $1,000,000. In 2015, federal law further limited eligibility to students from families with annual taxable incomes less than $750,000; the law provided that this limit was to be subsequently adjusted for inflation as measured by the percentage increase, if any, in the Consumer Price Index for All Urban Consumers. For example, in academic year 2018, eligibility was limited to students from families with annual taxable incomes less than $762,000 (see textbox for selected eligibility requirements).

Selected DCTAG eligibility requirements:
- Complete the Free Application for Federal Student Aid.
- Generally begin a course of study within 3 years of graduating high school or obtaining a General Equivalency Diploma.
- Meet the institution’s requirements for Satisfactory Academic Progress.
- Attend a participating institution, such as Historically Black Colleges and Universities (HBCU) nationwide and other participating private nonprofit institutions in the D.C. metropolitan area.

Populations Eligible for and Enrolled in DCTAG Remained Relatively Stable as Amounts Awarded Increased and Recipients Graduated at Higher Rates than Selected National Comparison Groups We identified the following trends in eligibility for and enrollment in DCTAG and graduation rates for recipients: DCTAG’s potentially eligible population. ACS data for calendar years 2007−2016 indicate that the population of high school students with incomes within DCTAG’s eligibility requirements has remained relatively stable. Over this time frame, about 25,000 students in D.C. were enrolled in high school each year, and about 90 percent of D.C. households had annual incomes less than $200,000. Additional households with annual incomes of $200,000 and above were also likely eligible for DCTAG based on income. Enrollment in DCTAG. DCTAG program data indicate that the number of DCTAG recipients remained relatively stable over the last decade. 
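The statutory inflation adjustment described above is a simple percentage calculation. The sketch below is illustrative only: the CPI-U index values are assumed for demonstration, and only the $750,000 base limit and the roughly $762,000 result for academic year 2018 come from the text.

```python
def adjusted_income_limit(base_limit, cpi_base, cpi_current):
    """Raise the base limit by the percentage increase, if any, in the
    Consumer Price Index for All Urban Consumers (CPI-U)."""
    increase = max(cpi_current - cpi_base, 0.0) / cpi_base
    return base_limit * (1 + increase)

# With assumed CPI-U values implying a roughly 1.6 percent rise, the
# $750,000 ceiling moves to about the $762,000 limit cited for
# academic year 2018.
limit = adjusted_income_limit(750_000, 237.0, 240.8)
```

The statute's "if any" clause means a falling index leaves the limit unchanged rather than lowering it, which the `max(..., 0.0)` term reflects.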
DCTAG provided awards to an average of about 4,750 recipients annually over academic years 2007−2016 (see fig. 1). While enrollment in DCTAG peaked in academic year 2012, the number of DCTAG recipients in academic year 2016, the last year in our period of review, was similar to the number of recipients in academic year 2007, the first year in our period of review. Enrollment in DCTAG by type of high school attended. DCTAG program data indicate the majority of recipients over academic years 2007–2016 graduated from D.C.’s public high school system—both traditional public schools and public charter schools. D.C.’s traditional public schools include six selective schools, or magnet schools, that limit admission to students that meet certain criteria or eligibility requirements. For example, in academic year 2016, more than 70 percent of DCTAG recipients graduated from D.C.’s public high school system (see fig. 2). Many DCTAG recipients have also graduated from private schools or schools outside D.C., were home schooled, or attained their General Equivalency Diploma. For academic years 2007−2016, between about 30 and 40 percent of DCTAG recipients came from high schools or programs outside the D.C. public school system. Enrollment in DCTAG by taxable household income. Although in 2007 federal law limited eligibility for DCTAG to students from families with annual taxable incomes less than $1,000,000, DCTAG enrollment data show the program made awards to students from families with a wide range of household taxable incomes in academic years 2009−2016. At the same time, enrollment data indicate the program’s particular support for students from middle and lower income families. Nearly 60 percent of recipients over this time frame came from families with annual household taxable incomes of $50,000 or less (see fig. 3). Enrollment in DCTAG by Ward. DCTAG program data indicate that for academic years 2007−2016, about 50 percent of recipients came from the three D.C. 
wards with the lowest median household incomes, according to American Community Survey estimates (see fig. 4). Enrollment in DCTAG by attendance at 4-year and 2-year institutions. DCTAG program data show that for academic years 2007−2016, about 90 percent of DCTAG recipients attended 4-year institutions (see fig. 5). To counter the downward trend in enrollment at 2-year institutions that began in academic year 2013, OSSE officials told us they made programmatic changes to DCTAG for academic year 2018. Specifically, OSSE officials told us they determined out-of-state tuition at 2-year public institutions attended by DCTAG recipients exceeded in-state tuition by an average of $4,500 per year. However, the maximum annual award for recipients attending these institutions was only $2,500. For academic year 2018, OSSE officials said they increased the maximum annual award to attend 2-year public institutions to $10,000 to close this gap. Enrollment in DCTAG by amount awarded. For academic years 2007−2016, the percentage of recipients receiving DCTAG’s maximum annual awards increased from 40 percent to 62 percent (see fig. 6). OSSE officials linked an increase in the percentage of recipients receiving maximum awards to rising tuition at colleges and universities over this period. We analyzed data from IPEDS on average tuition and required fees at 4-year public institutions and our analysis confirmed that the average gap between out-of-state and in-state tuition exceeded DCTAG’s $10,000 maximum annual award starting in academic year 2015. DCTAG graduation rates. College graduation rates are an important measure of performance for DCTAG. OSSE officials told us they maintain a program goal of helping recipients choose schools from which they are likely to graduate. For academic years 2012−2015, 6-year college graduation rates for DCTAG recipients were lower than those for students nationwide. 
However, OSSE officials reported that rates for recipients compare favorably to rates for national and regional groups of students with characteristics similar to those of DCTAG recipients. Our analysis confirmed that in academic year 2015, about 72 percent of DCTAG recipients were African-American and the DCTAG graduation rate was about 10 percentage points higher than for African-Americans nationwide. Similarly, in academic year 2015, nearly 40 percent of DCTAG recipients attended Historically Black Colleges and Universities (HBCU) and the DCTAG graduation rate was about 15 percentage points higher than for the nationwide population of students at these schools (see fig. 7). Additionally, OSSE officials estimated that more than 65 percent of DCTAG recipients were eligible for Pell Grants in academic year 2016. The National Center for Education Statistics recently started reporting graduation rates for Pell Grant recipients, beginning with the cohort of recipients that should have graduated by academic year 2016. Although not directly aligned, the academic year 2016 graduation rate for Pell Grant recipients nationwide was 48 percent—similar to the academic year 2015 graduation rate for DCTAG recipients. DCTAG and Its Partners Help Recipients Prepare for College, Complete Applications for Financial Aid, and Stay on Track to Graduate DCTAG partners with other entities to offer support services intended to help D.C. students prepare for college, apply for financial aid, and stay on track to graduate college. These partners include other entities within OSSE, as well as partners in the broader community such as public and private high school officials and college access providers. DCTAG provides some support services directly to students, such as individual counseling on how to complete a DCTAG application (see fig. 8). 
An OSSE official told us that DCTAG counselors instruct applicants and renewing recipients on tasks such as how to obtain required supporting documents to verify their residency in D.C. Additionally, to keep recipients on track to graduate, DCTAG emails recipients a quarterly newsletter with reminders to reapply for DCTAG and federal student aid so that they do not disrupt their studies by losing financial assistance. OSSE officials also said that DCTAG expands the reach of its support services by partnering with other entities within OSSE and in the community. For example, DCTAG works with OSSE’s Office of College and Career Readiness, whose mission is to increase D.C. public school students’ access to college. Through this collaboration, DCTAG helps eligible students prepare for higher education, such as through assistance to public schools to offer college entrance exams at no cost to students. Similarly, by partnering with college access providers, DCTAG supplements the support services it offers to help students stay on track to graduate. For example, DCTAG partners with the D.C. College Access Program, a privately funded scholarship program that offers support services for D.C. students in college. One of their services includes using scholarship recipients to mentor incoming D.C. students. OSSE’s Reporting to Key Stakeholders on DCTAG Does Not Include Program Performance Information We found that although OSSE communicates DCTAG’s program data and activities to internal stakeholders, Congress, and the public in various formats, these reporting methods do not include the program’s four goals (see textbox), relate performance information to these program goals, or describe progress toward achieving them (see table 1). For example, OSSE’s 2017 annual report to Congress on DCTAG did not include DCTAG’s four program goals, nor did OSSE relate information about the performance of the program to those goals. 
Instead, the 2017 annual report consisted of descriptive statistics that were presented without explanation or sufficient context to allow readers to understand the significance of what was being reported. Specifically, this information was unrelated—quantitatively or qualitatively—to DCTAG’s program goals of ensuring D.C. residents are aware of and apply to DCTAG, or of helping DCTAG students make smarter college choices, which OSSE officials told us includes helping students select schools where they are more likely to graduate. As a result, it is unclear how to interpret the information presented in these reports and whether reported results indicate positive or negative program performance. Federal standards for internal control state that program managers should communicate necessary quality information so both internal and external parties can help the program achieve its objectives. We have previously reported that annual reports are essential for managers of federal programs to communicate to decision makers the progress an agency has made toward achieving its goals during a given year and, in cases where goals are not met, identify opportunities for improvement or whether goals need to be adjusted. In addition, our prior work found that managers of these programs can increase the value of their reports to congressional decision makers and the public by relating annual performance information to the agency’s strategic goals and mission. Furthermore, we reported that performance measurement does not require establishing a causal link between program activities and program outcomes, but rather emphasizes the ongoing monitoring and reporting of program accomplishments, particularly toward pre-established goals. 
OSSE officials agreed on the importance of developing an annual report relating performance to program goals for the DCTAG program and concurred with our finding that they had not communicated DCTAG’s performance information, such as progress toward program goals, in a single annual report. They explained that developing performance measures is challenging. For example, they said DCTAG recipients have access to multiple support programs, which creates difficulties in establishing causal links between a program and the desired outcome. OSSE officials also stated that many DCTAG initiatives are new and, as a result, complete data on those initiatives are not yet available. Although we recognize that developing an annual report could be challenging, our prior work has found performance measurement guidelines would not require program managers to establish causal links as part of ongoing performance monitoring and reporting of progress toward program goals. Unless DCTAG’s stakeholders have access to an annual report that relates performance information to the program’s goals, they may be limited in their ability to judge the significance of what is being reported, determine whether the agency is making progress toward achieving its goals, or make informed program management and funding decisions. The Design of Selected Scholarship Programs Reflects Unique State and Local Needs Each of the three other selected scholarship programs we reviewed was created to meet unique state or local needs. Boston Tuition-Free Community College Plan. Created to make college more affordable for the city’s low-income students. Kalamazoo Promise. Created to promote the economic and social well-being of the community by expanding college access with full-tuition scholarships. Washington State Opportunity Scholarship. 
Created to address shortfalls in the state’s Science, Technology, Engineering, and Mathematics (STEM) and health care workforce and increase educational opportunities for low-income and middle-income students. Because each program was designed to address a unique state or local need, the programs differ with regard to eligibility, funding, recipient supports, and outcome measures. (For additional information on these three programs see appendix I.) Eligibility. Each of the three selected scholarship programs established eligibility criteria, such as income requirements, residency requirements, and grade point average (GPA) requirements, among others, that reflect program objectives. For example, to ensure that the Boston program serves the intended low-income population, the program requires students to be eligible for Pell Grants to receive funding. Funding. While the selected scholarship programs have dedicated funding streams, their funding sources reflect the origins of each program. For example, Boston’s program was initiated by the city’s mayor and is funded through a public charitable trust from fees for large-scale commercial building projects, while the Kalamazoo Promise was initiated by a group of anonymous donors who have funded the program in perpetuity, according to program officials. Alternatively, the Washington program was initiated through cooperation between the state government and private sector companies and is funded by private donations that are matched by state funds up to an annual maximum of $50 million. Recipient supports. Each of the selected scholarship programs has developed supports such as coaching and peer mentoring to help recipients transition to college and stay on track to graduate. For example, the Kalamazoo Promise partners with and provides funding to two local colleges to create counseling, coaching, or peer mentoring services for scholarship recipients, according to program managers. Outcome measures. 
The selected scholarship programs have developed outcome measures to better understand the programs’ impact, such as whether students stay on track to graduate or find employment post-graduation. For example, program managers with the Washington program said they initiated a post-graduation survey in 2015 to better understand the employment status of graduates in STEM and health care fields, their job location, and annual salary. Conclusions Steady enrollment in DCTAG provides an encouraging signal that the program may be meeting the purpose set forth in federal law to expand access to higher education opportunities for D.C. students. However, without annual reports that relate DCTAG’s performance information to the program’s goals, it is difficult to assess the impact of the program and its support services. The information OSSE currently makes available about DCTAG does not provide the context needed for the program’s internal stakeholders, Congress, or the public to determine whether the program is meeting its goals or if any changes may be necessary. Recommendation for Executive Action OSSE should issue an annual report on DCTAG that relates information about the program’s performance to the program’s goals. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to the Mayor of the District of Columbia for review and comment. Comments from the Mayor are reproduced in appendix II. In response to our recommendation, the Mayor stated that OSSE plans to expand DCTAG’s current annual reports to Congress with direct links to DCTAG’s annual strategic performance goals and the reports will combine data points to illustrate the program’s performance. The Mayor also raised a concern about the title of the draft report, stating that it implied OSSE is not meeting legislative requirements. We have modified the title and text of the report to avoid this implication. 
We are sending copies of this report to the appropriate congressional committees, the District of Columbia Office of the State Superintendent of Education, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. Appendix I: Other State and Local Scholarship Programs Based on interviews with officials of three selected state and local scholarship programs and a review of program documents, we present a selection of information to provide additional context on these programs. They include the Boston Tuition-Free Community College Plan, the Kalamazoo Promise, and the Washington State Opportunity Scholarship. The following tables include information on these scholarship programs’ eligibility requirements, funding sources, recipient supports, and annual reports and performance measures. Table 2 presents a selection of eligibility requirements for the Boston Tuition-Free Community College Plan, the Kalamazoo Promise, and the Washington State Opportunity Scholarship. Table 3 presents a summary of the three selected scholarship programs’ funding sources, as well as how students may use those funds. Table 4 presents a summary of the supports developed by the three selected scholarship programs to support students, keep them on track to graduate from college, and help them begin their careers. Table 5 presents a summary of the annual reports and selected performance measures developed by the three selected scholarship programs. Appendix II: Comments from the Mayor of the District of Columbia Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Melissa Emrey-Arras, (617) 788-0534 or [email protected]. Staff Acknowledgments In addition to the contact named above, Bill J. Keller (Assistant Director), Tom Moscovitch (Analyst-in-Charge), and Michael C. Duane made significant contributions. Also contributing to this report were James Bennett, Deborah K. Bland, Sheila R. McCoy, Benjamin A. 
Sinoff, Rachel R. Stoiko, and Kate van Gelder.
Why GAO Did This Study Congress funds DCTAG through an annual appropriation, which was $40 million in fiscal year 2018. DCTAG provides D.C. residents up to $10,000 per year to attend college. The Consolidated Appropriations Act, 2017, included a provision for GAO to review DCTAG. This report examines, among other things, the characteristics of DCTAG recipients and steps taken by the program to support recipients, as well as the extent to which OSSE reports DCTAG's performance to internal and external stakeholders. GAO assessed the most recent data available on DCTAG, covering academic years 2007–2016, as well as data on college graduation, tuition, and fees from the Department of Education's Integrated Postsecondary Education Data System for academic years 2007–2016, and data on enrollment in high schools and median household income in D.C. from the U.S. Census Bureau's American Community Survey for 2007–2016; interviewed representatives of DCTAG and the entities it partners with to support recipients; and reviewed relevant laws, the applicability of standards for internal control, and guidance on performance management. What GAO Found The federally funded District of Columbia Tuition Assistance Grant (DCTAG) program was created in 1999 to give college-bound District of Columbia (D.C.) residents greater choices among institutions of higher education. Since its creation, the DCTAG program has awarded over $440 million to more than 26,000 residents to defray costs charged to out-of-state residents at some of the nation's public colleges and universities. While the program serves students from families with a wide range of household incomes, about half the students receiving a DCTAG award in academic years 2007–2016 came from the three D.C. wards with the lowest household incomes, as the figure below illustrates. 
DCTAG coordinates with public and private partners in the community to help students prepare for college, complete financial aid applications, and stay on track to graduate college. Although the Office of the State Superintendent of Education (OSSE), which manages DCTAG on behalf of the Mayor of the District of Columbia, issues various annual reports, these reports do not relate program performance to the program's four goals. One of these goals is to help D.C. students make smarter college choices. OSSE officials stated that they regularly communicate information about DCTAG data and activities internally and externally. However, these efforts do not provide the context necessary for program managers, Congress, or the public to understand the program's goals, nor determine whether DCTAG is making progress toward meeting them. Standards for internal control state that program managers should communicate information that internal and external stakeholders need to help the program achieve its objectives. Absent an annual report relating performance to goals, DCTAG's stakeholders will be limited in their ability to assess the program's performance or identify opportunities to improve it. What GAO Recommends GAO recommends OSSE issue annual reports relating DCTAG's performance to program goals. In response to the recommendation, the Mayor stated that OSSE will expand annual reporting to include direct linkages and combine data points to better illustrate the program's performance.
GAO-18-242
Background The Arms Export Control Act of 1976 gives the President authority to sell defense articles and services to eligible foreign governments and international organizations. This Act is the basis of the FMS program, which the U.S. government considers to be an integral component of U.S. national security and foreign policy. Under the FMS program, foreign governments pay the U.S. government to administer the acquisition of defense articles and services on their behalf. Typically, defense equipment made available for transfer or sale to foreign governments falls under an acquisition program managed by one or more of the U.S. military departments. Generally, this equipment has gone through operational testing and has entered or is entering full-rate production. Multiple federal entities have a role in the FMS program, including DOD and the Department of State (State). Within DOD, DSCA and the military departments play an extensive role in administering the program and managing FMS acquisitions, respectively. DSCA carries out key administrative functions, such as coordinating the development and execution of sales through the FMS program and conducting negotiations with foreign governments. The military departments are involved early in the development of the potential sale when the foreign government identifies the defense equipment it needs to buy to achieve a desired capability. Congressional oversight of the FMS program has resulted in amendments to the Arms Export Control Act and other relevant legislation to improve the FMS program. The first phase of the FMS process generally involves a foreign government submitting a request, usually to State or DOD, to express interest in purchasing defense articles or services. Depending on the size and complexity of the items being purchased and the foreign government’s available budget, the process to finalize the terms of a sale can take from a few days to years. 
In response to concerns that the FMS process is slow and burdensome, Congress has increased oversight of the program and recently passed legislation intended to improve the timeliness of the FMS process. For example, in the National Defense Authorization Act for Fiscal Year 2017, Congress required DOD to revise its acquisition regulations to place new requirements on FMS contracting and to establish a pilot program to seek ways to accelerate contracting and pricing processes for FMS. According to DSCA officials, foreign governments interested in having nonrecurring costs waived must request a waiver before DOD develops and sends the sales agreement to the foreign government for acceptance and signature. The sales agreement—formally referred to as a letter of offer and acceptance—details the specific items, quantities, and total estimated costs, among other things. The sales agreement, once signed, is commonly referred to as a FMS case. For a given FMS case, DSCA’s decision regarding whether or not to waive nonrecurring costs would also be articulated in the agreement. Consistent with the Arms Export Control Act and DOD policy, foreign governments may request that nonrecurring costs be waived based on one of three justifications: To achieve equipment standardization with NATO and select allies (Australia, Israel, Japan, Jordan, New Zealand, and the Republic of Korea). In addition to NATO itself, there are currently 34 countries that qualify for the equipment standardization waiver justification, as shown in figure 1. To avoid a potential loss of sale that could likely result from imposing nonrecurring costs. To obtain cost savings through economies of scale on major defense equipment also procured for the U.S. military that substantially offsets the revenue that will be lost if the nonrecurring costs are waived. 
The Code of Federal Regulations states that all waiver requests should originate with the foreign government and must specify the reasons or justifications for the requests. A foreign government generally initiates the process by submitting a written request to waive the nonrecurring costs for the major defense equipment it plans to purchase. For example, a NATO member country planning to purchase P-8A patrol aircraft would submit a request to waive nonrecurring costs for that equipment to the Navy, stating that the purchase would promote equipment standardization. The letter of offer and acceptance the military department sends to the foreign government states the estimated costs and the quantity of major defense equipment for which nonrecurring costs will be waived. Once the letter of offer and acceptance has been signed by the foreign government, any increase in the quantity of items requires that the foreign government submit a request to waive nonrecurring costs for the additional equipment. If equipment quantities are reduced after the waiver is approved, the total amount of nonrecurring costs waived will be less than the value at the time the waiver was approved. For example, in 2013, DSCA approved a waiver for up to $799 million in nonrecurring costs for 768 Patriot missiles. However, the foreign government reduced its planned procurement to 248 missiles. As of December 2017, DSCA estimated that the amount of nonrecurring costs that will be waived decreased to $258 million—about two-thirds less than was originally approved. Congressional and DOD Actions Regarding Nonrecurring Costs The laws, regulations, and policies regarding nonrecurring costs have been revised several times over the past 50 years. DOD has had a process in place to recover nonrecurring research and development and production costs on sales of major defense equipment to foreign governments and international organizations since 1967. 
The requirement to recover a proportionate amount of these costs was codified in the Arms Export Control Act of 1976, which authorizes arms sales in furtherance of U.S. security objectives. Significant legal, regulatory, and policy changes regarding the justifications that can be used to waive nonrecurring costs are summarized in figure 2. Determining Nonrecurring Costs for Sales of Major Defense Equipment under the FMS Program The Arms Export Control Act requires recovery of a proportionate amount of nonrecurring research, development, and production costs for foreign sales of major defense equipment. For example, in the F-35 Joint Strike Fighter program, costs for production testing and tooling equipment are considered nonrecurring costs. The military departments, as delegated under the Code of Federal Regulations, are responsible for determining the per-unit nonrecurring cost for each type of major defense equipment. In practice, DOD components submit requests to establish nonrecurring cost charges to DSCA; once DSCA approves them, the charges are made publicly available on the agency's website. Determining what nonrecurring costs will be charged entails the following steps: 1. Charges are calculated by dividing total program nonrecurring costs by the total number of planned production units. For example, the Air Force determined that the nonrecurring costs for a sensor program were $660 million and estimated that a total of 250,000 units would be procured by DOD and from sales under the FMS program. Based on these estimates, the Air Force calculated a nonrecurring cost charge of $2,640 per unit. 2. For each individual FMS case that includes major defense equipment, the military department calculates the amount of nonrecurring costs for the sale by multiplying the quantity of items by the per-unit nonrecurring cost charge.
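The two steps above amount to a simple pro-rating calculation. The sketch below mirrors the Air Force sensor figures from this report; the function names are illustrative and not part of any DOD system.

```python
def per_unit_nrc_charge(total_nrc_costs, total_planned_units):
    """Step 1: pro-rate total program nonrecurring costs across all planned units."""
    return total_nrc_costs / total_planned_units

def case_nrc_charge(quantity, per_unit_charge):
    """Step 2: nonrecurring cost charge for an individual FMS case."""
    return quantity * per_unit_charge

# Air Force sensor example from the report: $660 million in nonrecurring
# costs spread across 250,000 planned units yields $2,640 per unit.
per_unit = per_unit_nrc_charge(660_000_000, 250_000)   # $2,640 per unit
case_charge = case_nrc_charge(10, per_unit)            # $26,400 for a 10-sensor sale
```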
For instance, in the example described above, if a foreign government wants to purchase 10 sensors, a nonrecurring cost charge of $26,400 would be added as part of the sale. Roles and Responsibilities of DOD Offices in Reviewing Nonrecurring Cost Waiver Requests DOD policy requires that waivers also be reviewed on a case-by-case basis and tied to a specific sale that defines the quantities of each item to be procured. This policy prohibits blanket waivers, those that would waive nonrecurring costs on all sales to a particular country or all sales pertaining to specific equipment. For example, DSCA cannot grant a blanket waiver for the Patriot missile that would automatically waive nonrecurring costs on all subsequent sales of that missile. Within DOD, the Director of DSCA has been delegated authority to waive nonrecurring costs for sales of major defense equipment to foreign governments and international organizations. While DSCA has primary responsibility for determining whether waiver requests meet all legal and regulatory criteria, we observed that multiple DOD offices are involved in the waiver review process, as illustrated in figure 3. In practice, the military departments receive waiver requests from a foreign government or international organization and ensure that all required information is submitted, including the equipment type and quantity, as well as the justification for the waiver. Based on this information, the military department determines whether or not to endorse the waiver request. The military department then compiles a package of relevant documentation, including calculation of the estimated total amount of nonrecurring costs to be waived, the original waiver request, and a memo documenting its decision regarding the waiver request. In the course of our work, we found that, within each military department, the offices involved in the waiver review process include: The U.S. 
Army Security Assistance Command, which initially reviews the waiver request, and the Office of the Deputy Assistant Secretary of the Army for Defense Exports and Cooperation, which provides the Army's decision whether to endorse the waiver request; The Navy International Program Office, which reviews the waiver request and provides the Navy's decision whether to endorse the waiver request; and The Air Force Security Assistance and Cooperation Directorate, which initially reviews the waiver request, and the Office of the Secretary of the Air Force, International Affairs, which provides the Air Force's decision whether to endorse the waiver request. In addition to DSCA, all waiver requests are reviewed by OUSD (AT&L) and OUSD (Comptroller), while OUSD for Policy reviews only waivers that cite the loss of sale justification. DOD Approved Nonrecurring Cost Waivers Valued at Billions of Dollars over the Past 6 Years From fiscal years 2012 through 2017, DOD approved nonrecurring cost waivers valued at nearly $16 billion that it might otherwise have collected from foreign governments as part of its major defense equipment sales. Over this period, DSCA approved 810 of the 813 waiver requests it received, resulting in an approval rate of more than 99 percent. However, the dollar value of the approved waivers does not, in all instances, reflect the total amount that will ultimately be waived once sales are finalized. Rather, it reflects a ceiling for the nonrecurring costs that DOD could waive. During this time frame, DSCA collected $106 million in nonrecurring costs; however, this amount may be associated with FMS cases prior to fiscal year 2012. Due to data limitations, we were not able to determine the exact amount actually waived once sales agreements were finalized.
DOD Approved Nearly All Requested Nonrecurring Cost Waivers From fiscal years 2012 through 2017, DOD approved 99 percent of the 813 nonrecurring cost waiver requests for major defense equipment sold through the FMS program. In our analysis of DSCA’s data on waivers requested for the 6-year period, we found that: DOD approved all 471 waiver requests that cited equipment standardization submitted by 25 countries and NATO, totaling approximately $6.7 billion. DOD approved all but 2 of the 340 waiver requests that cited loss of sale submitted by 34 countries, totaling almost $9.2 billion. DOD also approved a waiver of $460,000 for one of the two cost savings waiver requests it received. In total, these approved nonrecurring cost waivers amounted to nearly $16 billion over the past 6 years. The value of approved waivers increased more in fiscal year 2017 than in prior years, as shown in figure 4. The increase is primarily due to 2 approved waivers totaling nearly $3.5 billion for sales of missiles and related support systems. From fiscal years 2012 through 2017, approximately 93 percent of nonrecurring cost waivers were approved for countries in Europe, the Middle East, and the Pacific region. Based on our review of data obtained from DSCA, we found that countries eligible for equipment standardization waivers always cited this justification in their waiver requests, with one exception. We found that only eligible countries received a waiver for equipment standardization. All other countries that did not qualify for equipment standardization submitted waiver requests for loss of sale, except for 2 waiver requests that cited cost savings to the U.S. government. As shown in figure 5, all countries that utilized the equipment standardization justification are located in Europe and the Pacific region, and nearly all the $9.2 billion approved loss of sale waivers were for countries in the Middle East. 
Various Factors Limit Insight about the Extent of Total Nonrecurring Costs DOD Waived and Collected Currently, DSCA uses the Defense Security Assistance Management System (DSAMS) to maintain records on FMS case initiation and execution, but an official stated the system was not designed to track nonrecurring cost data, such as data on waivers requested or actual costs waived, for each individual FMS case. DSCA uses separate methods for tracking data on approved waivers and the equipment that was purchased as part of an individual FMS case. DSCA officials stated that to calculate the amount of nonrecurring costs actually waived for each approved waiver, they manually review DSAMS records for individual FMS cases to identify the planned quantity of items to be purchased. While DSCA provided data on actual costs waived, we were unable to independently verify these calculations and, as a result, are unable to report on the actual costs waived for waivers that were approved for fiscal years 2012 through 2017. Other complexities make it difficult to conclusively determine how much has been waived, including: Approved waivers do not have expiration dates but are tied to a specific sale. DSCA officials stated that waivers are generally used within 5 years, which coincides with the expiration date of the sales agreement. The lag between when a waiver is approved and when equipment quantities are finalized can span years. According to DSCA officials, nearly all nonrecurring costs are waived rather than collected. Officials also noted that DSCA has collected approximately $106 million in nonrecurring costs for fiscal years 2012 through 2017; however, this amount may include costs collected from FMS cases that were finalized prior to our time frame.
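Because an approved waiver functions as a ceiling rather than a fixed amount, the costs actually waived depend on the quantities ultimately purchased. A minimal sketch of that relationship, using the Patriot missile figures cited earlier in this report; the helper name and the linear pro-rating are illustrative assumptions, since DSCA reviews each case manually:

```python
def estimated_waived(approved_ceiling, approved_qty, final_qty):
    """Estimate nonrecurring costs actually waived after a quantity change.

    The approved waiver caps the waived amount; reduced quantities
    pro-rate the estimate downward. Linear pro-rating is an assumption
    for illustration, not DSCA's documented method.
    """
    per_unit = approved_ceiling / approved_qty
    return min(approved_ceiling, final_qty * per_unit)

# Patriot example from the report: $799 million approved for 768 missiles,
# with the planned procurement later reduced to 248 missiles.
waived = estimated_waived(799_000_000, 768, 248)  # roughly $258 million
```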
DSCA officials could not confirm whether the 813 waiver requests that they received during fiscal years 2012 through 2017 represented the universe of all sales eligible for waivers under the FMS program, as DSAMS does not consistently track whether an individual FMS case includes major defense equipment and therefore would be eligible for a waiver or subject to the collection of nonrecurring costs. According to DSCA officials, foreign governments rarely forgo submitting waiver requests; submitting them is considered standard practice. As a result, with few exceptions, DSCA officials said that DOD waives nearly all nonrecurring costs associated with eligible sales in the FMS program. We have previously reported that DSCA has efforts underway to develop a new system, the Security Cooperation Enterprise Solution, which is expected to address longstanding information management challenges. The new system is being developed with the goal of aggregating data from multiple information management systems in order to provide increased insight into the acquisition process, among other things. During our current review, DSCA officials noted that the new system will include requirements to incorporate nonrecurring cost data, but it is unclear whether the system will automate reporting of nonrecurring costs actually waived. We previously reported that the deployment schedule for the new system has been delayed and is being revised. DSCA officials were uncertain of a new deployment date, as system requirements are being revalidated, an effort expected to continue through 2020. DOD Considers Foreign Policy, National Security, and Economic Factors When Reviewing Waiver Requests Our review found that DOD considers a variety of factors when reviewing nonrecurring cost waiver requests, but, ultimately, the department wants to ensure that sales are not jeopardized.
Individually and collectively, these sales complement various foreign policy, national security, and economic objectives. The ability to waive nonrecurring costs helps keep FMS competitive and ensures sales are not jeopardized, according to DSCA and other DOD officials. While there is a decades-old requirement to recover the U.S. government's investment in the nonrecurring costs of major defense equipment it develops and later sells to foreign governments, DOD is authorized to waive collection of these costs, which it implements through DSCA. Under the Arms Export Control Act and its implementing regulations, DSCA has considerable latitude to approve all waivers that meet the criteria for each justification. DSCA's approval of nearly all waivers is in accordance with statutory requirements. When reviewing nonrecurring cost waiver requests, DSCA, consistent with DOD guidance, considers the legal requirements for the cited justification, as well as broader, interrelated foreign policy, national security, and economic objectives. DOD offices that play a role in reviewing and deciding on waiver requests may also consider these factors. Foreign policy and national security benefits: DSCA and other DOD officials weigh the effect of equipment sales under the FMS program on foreign policy and national security objectives. DSCA officials stated that avoiding a potential lost sale is paramount and outweighs the benefits of collecting nonrecurring costs, which may be only a small fraction of the overall sale. DSCA officials added that if a waiver request is not approved, U.S. relations with the foreign government could become strained or otherwise be negatively affected. DSCA officials indicated that one of the goals of the FMS program is to facilitate building and maintaining international relationships.
Further, officials added that nonrecurring cost waivers help achieve that goal by making the FMS program competitive. The precedent for waiving nonrecurring costs has existed for decades, and foreign governments know to request waivers and expect they will be approved, according to DOD officials. In addition, Air Force officials stated that foreign governments seek to negotiate the price when purchasing U.S. defense equipment. DSCA officials stated that foreign governments view the nonrecurring cost waivers as a way to realize some form of cost savings when purchasing defense equipment under the FMS program. DSCA officials stated that, regardless of the amount, waiving nonrecurring costs can be viewed as significant because it gives the appearance of the foreign government achieving some cost savings. The U.S. National Military Strategy prioritizes increasing U.S. interoperability with coalition partners. Sales of defense equipment to U.S. allies are a means to achieve these interoperability goals. Equipment standardization with NATO member countries and other select allies is one of the available justifications for which a waiver can be requested and approved. Interoperability helps strengthen relationships with allies and advances U.S. and allied security interests in these regions. Navy officials stated that increasing the capabilities available to U.S. allies through FMS reduces the need to locate U.S. military forces and equipment in proximity to these allies. Economic benefits: Sales through the FMS program can result in cost savings for the U.S. government, which is also one of the permissible justifications in the Arms Export Control Act for foregoing collection of nonrecurring costs. Both the U.S. and foreign governments can benefit from economies of scale where increasing the volume of defense equipment purchased decreases the cost per unit. 
Navy officials also explained that they always consider the possibility of cost savings in sales through the FMS program, and added that they coordinate their own procurement plans with FMS purchases to achieve cost savings. However, DSCA officials stated that the efforts to obtain required data and conduct analysis to quantify the amount of cost savings are extensive. As a result, this analysis is generally only performed when required to justify cost savings waiver requests. In addition to the potential for lower unit prices, the FMS program helps to sustain the U.S. defense industrial base and allows it to remain globally competitive. This level of competition has increased as NATO allies also sell their military equipment. Navy officials stated there are always competing items, since foreign governments can purchase more equipment with less capability at a lower price from another country, which can expand the foreign government’s buying power relative to what it can afford when buying from the United States. In addition to competing offers, budget constraints may pose a challenge for some foreign governments seeking to purchase U.S. defense equipment and the added expense of paying nonrecurring costs could threaten a potential sale. DOD officials stated that risking a lost sale if a waiver is not approved could have potentially negative effects for the U.S. companies that manufacture defense equipment sold under the FMS program. Specifically, DOD officials indicated that if the sale is lost, U.S. jobs and economic viability could be affected, particularly because some FMS cases can be valued at billions of dollars in equipment purchases. DOD Could Take Steps to Enhance Efficiency of Waiver Review Process DOD’s waiver review process is, at times, inefficient, includes repetitive steps, and does not account for the value of the waiver request. 
Waiver justifications are broadly defined in the Arms Export Control Act, which— as delegated—gives DSCA flexibility to determine how to review requests and grant waivers. DSCA has implemented a review process that involves up to 12 offices including the military departments, DSCA, and various OUSD offices. In some cases, these offices are reviewing waivers to verify similar information, at times leading to repetitive reviews. The same process is applied despite the amount of nonrecurring costs requested to be waived, complexity of the case, or ease (or difficulty) in assessing the validity of the justification cited in the waiver request. DOD has taken steps to reduce the time for a few offices to review waivers, but we found there are opportunities for additional efficiencies to be realized. For 23 of the 24 waiver requests we reviewed, on average, the military departments determined whether to endorse requested waivers around 270 days after they were submitted by the foreign government. DSCA then, on average, took less than 60 days to decide whether to approve the waiver, which is consistent with its policy to respond to waiver requests within 60 days of receipt. There is no policy regarding the time frame for military departments to review a waiver request, as military officials stated the review time can vary depending on whether additional information must be obtained from the foreign government. However, recognizing an opportunity to streamline the review process, DSCA has worked with the Air Force to identify one office that did not add value, reducing the Air Force review process from three offices to two. Officials stated this action decreased the amount of time required for review. DSCA officials also stated that they have improved their review times by using digital signatures when concurring on waiver decisions. 
Our prior work has indicated that concerns have been raised about the timeliness of the FMS program, and a DSCA official stated that these efforts were part of a DSCA initiative to increase efficiencies in the overall FMS process. Further, we found repetitive steps in the process for assessing potential U.S. foreign policy and national security benefits from a sale where equipment standardization is cited as the justification for the requested waiver. These benefits are already assessed for certain FMS cases by an in-country team composed of officials from State and the relevant DOD combatant command that manages military operations in designated areas of responsibility. Once the waiver is requested by the foreign government, DSCA and OUSD (AT&L) officials review the waiver request to assess these benefits, even though military officials stated an assessment has already been conducted to determine how the proposed sale will advance U.S. national security objectives within the region. In addition, DSCA officials stated that since foreign governments are procuring equipment also used by the U.S. military, by default, purchasing the equipment would result in standardization. After a potential sale has received a favorable country team assessment, the only additional requirement is to determine whether the customer is NATO or among the 34 countries eligible for the standardization waiver. Yet while this requirement is easily confirmed, the waiver request may still be reviewed by as many as 11 offices within the military departments and DSCA, as well as at the OUSD level. We also found that DSCA did not adjust its review process based on the value of the nonrecurring costs to be waived. In one case, for a cost savings waiver request with estimated nonrecurring costs just under $12,000, the Air Force took 112 days to coordinate its review and endorsement of the waiver before submitting it to DSCA.
DSCA then took 47 days to coordinate input from various OUSD offices to reach a final decision on the requested waiver. Similarly, in another instance where the value of the requested loss of sale waiver was substantially higher—$337 million—it took the Army 160 days to coordinate its review before passing it on to DSCA, which took 29 days to finalize its decision. Other than OUSD Policy’s review of the loss of sale waiver, both of these waiver requests required the same review process despite the substantial difference in costs. For waiver requests that cite the loss of sale justification, DSCA and military department officials told us that it is difficult to prove or disprove a foreign government’s claim that not waiving nonrecurring costs will likely lead to a loss of sale. DOD guidance states these waiver requests should include information on competing items and their cost, if available; however, the guidance does not specify the type of information or level of detail that should be provided. DSCA officials stated that they interpret this guidance to mean this information is optional and therefore not required. According to DOD officials, a foreign government’s budget constraints could limit its ability to pay nonrecurring costs. Of the 18 loss of sale waiver requests that we reviewed, none included any additional information on competing offers or spending limits, beyond the basic loss of sale statement. Even if DOD received this type of information from the foreign government, DSCA officials told us that corroborating this information would be difficult. Therefore, DOD officials stated that they do not assess the likelihood of loss of sale beyond the minimum criteria. Although this assessment requires no additional analysis, loss of sale waiver requests are subject to the same review process, but with OUSD Policy as another required layer of review, bringing the possible total up to 12 offices. 
DSCA and OUSD Policy officials were unsure of the origin of the requirement for OUSD Policy to weigh in on waiver requests that cite loss of sale. Further, OUSD Policy officials stated that they review waiver requests for similar elements as other DOD entities, such as whether the sale will support security objectives in the region. DSCA officials have acknowledged that identifying further opportunities to streamline waiver reviews through a risk-based approach could enhance efficiencies in the FMS program. Federal standards for internal control state that agencies should assign and delegate responsibilities in a manner that maximizes efficiency and effectiveness. In light of the significant growth in the FMS program in recent years, as well as the resulting workload for DSCA and other cognizant DOD components, continuing to streamline the waiver review process would better position DSCA and the military departments to maximize efficiencies in the FMS process. Conclusions The FMS program is a central component of U.S. foreign policy. Our work has shown it enhances the capabilities of our allies, fosters interoperability with foreign militaries, helps sustain our defense industrial base, and serves our national security interests. In 1976, Congress codified the requirement for DOD to recoup nonrecurring costs on sales of major defense equipment to help ensure that FMS customers pay their share of the full cost of this equipment. At the same time, Congress provided for waiving nonrecurring costs for specified reasons. Over the past 6 years, DOD has prioritized the benefits of the FMS program and has typically waived rather than collected nonrecurring costs under these specified reasons. Within DOD, there are opportunities to consider streamlining the waiver review process to eliminate efforts that are potentially repetitive or inefficient.
The review process for waiver requests requires that multiple offices review all waiver requests, regardless of the amount of nonrecurring costs to be waived or the complexity of the specific circumstances. The FMS program has been criticized for being slow and burdensome. To create efficiencies in the overall FMS program, DOD could take additional steps to streamline the FMS waivers review process. Recommendation for Executive Action We are making the following recommendation to DSCA: The DSCA Director should continue to identify opportunities to streamline the review process for nonrecurring cost waiver requests. (Recommendation 1) Agency Comments We provided a draft of this report to DOD for comment. In its comments, reproduced in appendix II, DOD concurred with our recommendation. DOD also provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope and Methodology In this report, we addressed the (1) nonrecurring cost waivers approved by the Department of Defense (DOD) from fiscal years 2012 through 2017, (2) factors DOD considers when reviewing waivers, and (3) efficiency of the waiver review process. To address all objectives, we analyzed data from the Defense Security Cooperation Agency (DSCA) on requests made by foreign governments to waive nonrecurring costs on purchases of major defense equipment under the Foreign Military Sales (FMS) program. 
We reviewed data for fiscal years 2012 through 2017, as these years provided the most complete data available to facilitate our analysis and gain insights about the waivers requested based on the three allowable justifications within the scope of our review—equipment standardization, loss of sale avoidance, and cost savings to the U.S. government. DSCA uses the Defense Security Assistance Management System (DSAMS) to maintain records on FMS case data from the time the case is initiated; however, the system does not track nonrecurring cost data, such as data on waivers requested or costs waived, for each individual FMS case. Instead, DSCA provided a dataset on waivers requested that is maintained in a spreadsheet. To assess the reliability of DSCA's data, we tested for missing data and inconsistent coding, and compared data on selected waiver requests to waiver documentation we obtained from DSCA. In reviewing the documentation relative to the dataset we obtained, we found a small amount of data that were incorrectly coded, but these miscodings had minimal potential to affect our analysis. DSCA corrected the miscodings when we brought the errors to its attention. Overall, we found that the documentation for the selected waiver requests generally matched the data DSCA provided. We interviewed DSCA officials responsible for the data to identify the quality controls in place to ensure the data are accurate and reliable. Based on these steps, we determined the data were sufficiently reliable to identify the extent to which DOD approved nonrecurring cost waivers and to select a sample of waiver requests to review.
To identify the extent to which DOD approved nonrecurring cost waivers for fiscal years 2012 through 2017, we analyzed data on nonrecurring cost waiver requests, which included: the country requesting the waiver, the justification under which the waiver was requested, the requested amount of the waiver, whether or not the waiver was approved, the approved value of the waiver, and the military department responsible for managing the procurement of the major defense equipment associated with the requested waiver. We analyzed the data to determine the number and dollar value of waivers requested for each waiver justification in total and by fiscal year. We also analyzed the data to determine the value of nonrecurring cost waivers approved by geographic region. DSCA has various information management systems and methods to track data related to FMS cases. However, these systems are not integrated and data limitations precluded certain analysis: A DSCA official stated that DSCA does not track which FMS cases include major defense equipment, which impeded our ability to conclusively report on the total universe of all eligible FMS cases during fiscal years 2012 through 2017 for which a nonrecurring cost waiver could have been requested, and the percentage of cases represented by the 813 requested waivers. DSCA processes thousands of FMS cases each year; however, not all FMS cases meet the threshold for collecting or waiving nonrecurring costs as this requirement only applies to FMS cases where major defense equipment is being purchased. We interviewed DSCA officials to obtain information on how major defense equipment is recorded in DSAMS and the process they use to determine whether a FMS case includes major defense equipment. 
To identify the universe of eligible FMS cases would have required a manual review of thousands of cases to match the nonrecurring cost waiver data that DSCA maintains in a separate spreadsheet with the case data captured in DSAMS that itemizes the equipment purchased for each individual FMS case. Because a FMS case can have multiple waivers, there is an added challenge to accurately match each waiver with the corresponding case. While DSCA maintains internal records that track the extent to which waivers are used to their fullest value, we were unable to fully validate certain data elements on equipment quantities. This precluded our ability to report on the amount of total costs waived relative to the value of the approved waiver. DSCA officials stated that when DSCA grants a waiver there is a ceiling on the value of the waiver, which functions similarly to a coupon in that it cannot be used to waive nonrecurring costs that exceed the value of the approved waiver. DSCA maintains information on the equipment quantities for each FMS case in DSAMS. However, in order to estimate the costs waived, DSCA officials stated that they manually review each FMS case associated with a waiver to identify the quantities purchased, which may change through amendments to the FMS case. DSCA provided a dataset that compares approved waivers to costs waived; however, we could not verify equipment quantities from DSAMS. We also interviewed DSCA officials to gain insight about their quality control process to ensure the data are reliable. Our inability to verify equipment quantities made it difficult to report on actual costs waived.
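The matching challenge described above—waiver records kept in a separate spreadsheet, equipment itemized per case in DSAMS, and potentially multiple waivers per case—amounts to a one-to-many join. The sketch below shows how such records might be linked if both datasets shared a reliable case identifier; all field names, case identifiers, and values are hypothetical illustrations, not DSCA's actual schema.

```python
from collections import defaultdict

# Hypothetical records; actual DSCA/DSAMS fields and identifiers are not public.
waivers = [
    {"case_id": "XX-P-ABC", "approved_value": 799_000_000},
    {"case_id": "XX-P-ABC", "approved_value": 12_000},
    {"case_id": "YY-D-DEF", "approved_value": 337_000_000},
]
cases = {
    "XX-P-ABC": {"equipment": [("Patriot missile", 248)]},
    "YY-D-DEF": {"equipment": [("Radar set", 4)]},
}

# Group waivers by case so a case with multiple waivers is handled correctly.
waivers_by_case = defaultdict(list)
for w in waivers:
    waivers_by_case[w["case_id"]].append(w)

# Total approved waiver value per case, after the one-to-many join.
approved_by_case = {
    case_id: sum(w["approved_value"] for w in waivers_by_case.get(case_id, []))
    for case_id in cases
}
```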
To determine the factors DOD considers when reviewing waiver requests, we selected a non-generalizable sample of 24 waiver requests and the related documentation and files to identify the information the foreign government submitted as part of the request, including any information on competing items, and how these waivers are considered as part of the overall FMS program. We selected the sample from the dataset provided by DSCA on the total waiver requests from fiscal years 2012 through April 2017. The sample included waiver requests citing each of the three justifications and represented different geographical regions. To enhance our understanding of how anomaly waivers are processed, we selected 5 waiver requests to include in our sample because of their unique characteristics: the only 2 waiver requests that cited cost savings to the U.S.; the only 2 loss of sale waiver requests that were denied; and one waiver request from a foreign government that would have been eligible to use the equipment standardization justification but cited the loss of sale justification in its waiver request. To select the remaining 19 waivers, we set a threshold for waivers approved where the value of the nonrecurring cost was over $20 million to capture high-value waivers, as these waivers represented 80 percent of the total value of approved waivers within our time frame for our sample selection. Next, we selected at least 2 waiver requests from each fiscal year for the 6-year period included in our review and ensured a mix of waivers requested by various foreign governments, including those that had the highest value of waivers approved. We also ensured that the waivers reflected a mix of FMS cases to be managed by the Air Force, Army, and Navy, which also review and provide input to DSCA on the waiver requests.
Our sample includes a higher number of loss of sale justifications to provide greater insight into how DOD considers these requests, given their minimal requirements and the fact that these requests represent the majority of costs requested to be waived. While our findings are based on a non-generalizable sample and cannot be used to make inferences about all FMS nonrecurring cost waivers requested, the sample provides insight on the specific circumstances of waiver requests and DSCA’s decision in these cases. We recorded the information obtained from our review of the waiver request files in a data collection instrument. One analyst entered information in the data collection instrument and another analyst independently reviewed the information to ensure accuracy. After reviewing the waiver requests, we interviewed officials from military departments associated with the waiver request files that we reviewed to obtain clarifying information about specific waiver requests. To determine the efficiency of the waiver review process, we reviewed documentation for the 24 selected waiver requests to identify the offices involved in the review process, and the length of time taken to review and decide on the waiver request from the time of submission. We used the same data collection instrument to record this information as part of the two analysts’ reviews. We compared these offices’ practices to review the waivers with the Standards for Internal Control in the Federal Government, which calls for agencies to assign and delegate responsibilities in a manner that maximizes efficiency and effectiveness. In addition, we reviewed relevant DOD policy and interviewed officials from the military departments, DSCA, the Office of the Undersecretary of Defense (OUSD) Comptroller, OUSD for Policy, and OUSD for Acquisition, Technology, and Logistics (AT&L) to discuss their roles in reviewing nonrecurring cost waiver requests and the steps they take during their review.
We conducted this performance audit from March 2017 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments: In addition to the contact named above, Candice Wright (Assistant Director), Jessica Karnis (Analyst-in-Charge), Emily Bond, Lorraine Ettaro, Cale Jones, William Lamping, Miranda Riemer, and Roxanna Sun made key contributions to this report.
Why GAO Did This Study
Under the Arms Export Control Act and its implementing regulations, DOD is required to recover nonrecurring costs—unique one-time program-wide expenditures—for certain major defense equipment sold under the FMS program. These costs include research, development, and one-time production costs, such as expenses for testing equipment. The Act also permits those costs to be waived under certain circumstances, such as to standardize equipment with select allies or to avoid a loss of sale. GAO was asked to review DOD's use of nonrecurring cost waivers. This report addresses the (1) nonrecurring cost waivers approved by DOD from fiscal years 2012 through 2017, (2) factors DOD considers when reviewing waivers, and (3) efficiency of the waiver review process. To conduct this work, GAO analyzed DOD data on nonrecurring cost waivers for fiscal years 2012 through 2017, the most recent and complete data, to identify the value of waivers. GAO then reviewed a non-generalizable sample of 24 of these waivers that included a mix of justifications and geographic regions. GAO reviewed relevant DOD policy and interviewed DOD officials about the process to assess these waivers.
What GAO Found
In the past 6 years, the Department of Defense (DOD) approved waivers valued at nearly $16 billion that it might otherwise have collected from foreign governments as part of its sales of major defense equipment through the Foreign Military Sales (FMS) program. The Arms Export Control Act, as delegated, authorizes the Defense Security Cooperation Agency (DSCA) within DOD to waive nonrecurring costs under certain circumstances, such as to standardize equipment with allies. From fiscal years 2012 through 2017, DSCA reviewed 813 waivers and denied 3, resulting in an approval rate of 99 percent.
As shown in the figure below, the value of approved waivers significantly increased to nearly $6 billion last year, which is due to 2 waivers totaling nearly $3.5 billion for sales of missiles and related support systems.
Total Value of Approved Foreign Military Sales Nonrecurring Cost Waivers from Fiscal Years 2012 through 2017
When reviewing waivers, DSCA considers foreign policy and national security factors, such as interoperability with allies, and economic factors, such as support for the U.S. defense industrial base. Agency officials stated that approving waivers helps ensure sales go through and such broader benefits are realized. DSCA's practice to approve waivers is consistent with the authority it has been delegated under the Arms Export Control Act and is influenced by these benefits. The process DOD has established to consider waivers is, at times, inefficient and repetitive. DSCA has final approval authority; however, multiple DOD offices must review and provide input on each waiver, with some offices reviewing waivers for the same purpose. Federal standards for internal control call for agencies to allocate resources and assign responsibilities to achieve efficiency and effectiveness. DOD has already taken steps to improve the efficiency of the waiver review process; for example, by reducing the time a few offices take to review the waivers. Nonrecurring cost waivers are one part of the larger FMS process, and continuing to streamline the waiver review process would better position DSCA and the military departments to identify opportunities to maximize efficiencies.
What GAO Recommends
DSCA should continue to identify opportunities to streamline the waiver review process. DSCA concurred with GAO's recommendation.
Background
CSPF is a defined benefit multiemployer pension plan. Multiemployer plans are often created and maintained through collective bargaining agreements between labor unions and two or more employers, so that workers who move from job to job and employer to employer within an industry can continue to accrue pension benefits within the same plan over the course of their careers. Multiemployer plans are typically found in industries with many small employers such as trucking, building and construction, and retail food sales. In 2017, there were about 1,400 defined benefit multiemployer plans nationwide covering more than 10 million participants.
Multiemployer Plan Administration, Funding, and Benefits
Administration
Most multiemployer plans are jointly administered and governed by a board of trustees selected by labor and management. The labor union typically determines how the trustees representing labor are chosen and the contributing employers or an employer association typically determines how the trustees representing management are chosen. The trustees set the overall plan policy, direct plan activities, and set benefit levels (see fig. 1). Multiemployer plans are “prefunded,” or funded in advance, primarily by employer contributions. The employer contribution is generally negotiated through a collective bargaining agreement, and is often based on a dollar amount per hour worked by each employee covered by the agreement. Employer contributions are pooled in a trust fund for investment purposes, to pay benefits to retirees and their beneficiaries, and for administrative expenses. Multiemployer plan trustees typically decide how the trust fund should be invested to meet the plan’s objectives, but the trustees can use investment managers to determine how the trust fund should be invested. A plan’s funded percentage is its ratio of plan assets to plan liabilities.
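The funded percentage described above depends heavily on how liabilities are valued, since liabilities are the discounted value of future benefit payments. The sketch below, using entirely hypothetical figures (not CSPF data), values a level stream of future benefit payments at two different discount rates and shows the resulting funded percentages:

```python
# Funded percentage = plan assets / plan liabilities, where liabilities are
# the present value of projected benefit payments. All figures hypothetical.

def liability_present_value(annual_benefit, years, discount_rate):
    """Discount a level stream of annual benefit payments to today."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

assets = 14.0e9  # hypothetical plan assets

# Same promised benefits ($2 billion a year for 30 years), valued two ways.
liab_at_4pct = liability_present_value(2.0e9, 30, 0.04)
liab_at_75pct = liability_present_value(2.0e9, 30, 0.075)

# A higher assumed return (discount rate) shrinks the liability estimate,
# so the same assets produce a higher reported funded percentage.
print(round(assets / liab_at_4pct * 100))   # funded percentage at 4 percent
print(round(assets / liab_at_75pct * 100))  # funded percentage at 7.5 percent
```

The direction of the effect is the point: identical promised benefits look far better funded under a higher assumed rate of return, which is why the discount-rate assumption matters so much to a plan's reported condition.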
Because the amount needed to pay pension benefits for many years into the future cannot be known with certainty due to a variety of economic and demographic factors, including the potential volatility of asset values, estimates of a plan’s funded percentage may vary from year to year. Defined benefit pension plans use a “discount rate” to convert projected future benefits into their “present value.” The discount rate is the interest rate used to determine the current value of estimated future benefit payments and is an integral part of estimating a plan’s liabilities. The higher the discount rate, the lower the plan’s estimate of its liability. Multiemployer plans use an “assumed-return approach” that bases the discount rate on a long-term assumed average rate of return on the pension plan’s assets. Under this approach, the discount rate depends on the allocation of plan assets. For example, a reallocation of plan assets into more stocks and fewer bonds typically increases the discount rate, which reduces the estimated value of plan liabilities, and therefore, reduces the minimum amount of funding required. Looking at the entire “multiemployer system”—the aggregation of multiemployer plans governed by ERISA and insured by PBGC—shows that while the system was significantly underfunded around 2001 and 2009, its funded position has improved since 2009. Specifically, analyses published by the Center for Retirement Research at Boston College and the Society of Actuaries used plan regulatory filings to calculate the funded status for the system and determined that it was approaching 80 percent funded by 2014 after falling during the 2008 market downturn. However, some observers have noted that while many plans are making progress toward their minimum targets, a subset of plans face serious financial difficulties.
Benefits
Multiemployer retirement benefits are generally determined by the board of trustees.
The bargaining parties negotiate a contribution rate and the trustees adopt or amend the plan’s benefit formulas and provisions. Decisions to increase benefits or change the plan are also typically made by the board of trustees. Benefit amounts are generally based on a worker’s years of service and either a flat dollar amount or the worker’s wage or salary history, subject to further adjustment based on the age of retirement.
The Central States, Southeast and Southwest Areas Pension Fund (CSPF)
CSPF was established in 1955 to provide pension benefits to International Brotherhood of Teamsters union members (Teamsters) in the trucking industry and it is one of the largest multiemployer plans. In the late 1970s, CSPF was the subject of investigations by the IRS within the U.S. Department of the Treasury (Treasury), and by DOL and the U.S. Department of Justice (DOJ). The DOL investigation ultimately resulted in the establishment of a federal court-enforceable consent decree in 1982 that remains in force today. CSPF held more than $4.3 billion in Net Assets at the end of 1982 after the consent decree was established. The plan’s Net Assets peaked at nearly $26.8 billion at the end of 2007 and declined to about $15.3 billion at the end of 2016 (see fig. 2). As of 2016, CSPF reported that it had about 1,400 contributing employers and almost 385,000 participants. The number of active CSPF participants has declined over time. In 2016, 16 percent of about 385,000 participants were active, i.e., still working in covered employment that resulted in employer contributions to the plan. In comparison, CSPF reported in 1982 that 69 percent of more than 466,000 participants were active participants. Since the 1980s, CSPF’s ratio of active to nonworking participants has declined more dramatically than the average for multiemployer plans.
By 2015, only three of the plan’s 50 largest employers from 1980 still paid into the plan, and for each full-time active employee there were over five nonworking participants, mainly retirees. As a result, benefit payments to CSPF retirees have exceeded employer contributions in every year since 1984. Thus, CSPF has generally drawn down its investment assets. In 2016, CSPF withdrew over $2 billion from investment assets (see fig. 3). CSPF has historically had fewer plan assets than were needed to fully fund the accrued liability—the difference referred to as unfunded liability. In 1982, we reported that CSPF was “thinly funded”—as the January 1, 1980, actuarial valuation report showed the plan’s unfunded liability was about $6 billion—and suggested that IRS should closely monitor CSPF’s financial status. In 2015, the plan’s actuary certified that the plan was in “critical and declining” status. The plan has been operating under an ERISA-required rehabilitation plan since March 25, 2008, which is expected to last indefinitely. As of January 1, 2017, the plan was funded to about 38 percent of its accrued liability. In September 2015, CSPF filed an application with Treasury seeking approval to reduce benefits pursuant to provisions in the Multiemployer Pension Reform Act of 2014 (MPRA), which is fully discussed later in this section. The application was denied in May 2016 based, in part, on Treasury’s determination that the plan’s proposed benefit suspensions were not reasonably estimated to allow the plan to remain solvent. In 2017, CSPF announced it would no longer be able to avoid the projected insolvency. (See app. I for a timeline of key events affecting CSPF.)
The Consent Decree
As previously mentioned, CSPF was the subject of investigations in the 1970s by IRS, DOL, and DOJ.
DOL’s investigation focused on numerous loan and investment practices alleged to constitute fiduciary breaches under ERISA, such as loans made to companies on the verge of bankruptcy, additional loans made to borrowers who had histories of delinquency, loans to borrowers to pay interest on outstanding loans that the fund recorded as interest income, and lack of controls over rental income. As a result of its investigation, DOL filed suit against the former trustees of CSPF and, in September 1982, the parties entered into a consent decree, which remains in force today. The consent decree provides measures intended to ensure that the plan complies with the requirements of ERISA, including providing for oversight by the court and DOL, and prescribes roles for multiple parties in its administration. For example, certain plan activities must be submitted to DOL for comment and to the court for approval, including new trustee approvals and some investment manager appointments. According to DOL, to prevent criminal influence from regaining a foothold of control over plan assets, the consent decree generally requires court-approved independent asset managers—called “named fiduciaries”—to manage CSPF’s investments. CSPF’s trustees are generally prohibited from managing assets; however, they remain responsible for selecting, subject to court approval, and overseeing named fiduciaries and monitoring plan performance. To focus attention on compliance with ERISA fiduciary responsibility provisions, the consent decree provides for a court-appointed independent special counsel with authority to observe plan activities and oversee and report on the plan. (See app. II for additional detail on the key provisions of the consent decree.)
Legal Framework
Employee Retirement Income Security Act of 1974
In 1974, Congress passed ERISA to protect the interests of participants and beneficiaries of private sector employee benefit plans.
Among other things, ERISA requires plans to meet certain requirements and minimum standards. DOL, IRS, and PBGC are generally responsible for administering ERISA and related regulations. DOL has primary responsibility for administering and enforcing the fiduciary responsibility provisions under Part 4 of Title I of ERISA, which include the requirement that plan fiduciaries act prudently and in the sole interest of participants and beneficiaries. Treasury, specifically the IRS, is charged with determining whether a private sector pension plan qualifies for preferential tax treatment under the Internal Revenue Code. Additionally, the IRS is generally responsible for enforcing ERISA’s minimum funding requirements, among other things. ERISA generally requires that multiemployer plans meet minimum funding standards, which specify a funding target that must be met over a specified period of time. The funding target for such plans is measured based on assumptions as to future investment returns, rates of mortality, retirement ages, and other economic and demographic assumptions. Under the standards, a plan must collect a minimum level of contributions each year to show progress toward meeting its target, or the plan employers may be assessed excise taxes and owe the plan for missed contributions plus interest. Minimum contribution levels may vary from year to year due to a variety of economic and demographic factors, such as addressing differences between assumed investment returns and the plan’s actual investment returns. To protect retirees’ pension benefits in the event that plan sponsors are unable to pay plan benefits, PBGC was created by ERISA. PBGC is financed through mandatory insurance premiums paid by plans and plan sponsors, with premium rates set by law. PBGC operates two distinct insurance programs: one for multiemployer plans and another for single-employer plans. Each program has separate insurance funds and different benefit guarantee rules.
The events that trigger PBGC intervention differ between multiemployer and single-employer plans. For multiemployer plans, the triggering event is plan insolvency, the point at which a plan begins to run out of money while not having sufficient assets to pay the full benefits that were originally promised when due. PBGC does not take over operations of an insolvent multiemployer plan; rather, it provides loan assistance to pay administrative expenses and benefits up to the PBGC-guaranteed level. According to PBGC, only once in its history has a financial assistance loan from the multiemployer pension insurance program been repaid. In 2017, PBGC provided financial assistance to 72 insolvent multiemployer plans for an aggregate amount of $141 million. For single-employer plans the triggering event is termination of an underfunded plan—generally, when the employer goes out of business or enters bankruptcy. When this happens, PBGC takes over the plan’s assets, administration, and payment of plan benefits (up to the statutory limit). The PBGC-guaranteed benefit amounts for multiemployer plans and the premiums assessed by PBGC to cover those benefit guarantees are significantly lower than those for single-employer plans. Each insured multiemployer plan pays flat-rate insurance premiums to PBGC based on the number of participants covered. The annual premium rate for plan years beginning in January 2017 was $28 per participant and it is adjusted annually based on the national average wage index. (See app. I for the PBGC premium rates that have been in effect since the consent decree was established in 1982.) When plans receive financial assistance, participants face a reduction in benefits. For example, using 2013 data, PBGC estimated 21 percent of more than 59,000 selected participants in insolvent multiemployer plans then receiving financial assistance from PBGC faced a benefit reduction. 
The proportion of participants facing reductions due to the statutory guarantee limits is expected to increase. About 51 percent of almost 20,000 selected participants in plans that PBGC believed would require future assistance were projected to face a benefit reduction. Since 2013, the deficit in PBGC’s multiemployer program has increased by nearly 700 percent, from a deficit of $8.3 billion at the end of fiscal year 2013 to $65.1 billion at the end of fiscal year 2017. PBGC estimated that as of the end of 2016, the present value of net new claims by multiemployer plans over the next 10 years would be about $24 billion, or approximately 20 percent higher than its 2015 projections. The program is projected to become insolvent within approximately 8 years. If that happens, participants who rely on PBGC guarantees will receive only a very small fraction of current statutory guarantees. According to PBGC, most participants would receive less than $2,000 a year and in many cases, much less. We have identified PBGC’s insurance programs as high-risk. This designation was made in part because multiemployer plans that are currently insolvent, or likely to become insolvent in the near future, represent a significant financial threat to the agency’s insurance program. We designated the single-employer program as high-risk in July 2003, and added the multiemployer program in January 2009. Both insurance programs remain on our high-risk list.
Key Amendments to ERISA Affecting Multiemployer Plans
Multiemployer Pension Plan Amendments Act of 1980
Among other things, the Multiemployer Pension Plan Amendments Act of 1980 (MPPAA) made employers liable for a share of unfunded plan benefits when they withdraw from a plan, unless otherwise relieved of their liability, and strengthened certain funding requirements.
An employer that chooses to withdraw from a multiemployer plan may be required to continue to contribute if the plan does not have sufficient assets to cover the plan’s current and known future liabilities at the time the employer withdraws; however, these payments may not fully cover the withdrawing employer’s portion of the plan’s liabilities. In such cases, the employers remaining in the plan may effectively assume the remaining liability.
The Pension Protection Act of 2006
The Pension Protection Act of 2006 (PPA) was intended to improve the funding of seriously underfunded multiemployer plans, among other things. It included provisions that require plans in poor financial health to take action to improve their financial condition over the long term and established two categories of troubled plans: (1) “endangered status” or “yellow zone” plans (this category also includes a sub-category of “seriously endangered”), and (2) more seriously troubled “critical status” or “red zone” plans. PPA further required plans in the endangered and critical zones to develop written plans to improve their financial condition, such as by revising benefit structures, increasing contributions, or both, within a prescribed time frame. Multiemployer plans in yellow or red zone status must document their remediation strategies in a written plan, notify plan participants, and report annually on whether scheduled progress has been made. Since the 2008 market decline, the number of participants in endangered and critical plans has generally been decreasing (see fig. 4).
The Multiemployer Pension Reform Act of 2014
In response to the funding crisis facing PBGC and multiemployer pension plans, the Multiemployer Pension Reform Act of 2014 (MPRA) made changes to the multiemployer system that were intended to improve its financial condition. Key changes included: Creation of critical and declining status.
MPRA created a new category, “critical and declining,” for plans in critical status projected to become insolvent during the current plan year or within any of the 14 succeeding plan years, or in certain circumstances, within any of the 19 succeeding plan years. In 2017, PBGC reported that more than 100 multiemployer plans (more than 7 percent of plans) representing approximately 1 million participants (about 10 percent of participants) have been determined to be “critical and declining.” Permitted reduction of accrued benefits. MPRA permits plans to reduce participants’ and beneficiaries’ accrued retirement benefits if the plan can demonstrate such action is necessary to remain solvent. Plans apply to Treasury for the authority to reduce benefits. Treasury, in consultation with PBGC and DOL, reviews the applications and determines whether the proposed changes would enable the plan to remain solvent. Increased PBGC premiums. MPRA also increased the PBGC premiums for multiemployer plans from $12 to $26 (per participant per plan year) in 2015 and from $26 to $28 in plan year 2017. The annual premium in subsequent years is indexed to changes in the national average wage index. Creation of new framework of rules for partition. Partition allows a multiemployer plan to split into two plans—the original and a successor. Partitions are intended to relieve stress on the original plan by transferring the benefits of some participants to a successor plan funded by PBGC and to help retain participant benefits in the plans at levels higher than the PBGC-guaranteed levels. 
CSPF’s Critical Financial Condition Is a Result of Factors That Reflect Challenges Experienced by the Multiemployer System
CSPF Has Been Underfunded Since the Consent Decree Was Established
At the time the consent decree was established in 1982, CSPF had less than half the estimated funds needed to cover plan liabilities (and to pay associated benefits over the lifetime of participants) and it has not attained 100 percent of its estimated funding need since then, according to regulatory filings. The 1982 Form 5500 we reviewed shows that the plan was less than 40 percent funded prior to the consent decree becoming effective. Over the next two decades, the plan generally made progress toward achieving its targeted level of funding but was never more than 75 percent funded, and funding has generally deteriorated since its 2002 filing (see fig. 5). Overall, the plan’s unfunded liability increased by approximately $11.2 billion (in inflation-adjusted dollars) between January 1983 and January 2016. As a consequence, participant benefits were never fully secured by plan assets over this period, as measured by ERISA’s minimum funding standards, and the plan consistently needed to collect contributions in excess of those needed to fund new benefit accruals to try to make up for its underfunded status.
Stakeholders Described Multiple Factors That Contributed to CSPF’s Critical Financial Condition, Many of Which Have Been Experienced by Other Multiemployer Plans
CSPF officials and other stakeholders identified several factors that contributed to CSPF’s critical financial condition and reflect the challenges faced by many multiemployer plans. For example, like CSPF, many multiemployer plans have experienced financial difficulties due to a combination of investment losses and insufficient employer contributions.
In addition to the plan being underfunded before the consent decree went into effect, stakeholders identified other specific factors that contributed to CSPF’s critical financial condition, such as trends within the national trucking industry and its workforce, funding challenges and common investment practices of multiemployer plans, and the impact of market downturns on long-term investment performance. Stakeholders also described the effects of the 2007 withdrawal of a key employer, United Parcel Service (UPS), on CSPF’s critical financial condition.
Key Industry-Specific Workforce Trends
Stakeholders we interviewed said changes to the workforce, such as declining union membership rates and changes resulting from industry deregulation, affected CSPF and some other multiemployer plans by reducing the number of workers able to participate in their plans. While the multiemployer structure distributes bankruptcy risk across many employers, for any particular multiemployer plan employers are often concentrated in the same industry, making the plans vulnerable to industry-specific trends and risks. For example, stakeholders noted the impact that the Motor Carrier Act of 1980 had on the trucking industry. Specifically, deregulation of the trucking industry reduced government oversight and regulation over interstate trucking shipping rates. The trucking industry became increasingly dominated by nonunion trucking companies resulting in the bankruptcy of many unionized trucking companies, according to stakeholders. New trucking companies typically did not join multiemployer plans because their labor force was not unionized and this, coupled with the bankruptcy of many contributing employers, contributed to a decrease in active participant populations for many plans serving the industry. As the total number of active participants in a plan declines, the resources from which to collect employer contributions declines proportionally.
Stakeholders also said these changes were unforeseeable. Limitations on a plan’s ability to increase contributions mean that a plan has less capacity to recover from an underfunded position or to make up for investment returns that fall short of expectations. A decline in the number of active workers can also accelerate plan “maturity,” as measured by the ratio of nonworking to working participants. Plan maturity has implications for a plan’s investment practices and the time frame over which the plan must be funded. According to PBGC’s data for the multiemployer plans it insures, there were approximately three active participants for every nonworking participant in 1980 (3:1); by 2014, the ratio was approximately one active worker for every two nonworking participants (1:2). Figure 6 shows the change in the percentages of active and nonworking participants for the multiemployer plans that PBGC insures. CSPF saw an even more dramatic change in its active to nonworking participant ratio from 1982 through 2015. In 1982, there were more than two active workers for every nonworking participant (2:1) and by 2016 that ratio had fallen to approximately one active worker for every five nonworking participants (1:5) (see fig. 7). Because CSPF’s contributing employers were largely trucking companies, stakeholders said this made the fund especially vulnerable to industry-wide shocks. Like the industry as a whole, CSPF was unable to attract new employers to replace exiting employers, in part because of the lack of new unionized employers. CSPF officials said that changes to the trucking industry and its workforce also led to other challenges for the plan. For example, contributions to the plan declined with the shrinking number of active workers. 
CSPF officials told us they could not significantly increase the contribution rate paid by remaining employers because of the financial hardship it would cause, and as a result, the plan's ability to recover from its underfunded position was limited. CSPF officials said that this increased the plan's reliance on investment returns to try to close the gap between its assets and liabilities.

Funding Challenges and Investment Practices

Stakeholders we interviewed cited challenges inherent in multiemployer plans' funding and investment practices, and described how the challenges may have contributed to the critical financial condition of some plans, including CSPF. Stakeholders said that CSPF and many other multiemployer plans have been challenged by employer withdrawals. An employer withdrawal reduces the plan's number of active worker participants, thereby reducing its contribution base and accelerating plan maturity. A withdrawing employer generally must pay a share of any unfunded benefits. Stakeholders identified several ways in which the withdrawal liability framework could result in a withdrawing employer underpaying its share of an unfunded liability. We have previously reported on the challenges associated with withdrawal liability, including: withdrawal liability assessments are often paid over time, and payment amounts are based on prior contribution rates rather than the employer's actual withdrawal liability assessment; withdrawal liability payments are subject to a 20-year cap, regardless of whether an employer's share of unfunded benefits has been fully paid within this 20-year timeframe; plans often did not collect some or all of the scheduled withdrawal liability payments because employers went bankrupt before completing their scheduled payments; and fears of withdrawal liability exposure increasing over time could be an incentive for participating employers to leave a plan and a disincentive for new employers to join a plan.
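The 20-year cap described above can leave part of an assessment uncollected even when an employer makes every scheduled payment. A minimal sketch with hypothetical numbers (the assessment, payment amount, and discount rate below are illustrative assumptions, not CSPF figures):

```python
def present_value_of_payments(annual_payment, rate, years):
    """Present value of a level annual payment stream (ordinary annuity)."""
    return annual_payment * (1 - (1 + rate) ** -years) / rate

# Hypothetical numbers for illustration only -- not CSPF figures.
assessment = 100_000_000      # employer's full share of unfunded benefits
annual_payment = 5_000_000    # payment set by prior contribution history,
                              # not by the size of the assessment itself
rate = 0.075                  # assumed plan discount rate

# Payments stop after 20 years regardless of the assessment size.
value_collected = present_value_of_payments(annual_payment, rate, 20)
shortfall = assessment - value_collected  # borne by the remaining employers
```

Under these assumptions the 20 capped payments are worth roughly half the assessment, with the remainder effectively shifted to the employers that stay in the plan.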
Stakeholders we interviewed also added that the calculation used to determine withdrawal liability may use an investment return assumption that inherently transfers risk to the plan. When exiting employers do not pay their share of unfunded benefits, any remaining and future employers participating in the plan may effectively assume the unpaid share as a part of their own potential withdrawal liability as well as responsibility for the exiting employer’s “orphaned” participants. Participating employers may negotiate a withdrawal if they perceive a risk that the value of their potential withdrawal liability might grow significantly over time. In its MPRA application, CSPF cited employer withdrawals and bankruptcies as a significant challenge for the plan. CSPF reported that after deregulation, the number of contributing employers dropped by over 70 percent. While some of the drop could be due to the consolidation of trucking companies after deregulation, CSPF officials cited several cases in which employers went bankrupt or withdrew from the plan, which reduced the plan’s contribution base and accelerated its maturity. Additionally, when employers went bankrupt, they often did not pay their full withdrawal liability. For example, CSPF said two of its major contributing employers left the plan between 2001 and 2003, and left $290 million of more than $403 million in withdrawal liability unpaid after they went bankrupt. Stakeholders identified funding timeframes as a factor that contributed to the challenges facing many multiemployer plans, including CSPF. ERISA’s minimum funding standards have historically allowed multiemployer plans to amortize, or spread out the period of time for funding certain events, such as investment shortfalls and benefit improvements. For example, CSPF began a 40-year amortization of approximately $6.1 billion in underfunding on January 1, 1981, giving the plan until the end of 2021 to fully fund that amount. 
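The mechanics of a level amortization like the one described above can be sketched as follows; the 7.5 percent discount rate is an assumption (in the range of return assumptions the plan reported using), so the resulting payment is illustrative only:

```python
def level_amortization_payment(principal, rate, years):
    """Annual payment that retires `principal` over `years` at `rate`."""
    return principal * rate / (1 - (1 + rate) ** -years)

# $6.1 billion in underfunding amortized over 40 years (figures from the text);
# the 7.5 percent rate is an illustrative assumption.
payment = level_amortization_payment(6.1e9, 0.075, 40)  # roughly $0.5 billion/year
```

The long 40-year horizon keeps the annual payment low relative to the underfunding, which is why so much can change in a plan's environment before the liability is retired.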
Longer amortization periods increase the risk of plan underfunding due to the number and magnitude of changes in the plan's environment that may occur, such as a general decline in participants or deregulation of an industry. The Pension Protection Act of 2006 shortened amortization periods for single-employer plans to 7 years and the amortization periods for multiemployer plans to 15 years. Shorter amortization periods provide greater benefit security to plan participants by reducing an unfunded liability more rapidly. In addition, shorter amortization periods can be better aligned with the projected timing of benefit payments for a mature plan. However, shorter periods can be a source of hardship for plans with financially troubled contributing employers because they may require higher contributions. According to CSPF officials, CSPF requested and received an additional 10-year amortization extension from the IRS in 2005 after reporting that contribution requirements could force participating employers into bankruptcy. One CSPF representative said an amortization extension can also help avoid subjecting the plan's employers to IRS excise taxes for failing to make required minimum contributions. Stakeholders we interviewed said that certain common investment practices may have played a role in the critical financial condition of CSPF and other mature and declining plans. In general, multiemployer plans invest in portfolios that are expected, on average, to produce higher returns than a low-risk portfolio, such as one composed entirely of U.S. Treasury securities. Stakeholders also stated that these investment practices may have been too risky because returns can be more volatile, and the higher expected returns might not be achieved.
In addition, the Congressional Budget Office has reported that if "plans had been required to fund their benefit liabilities—at the time those liabilities were accrued—with safer investments, such as bonds, the underfunding of multiemployer plans would have been far less significant and would pose less risk to PBGC and beneficiaries." Stakeholders also told us that for mature plans like CSPF, these investment practices can pose further challenges. Mature plans, with fewer active employees, have less ability to recoup losses through increased contributions and have less time to recoup losses through investment returns before benefits must be paid. Market corrections, such as those that occurred in 2001 through 2002 and in 2008, can be particularly challenging to mature plans and their participants, especially if a mature plan is also significantly underfunded. Mature plans could mitigate these risks by investing more conservatively; however, the resulting lower expected returns from more conservative investing necessitate higher funding targets and contribution rates, which could be a hardship for employers in an industry with struggling employers. Alternatively, a plan that invests more conservatively may provide lower promised benefits to accommodate the level of contributions it can collect. Lower investment returns from a more conservative investment policy would cost employers more in contributions and could potentially result in employers leaving the plan. Further, investing in a conservative portfolio would be unusual among multiemployer plans, and stakeholders said plan managers may feel they are acting in a prudent fashion by investing similarly to their peers. Underfunded plans like CSPF may not see conservative investment as an option if they cannot raise the contributions necessary to fully fund their vested benefits.
Officials from CSPF told us that, because they lacked the ability to significantly increase revenue or decrease accrued benefits, the named fiduciaries sought incrementally higher investment returns to meet funding thresholds required by the amortization extension they received in 2005. On the other hand, there are challenges associated with risk-bearing investments. In our prior work, we reported that multiemployer plans generally develop an assumed average rate of investment return and use that assumption to determine funding targets, required contributions, and the potential cost of benefit improvements. Experts we interviewed for that report told us that using a portfolio's expected return to value the cost of benefits increases the risk that insufficient assets could be on hand when needed. They also told us that using the portfolio's expected return to calculate liabilities could incentivize plans to invest in riskier assets and to negotiate higher benefit levels because the higher returns expected from riskier portfolios can result in lower reported liabilities.

Plan Terms Set through Collective Bargaining

Stakeholders we interviewed said that plan terms, such as contribution rates, which are set through the collective bargaining process, can create an additional challenge for multiemployer plans. Employers in multiemployer plans generally are not required to contribute beyond what they have agreed to in collective bargaining, and these required employer contributions generally do not change during the term of a collective bargaining agreement. CSPF officials said that up until the early 2000s, plan officials did not request modifications to collective bargaining agreements, such as reallocating contribution dollars, to respond to adverse investment returns.
Investment Performance and Market Downturns

Stakeholders highlighted the effects of market downturns on multiemployer plan assets as another contributing factor to CSPF's critical financial condition and that of other multiemployer plans. Failure to achieve assumed returns has the effect of increasing unfunded liabilities. For the multiemployer system in aggregate, the average annual return on plan assets over the 2002 to 2014 period was about 6.1 percent, well short of typical assumed returns of 7.0 or 7.5 percent in 2002. Many multiemployer plans were especially impacted by the 2008 market downturn. PBGC estimated that from 2007 to 2009, the value of all multiemployer plan assets fell by approximately 24 percent, or $103 billion, after accounting for contributions to and payments from the plans. Although asset values recovered to some extent after 2009, some plans continued to be significantly underfunded, and stakeholders said this could be due to the contribution base not being sufficient to help recover from investment shortfalls. CSPF's investment performance since 2000 has been similar to that of other multiemployer plans, and the plan went from 73 percent funded in 2000 to about 38 percent funded in 2017. While the plan used an assumed rate of return of 7.5 to 8.0 percent per year between 2000 and 2014, our analysis of the plan's regulatory filings shows that the plan's weighted-average investment return over this period was about 4.9 percent per year. CSPF officials said the 2008 downturn significantly reduced CSPF's assets and it was unable to sufficiently recoup those losses when the market rebounded in 2009. Plan assets declined from $26.8 billion at the beginning of 2008 to $17.4 billion at the beginning of 2009, with $7.5 billion of the decline attributable to investment losses.
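The shortfall between assumed and realized returns compounds over time. A minimal sketch of how a 7.5 percent assumption and the roughly 4.9 percent realized return diverge over the 15 years from 2000 through 2014 (this ignores contribution and benefit cash flows, so it shows only the scale of the gap, not the plan's actual asset path):

```python
years = 15                        # 2000 through 2014, inclusive
assumed_growth = 1.075 ** years   # growth factor at the 7.5 percent assumption
realized_growth = 1.049 ** years  # growth factor at the ~4.9 percent realized
ratio = assumed_growth / realized_growth  # assets ~44 percent higher if assumed
```

A seemingly modest 2.6-point annual gap thus compounds into a difference of more than 40 percent in ending assets over 15 years.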
Despite reporting a 26 percent return on assets during 2009, CSPF had only $19.5 billion in assets at the end of 2009 because benefits and expenses exceeded the contributions it collected and because it had fewer assets generating returns for the plan. By the end of 2009, CSPF's funding target was $35.9 billion but the fund had less than $20 billion that could be used to generate investment returns. If CSPF's portfolio had returned 7.5 percent per year over the 2000-2014 period, instead of the approximately 4.9 percent we calculated, we estimate that the portfolio value would have exceeded $32.0 billion at the end of 2014, or 91 percent of its Actuarial Accrued Liability.

Effect of UPS Withdrawal

In addition to the factors mentioned that affected many multiemployer plans, stakeholders we interviewed also noted the unique effect of the UPS withdrawal on CSPF. In 2007, UPS negotiated with the International Brotherhood of Teamsters for a withdrawal from CSPF and paid a withdrawal liability payment of $6.1 billion. This payment was invested just prior to the 2008 market downturn. Moreover, the loss of UPS, CSPF's largest contributing employer, reduced the plan's ability to collect needed contributions if the plan became more underfunded. A UPS official said that, following the market decline of 2001-2002, the company considered whether it should withdraw from all multiemployer plans because it did not want to be the sole contributing employer in any plan. According to this official, UPS considered the large number of UPS employees in CSPF and the plan's demographics—such as an older population and fewer employers—in its decision to withdraw. CSPF officials said they did not want UPS to withdraw because its annual contributions accounted for about one-third of all contributions to the plan. CSPF officials also told us that, prior to the UPS withdrawal, they had expected the population of active UPS workers in the plan to grow over time.
UPS' withdrawal of 30 percent of CSPF's active workers, in combination with the significant market downturn just after UPS withdrew, reflected the loss of working members and investment challenges on a large scale. Additionally, stakeholders noted that although each of the factors that contributed to CSPF's critical financial condition individually is important, their interrelated nature also had a cumulative effect on the plan. Industry deregulation, declines in collective bargaining, and the plan's significantly underfunded financial condition all impaired CSPF's ability to maintain a population of active workers sufficient to supply its need for contributions when investment shortfalls developed. Given historical rules for plan funding and industry stresses, CSPF was unable to capture adequate funding from participating employers either before or after they withdrew from the plan. The plan's financial condition was further impaired when long-term investment performance fell short of expectations. For an underfunded, mature plan such as CSPF, the cumulative effect of these factors was described by some stakeholders as too much for CSPF to overcome.

DOL Has Provided Oversight in Its Role As Described in the Consent Decree

Roles and Responsibilities Identified in the Consent Decree

The consent decree describes roles and responsibilities for several parties, including CSPF, its trustees, and DOL. Generally, it reiterates the requirement that CSPF must comply with ERISA, and gives DOL the authority to provide input on certain actions proposed by the plan. Additionally, the consent decree requires CSPF to employ a named fiduciary to administer and manage the plan's investment assets, set investment policy, and select and supervise investment managers to create separation of plan trustees and staff from the management of plan investments.
The plan must seek court approval for certain actions, such as the appointment of new trustees and named fiduciaries, and DOL can raise objections to these proposed actions. The named fiduciary must also seek court approval for proposed changes to the investment policy. (Appendix II provides a more comprehensive description of roles and other key provisions of the consent decree and its amendments.) The consent decree also provides for a court-appointed independent special counsel to assist the court in overseeing the plan, attend meetings of the board of trustees, and submit quarterly reports on plan activities to the court (see table 1). Although certain stakeholders have stated that the consent decree has achieved its purpose, DOL and CSPF agree that it still provides valuable protections, and the consent decree remains in place. The intent of the consent decree was to address alleged breaches of fiduciary duties under ERISA, including plan officials’ roles in the mismanagement of assets that were identified during DOL’s investigation of the plan in the 1970s. The former Assistant Secretary for the Employee Benefits Security Administration (EBSA) stated that the consent decree was primarily focused on preventing corrupt conduct and the influence of organized crime found during investigations prior to the consent decree’s establishment. Stakeholders agreed the consent decree accomplished its objectives by requiring the plan to seek court approval for certain activities. In 2004, the presiding judge noted in a memorandum opinion and order that the “professional management guidelines” that arose from the consent decree had worked well. In 2002, discussions arose between CSPF and DOL as to whether the consent decree should be dissolved. In 2011, the independent special counsel wrote in a letter to the court that he believed the plan was well-run and the role of the independent special counsel was no longer necessary. 
However, DOL officials stated that the provisions of the consent decree have created a strong incentive for ERISA compliance and have had a positive impact on the administration of the plan and the selection of trustees. Similarly, CSPF officials stated they had not requested the consent decree be dissolved because its requirements have provided valuable protection from stakeholder influence.

DOL Conducted a Number of Oversight Activities under the Consent Decree

In accordance with the requirements of the consent decree, DOL may provide input on and oversight of certain plan activities. For example, DOL may comment on or object to proposed board of trustee candidates and proposed named fiduciaries prior to court approval. CSPF must provide notice to the court and DOL within specific time frames when seeking court approval for such actions. The consent decree requires CSPF to submit trustee and named fiduciary candidates to the court and DOL 60 days before filing their request for court approval (see fig. 8). In addition, the consent decree states that CSPF must notify DOL of new trustee candidates, selected by union or employer processes, 60 days prior to the proposed effective date of the candidate's term and DOL may object to, or comment on, the approval of trustee candidates within 30 days. Although the consent decree does not require DOL to take any specific actions in determining whether it will comment on or object to a trustee candidate, DOL officials reported that with the assistance of other agencies they have taken the following steps to review trustee candidates: Requesting trustee candidate information. DOL requests that CSPF provide information on prospective trustee candidates; Providing questionnaires to trustee candidates via CSPF.
Responses to questionnaires are reviewed by DOL’s Offices of Labor- Management Standards and Inspector General, the Department of Justice, the Federal Bureau of Investigation, and the Office of the Chief Investigator at the Teamster’s Independent Review Board (IRB); Compiling additional information. DOL searches internal and external databases for information regarding the trustee candidates; Assessing the information. DOL reviews any findings identified by the attorney assigned to CSPF in DOL’s Office of the Solicitor, officials in DOL’s Plan Benefits Security Division, and EBSA management staff. A recommendation regarding whether to file an objection is discussed and, if filing an objection is being considered, it is first discussed with the plan; and Filing objections. If any identified issues cannot be resolved, DOL files an objection with the court. Documents submitted to the court by DOL also indicated the agency has sought input on trustee candidates from PBGC, IRS, and the National Labor Relations Board. Several trustees we interviewed confirmed that DOL’s process to vet them included background checks. Our review of correspondence and other documentation found DOL routinely took such steps to vet trustee candidates. CSPF and DOL provided documents associated with the appointment and approval process of the 21 trustees appointed to the board since 1982 and one additional trustee candidate who was not presented to the court for approval because DOL identified issues during the vetting process. Vetting trustees took from approximately 1 to 5 months for the cases we reviewed. Our review of documentation also found that DOL provided input and collaborated with CSPF in two cases where approved trustees were asked to resign post- approval. The length of time for the process to vet trustee candidates (in advance of submitting them to the court) varied, but, in the cases we reviewed, took as long as 5 months. 
Correspondence showed various factors contributed to the duration of DOL's vetting process prior to submitting candidates to the court for approval, including DOL officials' workload and vacation schedules, and additional time spent clarifying any issues identified during the vetting process. In 2009, the vetting processes used by CSPF and DOL identified concerns with a trustee candidate before the candidate was presented to the court. During the 4-month vetting process, the candidate was found to be involved in two ongoing court cases in his role as a fiduciary for two other pension plans. Although the nominating employer association did not consider his involvement in the suit to be a problem, they eventually withdrew the nomination and proposed another candidate. In 2012, DOL's review of a candidate to fill a vacancy left by a trustee who died during his term in office was completed in approximately 1 month. In 2015, DOL's vetting process for a trustee candidate identified and resolved a concern before the candidate was presented to the court. DOL reported that they made inquiries to agencies and the Teamsters' Independent Review Board (IRB) about the candidate during the vetting process, and the IRB did not report any issues with the trustee candidate at the time of DOL's inquiry. More than 7 months after the candidate was approved, DOL received a report from the IRB that alleged lapses in financial controls and expense payment practices and procedures at a Teamsters' local union office when the then-trustee had served as president. The trustee announced his resignation from CSPF's board 7 weeks later, but continued to serve as a trustee for an additional 5 months until a replacement was vetted by DOL and presented to the court for approval. In 2007 and 2009, CSPF kept DOL apprised of trustees who resigned and were replaced because employers were leaving the plan.
The consent decree does not discuss court or DOL involvement in resolving issues with trustees already serving on the board, but in 1996, DOL assisted the CSPF board of trustees when they learned that one of their trustees, who had been on the board of CSPF for about 11 years, was accused of fiduciary misconduct in carrying out his duties for another pension plan. To assist the nominating board and the plan’s board of trustees in determining the proper course of action, CSPF consulted with DOL and the court before filing a motion with the court to appoint a special counsel to investigate, and to authorize expenditures for the investigations. Following the special counsel’s report, the nominating board recommended that trustee be removed, and the trustee chose to resign. Documents we reviewed also indicated DOL provided input to CSPF and the court on proposed amendments to the consent decree. For example, DOL assisted in writing a proposed amendment that would allow for the addition of a second named fiduciary and for named fiduciaries to act as investment managers for the plan. In addition, in 2007, a named fiduciary requested that CSPF assume responsibility for determining the plan’s asset allocation and indemnify it for any losses it might incur in fulfilling its role. In response to the request, CSPF considered several approaches to insulating the named fiduciary from fiduciary risk, and whether they would be inconsistent with the consent decree; however, CSPF decided against requesting the consent decree be dissolved. CSPF officials consulted with DOL regarding the approaches they considered, including one that would allow for flexibility in the allocation of investment assets within prescribed bands. CSPF waited to file its motion to amend the consent decree until DOL had an opportunity to evaluate the proposals. 
CSPF decided not to proceed with the proposed amendments, and instead worked with the named fiduciary to make changes to the investment policy to reduce risk for the named fiduciary. In our review of documents provided by CSPF, we also found that DOL regularly reviewed the quarterly reports from the independent special counsel, which included topics of discussion at the meetings of the board of trustees, a quarterly financial report, and other recent events of significance to the plan. Our review of communication between CSPF and DOL showed the plan also provided updates and allowed for DOL's input on other actions. For example, CSPF responded to DOL inquiries about changes in the number of participants and the plan's funded status in 2011 and 2014, respectively. In 2009, CSPF also provided details about a possible arrangement to allow a contributing employer that was at risk of bankruptcy to defer its contribution payments instead of suspending its participation in the plan. CSPF received input from DOL on the employer's request to use real estate as collateral in place of cash contributions to the plan.

DOL Conducted Investigations of CSPF in Accordance with Its Role under ERISA

DOL Has Primary Responsibility for Enforcing ERISA's Fiduciary Provisions

Separate from its role under the consent decree, DOL has a primary oversight role over plans under ERISA, which it carries out through investigations and other activities. DOL is responsible for enforcing the reporting, disclosure, and fiduciary responsibility provisions of ERISA. Additionally, ERISA grants DOL investigative authority. Title I of ERISA establishes responsibilities for fiduciaries, such as persons who are responsible for the administration and management of employee benefit plans, to ensure that they act solely in the interest of plan participants and beneficiaries, and gives DOL authority to examine and investigate plans to ensure they comply with the provisions.
ERISA sets forth a "prudent man" standard of care that requires fiduciary duties to be executed "…with the care, skill, prudence, and diligence…that a prudent man acting in a like capacity and familiar with such matters would use…". According to a DOL compliance guide, prudence focuses on the process for making fiduciary decisions, and the guide states that a fiduciary lacking needed expertise is encouraged to hire others with professional knowledge to carry out fiduciary functions, including investing fund assets. The guide further notes that, if a plan appoints an investment manager that is a bank, insurance company, or registered investment advisor, the plan is responsible for selecting and monitoring the manager, but is not liable for the individual investments of that manager. Further, in testimony, the former Assistant Secretary for EBSA stated that plan fiduciaries are not liable for plan losses merely because an investment lost money, but rather would be in instances where they acted imprudently in selecting and monitoring investments. Beyond the requirements of ERISA, the consent decree requires that CSPF hire a named fiduciary with exclusive responsibility and authority to manage and control the assets allocated to them. The consent decree also requires the independent special counsel to provide quarterly reports to the court and DOL. The quarterly reports include topics of discussion at the meetings of the board of trustees, a quarterly financial report, and other recent events of significance to the plan. Although stakeholders identified major factors contributing to the plan's critical financial condition, those factors are not the focus of DOL's role under ERISA. DOL has provided assistance to the plan in identifying and assessing solutions to its financial condition. For example, in 2010, CSPF's executive director worked directly with the assistant secretary of Labor as the plan prepared a partition application for PBGC consideration.
According to CSPF officials, the plan chose not to submit the application because it did not believe the application would be approved. In 2015, CSPF had discussions with the assistant secretary about MPRA before CSPF ultimately submitted its application to Treasury to reduce pension benefits under MPRA. CSPF-provided documents show it also collaborated with DOL in developing strategies to improve the broader multiemployer plan system. For example, DOL contacted CSPF's executive director to participate in a meeting as a "thought leader" on PBGC investment policy. The plan also worked with the assistant secretary and DOL and other government officials on legislative proposals, including modifications to statutes concerning partitioning and how partitions are funded through PBGC. In 2010, the assistant secretary testified regarding changes to the partition process proposed by CSPF and others, stating DOL would continue to work with CSPF on the proposal. IRS and PBGC also have roles under ERISA related to key factors that stakeholders identified as contributing to CSPF's critical financial condition. IRS is responsible for enforcing certain ERISA requirements, including minimum participation, vesting, and benefit accrual (generally requirements to qualify for favorable tax treatment), as well as minimum funding standards. Plans certify their PPA funding (or zone) status to IRS annually. PBGC, in addition to collecting premiums and providing financial assistance to insolvent multiemployer plans to pay participants a statutorily guaranteed benefit for the rest of their retirement lifetimes, provides technical assistance to multiemployer plan professionals, monitors plans, and administers certain tools to help preserve plans, such as assisting with plan mergers, reviewing methods for alternative withdrawal liabilities, and providing possible relief through plan partitions.
Two Completed DOL Investigations Resulted in No Action Against CSPF

DOL has completed at least two investigations of the plan since the consent decree was established, neither of which resulted in adverse findings or action against CSPF. DOL carries out its ERISA enforcement through a wide range of activities, including civil and criminal investigations, and the agency's enforcement priorities are set annually at the national level. DOL officials stated that to meet those priorities, the national and regional offices of DOL develop enforcement projects to focus enforcement activities on specific plan activities. Investigations based on enforcement projects or triggered by participant complaints are conducted by regional offices—DOL officials also stated that the Chicago Regional Office is primarily responsible for oversight of CSPF at the regional level. National and regional projects may be broadly applicable or may focus on specific types of plans. Since 2012, there have been seven national projects and five regional projects (two of the regional projects are currently underway). Currently, there is a Chicago Regional Office project focused on multiemployer plans. DOL officials noted that field offices generally exercise broad discretion in determining when investigations will be opened and what entities or people will be investigated. During investigations, the field offices gather information and evaluate compliance with ERISA's civil and criminal provisions. Potential issues for investigation are identified through participant complaints, targeting based on computer-generated results of Form 5500 review and analysis, media, and referrals from federal, state, and local government, advocacy groups and service providers. For the period between 2007 and 2016, DOL opened an average of nearly 2,600 civil and criminal pension cases annually; about 5 percent of the cases were investigations of multiemployer plans.
ERISA’s fiduciary responsibility provisions are intended to ensure that plan fiduciaries act solely in the interest of plan participants. Accordingly, if investigators review the selection of investments, they generally focus on the fiduciaries’ duty of prudence in the selection and monitoring of investments, rather than on the ultimate performance of the assets.

Investigation 1 from DOL’s Case Management System: Opened June 1996, Closed November 1998. The investigation was opened based on a referral from DOL’s Office of the Solicitor, the entity that coordinates DOL oversight of CSPF under the consent decree. The investigation centered on alleged breaches of fiduciary responsibility by the plan trustees in private litigation. The parties settled for a withdrawal liability of one-fifth of the alleged amount owed and did not pursue a malpractice claim against attorneys who represented CSPF in the litigation. DOL’s Chicago Regional Office concluded that CSPF trustees were not in violation of ERISA. DOL’s Office of Enforcement concurred. The investigation was closed without action.

Investigation 2 from DOL’s Case Management System: Opened June 2001, Closed September 2004. The investigation was opened based on a complaint from a former employee of the named fiduciary who alleged he was fired when he brought possible misconduct to the attention of the named fiduciary. DOL’s investigation centered on alleged securities violations by the named fiduciary. DOL’s Chicago Regional Office concluded that no violations occurred. Because of incomplete documentation from DOL and because agency officials could not provide further information, we were unable to determine why the investigation was closed.

CSPF provided documents that indicated it had also been subject to earlier DOL investigations. 
For example, CSPF provided a June 1989 letter from DOL indicating the agency had investigated whether CSPF met its fiduciary duties through adequate procedures for monitoring legal services provided to the plan. In the letter, the DOL investigator noted that CSPF had written procedures for monitoring services and addressing disputes and that the plan provided reports showing activities surrounding the monitoring of legal fees. DOL concluded, based on available information, that CSPF had implemented monitoring procedures and DOL would take no further action. DOL did not provide further information about the letter or investigation. Agency Comments and Our Evaluation We provided a draft of the report to the U.S. Department of Labor, U.S. Department of the Treasury, and the Pension Benefit Guaranty Corporation for review and comment. We received technical comments from the U.S. Department of Labor and the Pension Benefit Guaranty Corporation, which we incorporated as appropriate. The U.S. Department of the Treasury provided no comments. We will send copies to the appropriate congressional committees, the Secretary of Labor, the Secretary of the Treasury, the Director of the Pension Benefit Guaranty Corporation, and other interested parties. This report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Selected Events Affecting the Central States, Southeast and Southwest Areas Pension Fund Below is a list of selected events that have affected the Central States, Southeast and Southwest Areas Pension Fund (CSPF) as identified through a review of relevant documentation and interviews with stakeholders and agency officials. 
It is not intended to be an exhaustive list of the events that have impacted CSPF, nor is it intended to include comprehensive descriptions of each event. Appendix II: Key Provisions of the Central States, Southeast and Southwest Areas Pension Fund’s Consent Decree Brief History and Current Status of Consent Decree On September 22, 1982, the Department of Labor (DOL) entered into a court-enforceable consent decree with the Central States, Southeast and Southwest Areas Pension Fund (CSPF) to help ensure the plan’s assets were managed for the sole benefit of the plan’s participants and beneficiaries as required by the Employee Retirement Income Security Act of 1974 (ERISA). The consent decree has been amended several times and currently remains in effect, as amended, under the jurisdiction of the Federal Court for the Northern District of Illinois, Eastern Division. Below is a description of the key parties to the consent decree and their primary responsibilities under it. Key Parties and Their Primary Roles under Consent Decree The consent decree defines roles and responsibilities for its parties, including the court, the court-appointed independent special counsel, DOL, the plan and its Board of Trustees, and the independent asset manager, which is called the named fiduciary. Court The primary role of the court is to oversee and enforce the consent decree. Specifically, the court: appointed an independent special counsel to assist it in administering the consent decree; has approval over the appointment of named fiduciaries and trustees; has approval over the appointment of investment managers of the passively-managed accounts; may, for good cause shown, remove a named fiduciary after 60 days’ notice provided to the named fiduciary and DOL; and may, upon request by the plan, dissolve the consent decree absent good cause shown by DOL why the consent decree should continue in effect. 
Independent Special Counsel The court-appointed independent special counsel is intended to serve the court by assisting in identifying and resolving issues that arise in connection with the plan’s compliance with the consent decree and Part 4 of Title I of ERISA, and to report on the plan to the court. Specifically, the independent special counsel: has full authority to examine the plan’s activities and oversee and report on the plan’s performance of the undertakings of the consent decree; may, with court approval, employ attorneys, accountants, investigators, and others reasonably necessary and appropriate to aid him in the exercise of his responsibilities; has full access to all documents, books, records, personnel, files, and information of whatever type or description in the possession, custody, or control of the plan; may attend meetings of the plan, including meetings of the board of trustees and any meetings at which plan-related matters are discussed or considered; can petition the court to compel the plan to cooperate with the independent special counsel in the performance of his duties and responsibilities; may consult with DOL, the Internal Revenue Service, and other agencies, as appropriate, but must provide access to DOL upon its request to any documents prepared by the independent special counsel within the exercise of his power; is required to file quarterly reports, as well as any other reports the independent special counsel deems necessary or appropriate, with the court, and provide copies to DOL and the plan; may have other powers, duties, and responsibilities that the court may later determine are appropriate; and cannot be discharged or terminated during the duration of the consent decree except for leave of court, and upon the termination, discharge, death, incapacity, or resignation of an independent special counsel, the court will appoint a successor. 
Department of Labor Under the consent decree, DOL has an oversight role and may object to certain proposed plan changes. Specifically, DOL: may request and review certain reports provided by the plan and any documents prepared by the independent special counsel in the exercise of his authority; may object to the appointment of proposed trustees, named fiduciaries, investment managers of the passively-managed accounts, and asset custodians; receives notice of proposed changes to the plan’s investment policy statements from the plan; and may object to the dissolution of the consent decree. CSPF (including Board of Trustees and Internal Audit Staff) The plan must operate in full compliance with the consent decree, with ERISA, and with any conditions contained in determination letters it receives from the Internal Revenue Service. Specifically, CSPF, its board of trustees, and its internal audit staff must meet certain requirements. CSPF: is required to use an independent asset manager known as the named fiduciary; must rebid the named fiduciary role at least once within every 6 years, with the option to extend the appointment for one calendar year; may remove a named fiduciary without cause shown on 6 months’ written notice to the named fiduciary and DOL; must cooperate with the independent special counsel in the performance of his duties and responsibilities and with DOL in its continuing investigation and enforcement responsibilities under ERISA; is required to recommend to the court three replacement candidates, agreeable to DOL, to replace an outgoing independent special counsel; and is required to maintain a qualified internal audit staff to monitor its affairs. 
The board of trustees: is required to appoint, subject to court approval, the investment managers of the passively-managed accounts; is prohibited from authorizing any future acquisitions, investments, or dispositions of plan assets on a direct or indirect basis unless specifically allowed by the consent decree; and is required to comply with ERISA fiduciary duties, such as monitoring the performance of the assets of the plan, under Part 4 of Title I of ERISA. The internal audit staff: is required to review benefit administration, administrative expenditures, and the allocation of plan receipts to investments and administration; and is required to prepare monthly reports setting forth any findings and recommendations, in cooperation with the executive director of the plan, and make copies available to the independent special counsel and, upon request, to DOL and the court. Named Fiduciaries The independent asset managers, known as named fiduciaries, are appointed by the plan’s trustees, subject to court approval, and have exclusive responsibility and authority to manage and control all assets of the plan allocated to them. Specifically, the named fiduciaries: may allocate plan assets among different types of investments and have exclusive authority to appoint, replace, and remove the investment managers of those assets; have responsibility and authority to monitor the performance of their investment managers; and are required to develop, in consultation with the Board of Trustees, and implement investment policy statements for the assets they manage, giving appropriate regard to CSPF’s actuarial requirements. Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact Charles A. Jeszeck, (202) 512-7215 or [email protected]. Staff Acknowledgments In addition to the individual named above, David Lehrer (Assistant Director), Margaret J. Weber (Analyst-in-Charge), Laurel Beedon, Charles J. Ford, Jessica Moscovitch, Layla Moughari, Joseph Silvestri, Anjali Tekchandani, Frank Todisco, and Adam Wendel made key contributions to this report. 
Also contributing to this report were Susan Aschoff, Deborah K. Bland, David M. Chrisinger, Helen Desaulniers, Ted Leslie, Sheila McCoy, Mimi Nguyen, and Walter Vance. Related GAO Products Central States Pension Fund: Investment Policy Decisions and Challenges Facing the Plan. GAO-18-106. Washington, D.C.: June 4, 2018. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Pension Plan Valuation: Views on Using Multiple Measures to Offer a More Complete Financial Picture. GAO-14-264. Washington, D.C.: September 30, 2014. Private Pensions: Clarity of Required Reports and Disclosures Could Be Improved. GAO-14-92. Washington, D.C.: November 21, 2013. Private Pensions: Timely Action Needed to Address Impending Multiemployer Plan Insolvencies. GAO-13-240. Washington, D.C.: March 28, 2013. Private Pensions: Multiemployer Plans and PBGC Face Urgent Challenges. GAO-13-428T. Washington, D.C.: March 5, 2013. Pension Benefit Guaranty Corporation: Redesigned Premium Structure Could Better Align Rates with Risk from Plan Sponsors. GAO-13-58. Washington, D.C.: November 7, 2012. Private Pensions: Changes Needed to Better Protect Multiemployer Pension Benefits. GAO-11-79. Washington, D.C.: October 18, 2010. Private Pensions: Long-standing Challenges Remain for Multiemployer Pension Plans. GAO-10-708T. Washington, D.C.: May 27, 2010. The Department of Labor’s Oversight of The Management of the Teamsters’ Central States Pension and Health and Welfare Funds. GAO/HRD-85-73. Washington, D.C.: July 18, 1985. Investigation to Reform Teamsters’ Central States Pension Fund Found Inadequate. GAO/HRD-82-13. Washington, D.C.: April 28, 1982.
Why GAO Did This Study Multiemployer plans are collectively bargained pension agreements often between labor unions and two or more employers. CSPF is one of the nation's largest multiemployer defined benefit pension plans, covering about 385,000 participants. Since 1982, the plan has operated under a court-enforceable consent decree which, among other things, requires that the plan's assets be managed by independent parties. Within 7 years, CSPF estimates that the plan's financial condition will require severe benefit cuts. GAO was asked to review the events and factors that led to the plan's critical financial status and the oversight DOL provides under the consent decree and under other federal laws. GAO reviewed (1) what is known about the factors that contributed to CSPF's critical financial condition, (2) DOL's role in the administration of the 1982 CSPF consent decree and what actions the agency has taken under that role, and (3) what actions, if any, DOL has taken to oversee CSPF, beyond those required under the consent decree. GAO reviewed the consent decree and its amendments, relevant federal laws and regulations, agency guidance on plan management, and DOL protocols for investigating plans; interviewed CSPF representatives, International Brotherhood of Teamsters officials and members, federal officials, and industry stakeholders; and reviewed correspondence between DOL and CSPF and documents related to DOL investigations. What GAO Found The Central States, Southeast and Southwest Areas Pension Fund (CSPF) was established in 1955 to provide pension benefits to trucking industry workers and is one of the largest multiemployer plans. According to its regulatory filings, CSPF had less than half the estimated funds needed to cover plan liabilities in 1982 at the time it entered into a court-enforceable consent decree that provides for oversight of certain plan activities. 
Since then, CSPF has made some progress toward achieving its targeted level of funding; however, CSPF has never been more than 75 percent funded and its funding level has weakened since 2002, as shown in the figure below. Stakeholders GAO interviewed identified numerous factors that contributed to CSPF's financial condition. For example, stakeholders stated that changes within the trucking industry, as well as a decline in union membership, contributed to CSPF's inability to maintain a healthy contribution base. CSPF's active participants made up about 69 percent of all participants in 1982, but accounted for only 16 percent in 2016. The most dramatic change in active participants occurred in 2007 when the United Parcel Service, Inc. (UPS) withdrew from the plan. At that time, UPS accounted for about 30 percent of the plan's active participants (i.e. workers). In addition, the market declines of 2001 to 2002 and 2008 had a significant negative impact on the plan's long-term investment performance. Stakeholders noted that, while each individual factor contributed to CSPF's critical financial condition, the interrelated nature of the factors also had a cumulative effect on the plan's financial condition. The 1982 consent decree between the U.S. Department of Labor (DOL) and CSPF came about as a result of an investigation of alleged breaches of fiduciary duty and mismanagement of plan assets, and is intended to prevent their reoccurrence. In addition to reiterating the requirement that the plan comply with the Employee Retirement Income Security Act of 1974 (ERISA)—the primary law governing the treatment of private-sector pensions in the United States—the consent decree further outlines requirements for the plan to help ensure fiduciary controls and plan management, including seeking court approvals for the appointment of new trustees and changes to the plan's investment policy. The consent decree also delineates roles for DOL and other stakeholders. 
For example, it allows DOL to object to or comment on certain proposed plan actions, but does not require the agency to do so. GAO's review of plan documents found that the agency provided oversight and technical assistance in the areas specifically identified for its involvement under the consent decree, such as vetting proposed trustees prior to the court's approval. DOL is primarily responsible for enforcing the reporting, disclosure, and fiduciary provisions of ERISA for all tax-qualified pension plans, including CSPF. ERISA sets forth a “prudent man standard of care” in the execution of fiduciary duties that, according to DOL, focuses on the process for making proper fiduciary decisions. Plan fiduciaries are responsible for selecting and monitoring investment managers, but are generally not liable for the individual investment decisions of those managers. To enforce ERISA, DOL conducts examinations and investigations. DOL officials reported that, since the consent decree was established, the agency has completed two investigations of CSPF. The two investigations—completed in 1998 and 2004—were closed without adverse findings against the plan. Beyond the agency's oversight role, DOL collaborated with CSPF and others on steps intended to improve the plan's financial position, including contributing to discussions on proposed legislation and working with CSPF on its application to reduce benefits under the Multiemployer Pension Reform Act of 2014. The application was not approved by the U.S. Department of the Treasury. What GAO Recommends GAO is not making recommendations in this report.
Background Composition of TRICARE’s Nonenrolled Beneficiary Population In fiscal year 2016, DOD identified about 2.2 million nonenrolled TRICARE beneficiaries who fell into four categories: (1) retired servicemembers and their dependents, (2) inactive guard/reserve servicemembers and their dependents, (3) dependents of active duty, or of guard/reserve on active duty status, and (4) other beneficiaries, such as dependent survivors of deceased servicemembers. Retired servicemembers and their dependents made up the majority of nonenrolled beneficiaries at the end of fiscal year 2016 (approximately 60 percent). (See fig. 1.) TRICARE’s Benefit Options Prior to January 1, 2018, TRICARE provided benefits through three basic options for its non-Medicare-eligible beneficiary population—TRICARE Prime, Standard, and Extra. These options varied by enrollment requirements, choices in civilian and military treatment facility providers, and the amount beneficiaries must contribute toward the cost of their care. (See table 1.) The NDAA 2017 made specific changes to the TRICARE program that became effective on January 1, 2018. These changes included terminating the TRICARE Standard and Extra options, establishing a new option called TRICARE Select, and ensuring that 85 percent of TRICARE Select beneficiaries are covered by the network of civilian providers. TRICARE Select has similar benefits to TRICARE Standard and Extra for obtaining care from nonnetwork and network providers, but unlike these options, TRICARE Select requires enrollment. TRICARE Networks and Locations Under TRICARE, DOD uses managed care support contractors to develop networks of civilian providers to serve all TRICARE beneficiaries in PSAs, which are typically within an approximate 40-mile radius of a military outpatient or inpatient treatment facility or Base Realignment and Closure sites. 
Although some network providers may be located in non-PSAs, contractors are not required to develop networks in these areas. Previously, contractors had the option of developing additional PSAs (and civilian provider networks) in areas that were not located near military treatment facilities or Base Realignment and Closure sites. However, on October 1, 2013, DOD eliminated these additional PSAs, referred to in the survey analyses as “former PSAs,” and as a result, the managed care support contractors were no longer required to develop and maintain networks of civilian providers in these areas. In fiscal year 2016, approximately 65 percent of the 2.2 million nonenrolled beneficiaries that were eligible for TRICARE Standard and Extra (1.47 million) lived in PSAs. Of the remaining nonenrolled beneficiaries (775,000), about 19 percent lived in former PSAs and about 16 percent lived in non-PSAs. (See fig. 2.) Nonenrolled beneficiaries who live in former PSAs and non-PSAs may still have access to network providers, even though contractors are not required to develop networks in these areas. About 57 percent of these beneficiaries (445,000) filed at least one TRICARE claim with a network civilian provider during fiscal year 2016. DOD’s Implementation of Mandated Beneficiary and Civilian Provider Survey Requirements The NDAA 2008 directed DOD to survey nonenrolled beneficiaries and civilian providers in at least 20 PSAs in each of four fiscal years, 2008 through 2011, as well as 20 non-PSAs. To do this, DOD divided the country into 80 distinct PSAs and 80 distinct non-PSAs and surveyed 20 PSAs and 20 non-PSAs each year. At the end of the 4-year period, each year’s survey results were combined and weighted to develop estimates of access to health care, including mental health care, at the service area, state, and national levels. 
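The combine-and-weight step can be sketched, under simplifying assumptions, as a population-weighted average of per-area estimates. This is an illustration only: the figures below are invented, and the actual surveys also apply design and nonresponse weighting.

```python
# Sketch: combining per-area survey estimates into a national estimate
# using eligible-population weights. All figures are illustrative, not
# survey data; real weighting is more involved (design and nonresponse
# adjustments).
areas = [(0.30, 120_000), (0.25, 80_000)]  # (area estimate, eligible population)

total_population = sum(n for _, n in areas)
national_estimate = sum(p * n for p, n in areas) / total_population
```

Under these assumed numbers, the larger area pulls the national figure toward its own estimate, which is the intent of population weighting.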
Additionally, the NDAA 2008 required DOD to consult with representatives of TRICARE beneficiaries and providers of health care, including mental health care, to identify locations where nonenrolled beneficiaries have experienced significant access-to-care problems and to survey both beneficiaries and health care providers, including mental health care providers, in these areas. Based on these consultations, DOD designated certain Hospital Service Areas (HSA) to include in its beneficiary and provider surveys. DOD used a similar methodology for determining its locations in the 2012-2015 surveys. However, as a result of DOD’s changes to PSAs on October 1, 2013, 28 of the 80 non-PSAs surveyed were former PSAs. DOD also surveyed both nonenrolled beneficiaries and civilian providers in a total of 30 HSAs. As a result, DOD collectively surveyed 190 geographic locations over the 4-year period. Furthermore, we previously reported that DOD’s implementation of its 2008-2011 nonenrolled beneficiary and civilian provider surveys generally addressed the requirements outlined in the NDAA 2008. DOD made several minor revisions to the methodologies of the 2012-2015 surveys, but we determined that none of those changes altered DOD’s compliance with the NDAA 2008, as amended. Nonenrolled TRICARE Beneficiaries Reported Generally Experiencing Fewer Problems Accessing Care, and More Reported Obtaining Care when Desired Nonenrolled TRICARE Beneficiaries Reported Generally Experiencing Fewer Problems Accessing Care than in the Prior Survey Nonenrolled beneficiary survey results over time. Nationwide, a lower percentage of nonenrolled beneficiaries reported that they experienced problems finding any type of provider in the 2012-2015 survey (29 percent) when compared to the prior 2008-2011 survey (31 percent). 
Specifically, fewer nonenrolled beneficiaries reported that they experienced problems finding a primary care provider than in the prior survey (22 percent in 2012-2015 compared to 25 percent in 2008-2011). However, there was no statistically significant difference over time in the percentage of beneficiaries who reported experiencing problems finding a specialty care or mental health care provider. (See fig. 3.) Nonenrolled beneficiary survey results by type of location. Nonenrolled beneficiaries in non-PSAs reported experiencing fewer problems finding primary and specialty providers than those in PSAs, which is similar to what we reported for the prior survey. For example, about 20 percent of beneficiaries in non-PSAs reported that they had problems finding specialty care providers compared to 24 percent in PSAs. Regarding beneficiaries in former PSAs, the only statistically significant difference among the three provider types was for problems finding a primary care provider. Specifically, fewer (about 19 percent) nonenrolled beneficiaries in non-PSAs reported experiencing problems finding a primary care provider who would accept TRICARE, compared to 24 percent in former PSAs. (See fig. 4.) DOD officials told us that they were unsure of the exact reasons for the difference between PSAs and non-PSAs. However, they explained that PSAs are often located in more populated areas, where TRICARE beneficiaries may not make up a large market share for local civilian providers, who may have a wide array of patients with other health plans. Nonenrolled beneficiary survey results by network status. Nonenrolled beneficiaries with network providers reported experiencing fewer problems finding civilian providers, compared to nonenrolled beneficiaries with nonnetwork providers. 
For example, 20 percent of the nonenrolled beneficiaries who used a network civilian primary care provider reported that they had a problem finding a primary care provider that would accept TRICARE compared with the 44 percent of nonenrolled beneficiaries who used a nonnetwork civilian primary care provider. (See fig. 5.) In addition, when compared with the results of the last survey (2008-2011), the percentages of nonenrolled beneficiaries who reported that they experienced problems finding a specialty care or mental health care provider increased in the most recent survey (2012-2015) for beneficiaries who used nonnetwork providers, but there were no changes over time if their specialty care or mental health care providers were in the network. (See fig. 6.) More Nonenrolled TRICARE Beneficiaries Reported Obtaining Appointments as Soon as Desired Compared to the prior survey, a higher percentage of nonenrolled beneficiaries reported that they were able to obtain appointments as soon as they desired. Specifically, the percentage of nonenrolled beneficiaries who made non-urgent appointments for health care and reported that they were able to usually or always obtain an appointment as soon as they thought they needed increased from 87 percent in the 2008-2011 survey to 90 percent in the 2012-2015 survey. However, the most commonly reported length of time they waited between making an appointment and actually seeing a provider did not change from the 2008-2011 surveys—most respondents in both surveys reported they were able to get appointments within 3 days (about 54 percent for both years’ surveys). The 2012-2015 survey also asked specific questions about how easy it was to get an appointment with specialty care providers and mental health care providers: Of those nonenrolled beneficiaries who tried to make an appointment with a civilian specialty care provider, 84 percent reported it was “usually easy” or “always easy” to get appointments. 
These results also varied by network status, as a higher percentage of those who used a network specialty care provider reported that they found it “usually easy” or “always easy” to get appointments (85 percent) compared to those that used a nonnetwork specialty care provider (74 percent). Of those nonenrolled beneficiaries that received treatment or counseling from a civilian mental health care provider, 73 percent reported that when they needed treatment or counseling right away, they usually or always saw someone as soon as they wanted. We found that this result did not change since the prior survey, nor did we find any statistically significant differences between beneficiaries’ responses for seeing a network versus a nonnetwork mental health provider. Nonenrolled Beneficiaries’ Positive Ratings of TRICARE Have Generally Increased over Time and Vary Compared to Other Federal Health Plans Ratings of TRICARE over time. Nonenrolled beneficiaries’ positive ratings of TRICARE have generally increased since the previous survey. Specifically, over time, nonenrolled beneficiaries’ positive ratings of five different categories related to TRICARE have either increased (primary care rating, specialty care rating, and health plan rating) or remained the same (mental health care rating and health care rating). (See fig. 7.) Furthermore, nonenrolled beneficiaries’ positive ratings of their mental health care providers were lower than their ratings for their primary and specialty care providers. We also found that there were no significant differences at the 95 percent confidence level for nonenrolled beneficiaries’ ratings of primary care, specialty care, or mental health care providers based on their network status. Ratings of TRICARE compared to other federal health plans. 
When we compared these results to those of the 2013-2015 CAHPS surveys, we found that nonenrolled TRICARE beneficiaries’ positive experience ratings for primary care providers and specialty care providers were lower than those of Medicare fee-for-service beneficiaries and higher than those of Medicaid beneficiaries, which is similar to what we found for the previous survey. We also found that TRICARE beneficiaries’ positive experience ratings for their health care were higher than that of both Medicare fee-for-service beneficiaries and Medicaid beneficiaries, but TRICARE beneficiaries’ positive experience ratings for their health plan were lower than both of these groups. (See fig. 8.) Civilian Providers’ Reported Awareness of TRICARE Has Generally Increased over Time, While Mental Health Providers’ Acceptance of New TRICARE Patients Has Decreased Civilian Providers’ Awareness of TRICARE Has Generally Increased over Time, with Network Providers Reporting Higher Awareness than Nonnetwork Providers Provider awareness over time, by provider type and by location type. Nationwide, a higher percentage of civilian providers reported that they were aware of TRICARE in the 2012-2015 civilian provider survey (84 percent) than those from the 2008-2011 civilian provider survey (82 percent). Specifically, since the previous survey, we found that awareness increased for specialty care providers (from 92 to 94 percent) and mental health care providers (from 68 to 74 percent). In addition, when we analyzed these results by location type, we found that civilian providers in both PSAs and non-PSAs reported higher awareness of TRICARE since the previous survey (from 79 to 82 percent in PSAs and from 87 to 89 percent in non-PSAs). Awareness among civilian providers in locations now designated as former PSAs remained statistically unchanged at the 95 percent confidence level (89 percent in 2012-2015). 
However, despite some increases in awareness, civilian providers in PSAs reported lower awareness than those in non-PSAs and former PSAs in the 2012-2015 surveys. Provider awareness by network status. Providers within the TRICARE network reported higher awareness of TRICARE than nonnetwork providers, regardless of individual provider type. (See fig. 9.) Among individual provider types, the biggest difference in awareness between network and nonnetwork providers was for mental health care providers, with 96 percent of network mental health care providers reporting awareness of TRICARE compared with 72 percent of nonnetwork mental health care providers. Civilian Mental Health Care Providers’ Acceptance of New TRICARE Patients Has Decreased; Network Providers Reported Higher Acceptance than Nonnetwork Providers Provider acceptance over time, by provider type and location type. Nationwide, we found an overall decrease in reported civilian providers’ acceptance of new TRICARE patients in the 2012-2015 civilian provider survey (55 percent) compared to the 2008-2011 civilian provider survey (58 percent). However, when we analyzed acceptance by provider type, we found that the overall decrease was mainly attributable to a decrease in mental health care providers’ acceptance rates, as primary and specialty care providers’ acceptance rates remained unchanged. Specifically, mental health care providers’ TRICARE acceptance rate decreased from 39 to 36 percent. However, this low acceptance rate may not be an issue unique to TRICARE, as we have previously reported that there is a nationwide shortage of mental health professionals. In addition, when we analyzed results for all civilian providers by location type, we found that civilian providers in PSAs and non-PSAs reported lower acceptance rates of new TRICARE patients since the previous survey (from 55 to 53 percent in PSAs, and from 66 to 62 percent in non-PSAs). 
Acceptance among civilian providers in locations now designated as former PSAs remained statistically unchanged at the 95 percent confidence level (60 percent in 2012-2015). Similar to our findings on providers’ awareness, we found that civilian providers in PSAs reported lower acceptance rates than those in non-PSAs and former PSAs.

Provider acceptance by network status. When we analyzed civilian providers’ acceptance of new TRICARE patients by providers’ network status, we found that network providers reported higher acceptance of new TRICARE patients than nonnetwork providers, regardless of provider type. (See fig. 10.) Among individual provider types, the biggest difference in acceptance between network and nonnetwork providers was for mental health care providers, with 79 percent of network mental health care providers reporting acceptance of new TRICARE patients compared with 30 percent of nonnetwork mental health care providers. Of those mental health care providers that were not accepting new TRICARE patients, one of the top reasons reported by those not in the network was a lack of awareness of TRICARE. Due to the relatively small number of network mental health providers who provided reasons for not accepting new TRICARE patients, it was not possible to identify one primary reason; however, some of the reasons they cited included reimbursement, not accepting new patients, and that the specialty was not covered. A Department of Defense official told us that some examples of “miscellaneous” reasons providers gave for not accepting new TRICARE patients are “in a private practice” and “not a preferred provider.”
DOD’s Surveys of Nonenrolled Beneficiaries and Civilian Providers Collectively Indicate That Specific Geographic Locations May Have Access Problems

Our analysis of the 2012-2015 nonenrolled beneficiary and civilian provider surveys indicated that beneficiaries may have difficulty accessing a primary care provider, a specialty care provider, or both in 6 out of the 190 specific geographic locations that were surveyed. For the 6 locations we identified, beneficiaries reported higher levels of problems finding providers, and providers reported lower rates of accepting TRICARE patients.

Primary care. We identified two locations where access to primary care providers may be particularly problematic. (See table 2.) In these two locations, the percent of beneficiaries who reported that they had problems finding a primary care provider was at or above the 2012-2015 beneficiary surveys’ national average of 22 percent, and the percent of primary care providers who reported that they were accepting new TRICARE patients was at or below the 2012-2015 civilian provider surveys’ national average of 68 percent.

Specialty care. We identified five locations where access to specialty care providers may be particularly problematic. (See table 2.) In these five locations, the percent of beneficiaries who reported that they had problems finding a specialty care provider was at or above the 2012-2015 beneficiary surveys’ national average of 23 percent, and the percent of specialty care providers who reported that they were accepting new TRICARE patients was at or below the 2012-2015 civilian provider surveys’ national average of 78 percent.

When we compared this analysis to our analysis of the 2008-2011 beneficiary and provider surveys, the “Dallas/Fort Worth, Texas” HSA was identified in both results.
Using data from the prior survey, our analysis identified it as being potentially problematic for primary care, but using data from the more recent survey, we identified specialty care access as being potentially problematic. DOD officials told us that their past analysis of beneficiaries’ complaints in this location centered on appointment wait times exceeding beneficiaries’ preferences and on drive times to providers’ offices. Officials explained that although there was a wide range of network specialty care providers in this location, TRICARE beneficiaries were a very small percentage of the overall population. Furthermore, this location is home to a number of large corporations that have health care plans that reimburse providers more than TRICARE. DOD officials added that due to these factors, providers in this location do not give preference to TRICARE beneficiaries, and drive times in this location are often long due to the traffic patterns and overall congestion of a large urban area.

Agency Comments

In reviewing a draft of this report, DOD concurred with our overall findings. DOD’s written response is reprinted in appendix I. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. The report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Debra A. Draper at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact: Debra A. Draper at (202) 512-7114 or [email protected].
Staff Acknowledgments: In addition to the contacts named above, Bonnie Anderson, Assistant Director; Jeff Mayhew, Analyst-in-Charge; Amy Andresen; and Jennie Apter made key contributions to this report. Also contributing were Zhi Boon, Jacquelyn Hamilton, Vikki Porter, and Eric Wedum.
Why GAO Did This Study

DOD provides health care, including mental health care, to eligible beneficiaries through TRICARE. Beneficiaries who use TRICARE Prime, a managed care option, must enroll to receive care. Prior to Jan. 1, 2018, beneficiaries did not need to enroll for TRICARE Standard, a fee-for-service option, or TRICARE Extra, a preferred provider organization option (referred to as nonenrolled beneficiaries). Although the TRICARE Standard and Extra options were terminated effective Jan. 1, 2018, the new TRICARE Select option has similar benefits for obtaining care from network and nonnetwork providers. The National Defense Authorization Act (NDAA) for Fiscal Year 2008 directed DOD to conduct surveys of nonenrolled beneficiaries and civilian providers about access to care under the TRICARE Standard and Extra options. It also directed GAO to review the surveys' results. Additionally, the NDAA for Fiscal Year 2017 included a provision for GAO to review access to care under TRICARE Extra. This report addresses both provisions. GAO analyzed DOD's surveys to determine (1) nonenrolled beneficiaries' access to care, (2) nonenrolled beneficiaries' ratings of TRICARE, (3) civilian providers' awareness and acceptance of TRICARE, and (4) nonenrolled beneficiaries' access by individual geographic area. GAO interviewed agency officials, analyzed the 2012-2015 surveys, and compared them to DOD's 2008-2011 surveys and to surveys of Medicare and Medicaid beneficiaries. In commenting on a draft of this report, DOD concurred with GAO's findings.

What GAO Found

The Department of Defense's (DOD) most recent surveys of TRICARE beneficiaries and civilian health care providers show that access to care has generally improved for nonenrolled beneficiaries who used the TRICARE Standard and Extra options.
Specifically, GAO found the following:

Nonenrolled beneficiaries reported improved access to care in the most recent 4-year survey (2012-2015), compared to the prior survey (2008-2011). For example, a lower percentage of nonenrolled beneficiaries reported that they experienced problems finding a civilian provider in the most recent survey (29 percent) than in the prior survey (31 percent). In addition, a higher percentage of nonenrolled beneficiaries (90 percent) reported that they were usually or always able to obtain a non-urgent appointment as soon as they thought they needed one, compared to the prior survey (87 percent).

The percentage of nonenrolled beneficiaries who reported positive experience ratings of TRICARE ranged from 71 to 83 percent over five categories, including ratings of primary, specialty, and mental health care providers. These ratings were generally higher than in the prior survey. When compared to other federal health plans, nonenrolled TRICARE beneficiaries' positive experience ratings of primary and specialty care providers were lower than those of Medicare fee-for-service beneficiaries, but higher than those of Medicaid beneficiaries.

The percentage of civilian providers who were aware of TRICARE increased from 82 percent in the prior survey to 84 percent. However, the percentage who accepted new TRICARE patients decreased from 58 percent to 55 percent. According to GAO's analysis of survey data, this overall decrease was mainly attributable to a decrease in mental health care providers' acceptance rates, as the acceptance rates for primary and specialty care providers remained unchanged.

Network providers reported both higher awareness and acceptance of TRICARE than providers not in the network (referred to as nonnetwork providers).
The biggest gap in both awareness and acceptance between network and nonnetwork providers was for mental health care providers:

About 96 percent of network mental health care providers reported awareness of TRICARE compared to 72 percent of nonnetwork mental health care providers.

About 79 percent of network mental health care providers reported accepting new TRICARE patients compared to 30 percent of nonnetwork mental health care providers.

GAO's analysis of both the beneficiary and provider surveys identified locations in New York, Washington, Texas, and Washington, D.C. where access to providers may be particularly problematic. Specifically, in these locations, beneficiaries reported more problems finding providers who accepted TRICARE and providers reported lower acceptance of TRICARE, compared to national averages.
Background

Medicaid is jointly financed by the federal government and the states. States administer their Medicaid programs within broad federal rules and according to a state plan approved for each state by CMS. CMS issues program requirements in the form of regulations and guidance, approves changes to states’ Medicaid programs, provides technical assistance to states, and conducts other oversight activities. States are responsible for establishing state policies and procedures in accordance with federal requirements. Each state must designate a single state agency to administer its Medicaid program. That agency can delegate programs or functions—such as enrollment in HCBS programs—to other state and local agencies, but is responsible for their supervision. States may provide certain types of HCBS under their state plans. In addition, states may seek permission from CMS to provide HCBS under waivers of traditional Medicaid requirements; for example, in order to provide services to a targeted population or to a limited number of beneficiaries. Both state plans and waivers are developed and proposed by states and must be approved by CMS in order for states to receive federal matching funds for medical expenditures.

Types of Medicaid HCBS Programs and Delivery Systems

Medicaid HCBS cover a wide range of services and supports to help individuals remain in their homes or live in a community setting, such as personal assistance with daily activities, assistive devices, and case management services to coordinate services and supports that may be provided from multiple sources. With approval from CMS, states can provide Medicaid HCBS under one or more types of programs authorized under different sections of the Social Security Act, including several state plan and waiver authorities. (See table 1.) States can have multiple HCBS programs operating under different authorities, and these authorities have distinct features such as different functional eligibility criteria.
For example, some types of Medicaid HCBS programs only serve beneficiaries who are functionally eligible for an institutional level of care; that is, beneficiaries must have needs that rise to the level of care usually provided in a nursing facility, hospital, or other institution. Under some types of HCBS programs, states can tailor their programs to the needs of specific beneficiary populations they choose to target. Common populations that states target with their HCBS programs include the following:

older adults and people with physical disabilities,

people with intellectual or developmental disabilities,

people with addictions or mental illness, and

other populations with specific conditions such as traumatic brain injury or Alzheimer’s disease.

States use different delivery systems to provide Medicaid HCBS, and these may vary across distinct HCBS programs within a state. Historically, states have predominantly provided HCBS using fee-for-service delivery systems in which states pay providers for HCBS rendered to beneficiaries and billed to the state. Alternatively, under managed care long-term services and supports delivery systems, states contract with managed care plans to provide HCBS to beneficiaries and typically reimburse the plans through capitation payments, which are periodic payments for each beneficiary enrolled under the contract. Managed care plans may contract with HCBS providers to provide services to beneficiaries or may provide services directly. A state may use a combination of fee-for-service and managed care delivery systems within or among its HCBS programs. Estimated expenditures on HCBS provided under managed care have grown from $8 billion in fiscal year 2012 to $19 billion in fiscal year 2015.

Assessments of Individuals’ Needs for Medicaid HCBS

Individuals require HCBS because they are limited in their ability to care for themselves due to physical, developmental, or intellectual disabilities, or to chronic conditions.
These services can assist beneficiaries with activities of daily living—basic, personal, everyday activities such as bathing, dressing, and eating—or with instrumental activities of daily living, which are other activities that allow individuals to live independently in the community, such as meal preparation or managing finances. States generally assess a beneficiary’s needs for HCBS based on designated assessment tools—or sets of questions—that assessors use to collect information from sources such as beneficiaries, caregivers, and health records. Examples of this information include the following:

Functional support needs: The need for assistance with activities of daily living or instrumental activities of daily living.

Clinical care needs or medical health concerns: Information on an individual’s health history, active diagnoses, medications, and clinical services (e.g., wound care or dialysis).

Cognitive and behavioral support needs: The loss of memory function, behaviors that pose risks, or adaptive and maladaptive behaviors.

Beneficiaries’ strengths, preferences, and goals.

The needs assessment process may vary across states and distinct HCBS programs within a state, but typically involves the following key steps:

States direct potentially eligible individuals to entities that conduct Medicaid HCBS needs assessments.

An assessor conducts a needs assessment, generally in a face-to-face setting, using a designated assessment tool to collect information based on methods such as interviews with beneficiaries and caregivers, observation, and review of other sources of information needed to determine functional eligibility for services. Additional information relevant for service planning purposes may be included in this needs assessment, or collected in additional assessments that may occur after an individual is determined eligible for HCBS.
Needs assessment results are used to inform determinations of whether an individual meets particular HCBS programs’ functional eligibility requirements.

Needs assessment results for eligible individuals inform the development of a service plan. The service plan includes the type and amount of services to be provided to the beneficiary within state-specified limits. States may use distinct needs assessments for service planning to collect more detailed information or may use the same assessment that was used to determine functional eligibility. (See fig. 1.)

CMS’s goals for HCBS and other Medicaid long-term services and supports include achieving a sustainable and efficient system that provides appropriate services to beneficiaries. Effective needs assessments can help beneficiaries to receive appropriate services to help them live independently and help states manage utilization of services, and therefore costs. An effective assessment process would facilitate efficient use of services and beneficiaries’ access to available services appropriate to their needs by accurately and consistently estimating beneficiaries’ needs. Assessment processes that overestimate needs, underestimate needs, or both, may result in HCBS programs that offer more services than needed or deny eligible beneficiaries access to needed services. (See fig. 2.) There are varied reasons why HCBS needs assessments may not accurately and consistently estimate beneficiaries’ needs. HCBS needs assessments cover complex subject matter and may require assessors to make observations and judgments about beneficiaries’ needs. For example, needs assessments typically address numerous and varied tasks necessary for a beneficiary to live independently, which can be difficult to measure and subject to interpretation—such as a beneficiary’s ability to manage finances. Furthermore, CMS has stated that assessors’ conflicts of interest can influence decisions even without individual assessors realizing this.
Conflicts of interest can arise when an assessor has an incentive for a beneficiary to either over- or under-utilize HCBS, or an incentive to put the needs of assessors ahead of program goals, such as promoting certain HCBS when others may be more beneficial or cost effective. As examples to further illustrate these points, incentives that could result in over- or under-utilization of HCBS include the following, respectively:

On one hand, an assessor may be a provider of the services for which the beneficiary may be eligible or a managed care plan that covers these services, and thus have an incentive to find that individuals need the services or coverage they offer.

Conversely, a managed care plan may have incentives to reduce enrollees’ service utilization in order to reduce costs below the capitation payments that the plan receives to provide care to its enrollees and thus to maximize its profits, which could influence needs assessments used for service planning.

Selected States Used Multiple Tools and Entities to Conduct Needs Assessments across Distinct HCBS Programs

Selected States Varied in the Extent to Which Needs Assessment Tools Were Tailored to Distinct HCBS Programs and Assessment Purposes

Each of the six selected states we reviewed used varied needs assessment tools across their distinct Medicaid HCBS programs, for which both functional eligibility criteria and amount of services available to beneficiaries can differ widely. The selected states varied in the extent to which their needs assessment tools were either tailored to a single Medicaid HCBS program or used across multiple, though not necessarily all, HCBS programs in the state. The selected states also varied in the extent to which the same or different needs assessment tools were used for different purposes, such as determining functional eligibility and developing a service plan:

Connecticut.
State officials reported that the state was in the process of piloting a uniform needs assessment tool that it planned to use for all but one of the Medicaid HCBS programs in the state. This needs assessment tool was used both to determine functional eligibility and to develop beneficiary service plans.

Kentucky. State officials reported that the state had implemented a new needs assessment tool for one Medicaid HCBS Waiver program while continuing to use previous tools for other Medicaid HCBS Waiver programs. The same assessment was used for determining functional eligibility and for developing the service plan. In selecting and adapting the new tool, officials said that they considered the assessment needs of the other Medicaid HCBS waiver programs, because they would ultimately like to use only one assessment tool across all HCBS Waiver programs.

Minnesota. State officials reported that the state had designed a uniform needs assessment tool for use across all HCBS programs in the state and had implemented it for most programs. The uniform assessment tool was used to determine functional eligibility for all HCBS programs in the state for which it was implemented and was also used to inform the development of service plans.

New York. State officials reported that the state had implemented a set of needs assessment tools, referred to as a uniform assessment system, for use across multiple HCBS programs. The same uniform assessment system was used both to determine functional eligibility and to inform development of beneficiary service plans.

North Carolina. At the time of our review, officials described generally using different assessment tools for the separate HCBS programs in the state. State officials reported that the state had developed a new needs assessment tool for one Medicaid HCBS Waiver program, and that they planned to expand use of this tool to another program in the future.
The state used different needs assessments to determine functional eligibility and for service planning in its HCBS Waiver programs.

Washington. State officials reported that a uniform assessment system was used across HCBS programs in the state. The system was composed of multiple needs assessment components. One version of the assessment was used for HCBS programs serving individuals with intellectual and developmental disabilities, and a different version was used for all other programs. For all HCBS programs, the same needs assessment system was used to determine functional eligibility and to develop the service plan.

Selected States Used Different Types of Entities to Conduct Needs Assessments, Including State Agencies, Local Public Agencies, Contractors, HCBS Providers, and Managed Care Plans

All six states we studied reported using more than one type of entity to conduct needs assessments for HCBS programs. For example, New York used five different types of entities, North Carolina used four different types of entities, and the remaining four states used two or three types of entities to conduct needs assessments. State agencies, local public agencies, and independent contractors were used by four states to conduct needs assessments for at least one HCBS program. All states but one, Washington, used HCBS providers or managed care plans to conduct needs assessments (see table 2). The types of entities that conduct needs assessments in the selected states varied across distinct HCBS programs, or for distinct needs assessment purposes within a single HCBS program. States may use multiple types of entities to conduct needs assessments because of differences in how particular HCBS programs were delivered.
For example, the entities used in Minnesota varied by delivery system—the state reported that it used local public agencies to conduct needs assessments for all Medicaid HCBS programs other than its managed care HCBS program, for which managed care plans conducted needs assessments. In other states, different entities conducted needs assessments within the same HCBS programs depending on the purpose of the assessment. For example, because managed care plans may have a financial interest in eligibility determinations, New York began by July 2015 to use an independent contractor exclusively to conduct needs assessments to determine functional eligibility for HCBS for new enrollees in its managed care HCBS program. Once individuals were determined eligible for managed care HCBS, the managed care plans conducted the same assessment a second time in order to develop beneficiary service plans.

Selected States Varied in Whether Formulas Were Used to Inform Functional Eligibility Determinations and Service Planning Decisions

The six selected states also varied in whether they used formulas based on information collected using Medicaid HCBS needs assessment tools to inform key functional eligibility and service planning decisions. States may use such formulas as a means of meeting goals of consistent treatment of individuals based on needs. In making functional eligibility determinations, five of the six selected states—Connecticut, Minnesota, New York, North Carolina, and Washington—reported using a formula to compare the results of completed needs assessments to eligibility criteria for at least one of the HCBS programs in the state. For example, for specific HCBS programs, the assessment tool may compile results of certain assessment questions into a score that indicates whether or not the beneficiary is considered to have a need for an institutional level of care, which is required in order to be functionally eligible for some types of HCBS programs.
For service planning purposes, four states—Connecticut, Minnesota, North Carolina, and Washington—reported that in at least one of these states’ distinct HCBS programs, the assessment tools included formulas. These formulas specified a particular amount of services or guided a potential range of service amounts for beneficiaries based on the results of particular assessment questions. (See table 3.) Professional judgment may also be used in conjunction with formulas. For example, when formulas are used to specify particular service levels based on the needs assessment results, they may specify a number of service hours or a service budget. In either case, other factors may also be considered in some circumstances. State officials in one state described the use of formulas to allocate services as an initial step prior to the detailed person-centered service planning process. For example, in Minnesota a state formula specifies a certain number of hours of personal care services partly based on the level of need for assistance with activities of daily living such as eating, bathing, and toileting. However, the beneficiary and the entity responsible for the service planning process determine the specific services to prioritize within the overall number of hours available, and they may decide to use the authorized hours toward covered services that were not necessarily part of the formula, such as instrumental activities of daily living. In contrast, in states or HCBS programs that did not utilize formulas to specify or guide a particular amount of services based on assessment results, the amount of services may be determined—within the scope of service limits applicable to the particular HCBS program—by the entity responsible for working with the beneficiary on the service planning process.
Selected States Took Steps to Unify Needs Assessment Processes and Increase Consistency

Selected States Made Efforts to Make Needs Assessment Processes More Uniform across Distinct HCBS Programs and Noted Benefits and Challenges

The six selected states reported taking steps to unify needs assessment processes across Medicaid HCBS programs as a means of meeting goals such as improving the efficiency and effectiveness of assessments. Specifically, states reported taking steps to implement assessment tools for use across multiple Medicaid HCBS programs in the state. Four states—Connecticut, New York, Minnesota, and Washington—had adopted or were piloting needs assessment tools that were used across multiple state Medicaid HCBS programs (though not necessarily all such programs in the state) rather than completing separate needs assessments for each separate program. In addition, Kentucky and North Carolina had recently implemented new tools for specific Medicaid HCBS programs that would be considered for use in additional HCBS programs in the future. Important benefits to beneficiaries and HCBS programs have resulted from efforts to coordinate needs assessment processes by using a uniform assessment across distinct HCBS programs, according to state officials and advocates. For example:

State officials and advocates described that using a uniform assessment tool to determine functional eligibility for multiple state HCBS programs resulted in benefits and efficiencies for beneficiaries. Officials and advocates in Minnesota said that the uniform assessment process allowed beneficiaries to connect with the program best suited to their needs, even if they may not have otherwise been aware of it when initially seeking assistance. For example, officials said that families of children with autism may apply for personal care services, but may benefit more from being connected to another HCBS program that is available and designed to support the children’s specific needs.
Similarly, officials in Connecticut said that uniform assessment across HCBS programs allows beneficiaries to access the services that are most appropriate without multiple assessments. For example, if an individual applies for a particular HCBS program but a separate program would be more appropriate, a second assessment is not necessary.

Connecticut, Washington, and New York officials described how uniform assessment tools allowed consistent information to be shared with care providers or when beneficiaries transitioned between care settings. This, in turn, could allow care providers to better manage beneficiary care.

State officials reported that uniform assessment tools can result in better-informed program management and policy decisions because they allow for the collection of consistent information across HCBS programs. For example, officials from Connecticut and Washington described how comparable assessment information could inform equitable policies for allocating services. Washington officials described using information about the extent of beneficiary needs to inform decisions about how many program staff were needed.

Kentucky officials described how a more uniform assessment process helped them become aware of when beneficiaries were receiving services from multiple different state-funded, non-Medicaid HCBS programs.

States and advocates also reported challenges, including inefficiencies, to using uniform assessments under certain circumstances, such as when states have different criteria for functional eligibility across their different HCBS programs, or when different beneficiary populations have different assessment needs. For example:

Minnesota officials reported that beneficiaries may need to address multiple versions of similar eligibility-related questions in the state’s uniform assessment tool.
This was due to the decision to incorporate each HCBS program’s previously separate functional eligibility questions into the tool to avoid changes in the information used to determine eligibility.

Beneficiary advocates in three states expressed concerns with the use of assessments designed for a particular population on a different population, such as using assessments designed for adults to assess the needs of children. Officials from Kentucky also noted concerns about using assessments across distinct populations as part of the reason the state was not using a single assessment tool.

State officials and advocates also reported that uniform assessments resulted in lengthier assessment question sets that take longer to complete for both the assessor and the beneficiary.

Selected States Reported Efforts to Increase Consistency in How Needs Assessments Were Conducted and Used, but Balancing Consistency with Flexibility Was a Concern

Selected states reported making efforts to improve their assessment processes to increase consistency in how assessors conduct HCBS needs assessments. These efforts included using structured questions and emphasizing training to ensure individual assessors approached the assessment questions consistently and according to policy, and addressing potential conflicts of interest by using independent assessors rather than HCBS providers and managed care plans to conduct certain needs assessments. States’ improvement efforts included the following:

Structured questions. Officials from five states described how structured approaches to assessment questions could improve the consistency of the assessment results, which are used to make functional eligibility and service planning decisions. Examples of structured questions that state officials described included questions that limited responses to a specific time period—such as the past 7 days—when assessing needs, and questions that used a standard scale for responses.

Assessor training.
Officials from four states reported focusing on assessor training to improve consistency. For example, North Carolina officials reported that determinations of need for personal care services were improved after training. In the training, assessors were taught to comply with a state policy to ask that beneficiaries demonstrate need for assistance with activities of daily living, such as mobility, rather than solely asking them questions about their needs.

Independent needs assessments. Officials from three of the selected states—New York, North Carolina, and Kentucky—reported that needs assessments were improved by removing entities that had a financial interest in assessment results from conducting certain assessments. For example, Kentucky officials reported that using independent assessors rather than HCBS providers enhanced consistency because HCBS providers may skew beneficiaries’ assessment results to generate demand for their services. They noted that providers had resisted their removal from the process.

Three of the six selected states reported that using a formula to summarize assessment results increased the consistency with which functional eligibility determinations or decisions about the amount of services to provide were made based on each individual’s assessment results. For example, officials from Washington reported that after implementing a formula to generate an overall classification of need, the amount of service hours authorized for beneficiaries was distributed more equitably and evenly across a continuum from minimum to maximum, rather than beneficiaries almost always receiving the maximum number of hours allowed under program limits. This could allow limited resources to be allocated more consistently across beneficiaries with similar levels of need.
Officials from Connecticut similarly reported that during testing of a formula that was planned for use to specify the amount of service to provide, they had identified beneficiaries receiving more services than would be indicated by the formula based on their assessed needs. While officials reported that these efforts enhanced consistency of eligibility determinations and service authorization decisions, state officials and advocates also acknowledged challenges related to balancing consistency with flexibility in arriving at decisions—particularly with respect to the use of formulas for service allocation. The different approaches of relying on a formula or relying on the judgment of individual entities each presented its own challenges:

In two states where a formula was used to specify or guide the amount of services to provide, advocates raised concerns that the indicated amount did not adequately address the needs of some individuals. For example, advocates noted that the results of a lengthy and nuanced assessment tool were ultimately reduced to a single score in order to inform a particular budget for services. While this score might reflect the average needs of beneficiaries with similar assessment results, it did not adequately convey the individualized needs of some beneficiaries, according to the advocates.

On the other hand, there were concerns that relying on entities’ judgment resulted in inconsistency across beneficiaries. Advocates in three states raised concerns about inconsistent decisions across managed care plans or geographic areas, or over time, when determinations of functional eligibility or the amount of services to provide were not based on state-determined formulas.
In one state, state officials and advocates noted that these concerns were addressed by using formulas to allocate services but allowing beneficiaries to use an alternative assessment process in certain circumstances or receive “exceptions” to the amount of service authorized by the state’s formula based on individual circumstances. Beneficiary advocates also emphasized that the amount of services authorized for beneficiaries may reflect the scope of available services rather than the needs of an individual beneficiary. To the extent that a given HCBS program has limited resources for providing services, assessment results may be used to allocate resources within those limitations rather than to estimate the amount of services that would fully meet needs. For example, an assessment formula in Washington is designed to specify service amounts based on beneficiaries’ identified levels of need, and the amounts available for particular levels of need may increase or decrease based on the state budget. State officials in Connecticut also said that because funding can vary for different HCBS programs within a single state, moving to a consistent formula for analyzing assessment results may shed light on the extent to which beneficiaries with similar levels of need receive different levels of services depending on available program resources.

CMS Has Taken Steps to Make HCBS Needs Assessment Processes More Effective, Uniform, and Free from Conflict of Interest, but Some Concerns Remain Unaddressed

Two CMS Programs Have Sought to Make Assessment Processes More Effective and Uniform within and across States

CMS has implemented two key programs that facilitate state efforts to make their HCBS needs assessment processes more uniform, among other goals. One of these is called Testing Experience and Functional Tools (TEFT) and is designed, in part, to test the effectiveness of a set of specific questions that states can use to conduct needs assessments.
CMS designed the TEFT assessment questions for use across multiple HCBS beneficiary populations, including beneficiaries (1) of advanced age, or with (2) intellectual or developmental disabilities, (3) physical disabilities, (4) serious mental illnesses, or (5) traumatic brain injuries. The assessment questions being tested are limited to needs that may be relevant across these populations and do not assess needs that may apply to only certain populations; for example, questions to assess cognitive status may apply to those with intellectual or developmental disabilities or other conditions, but do not apply to those with physical disabilities only. CMS announced TEFT in 2012, and six states received grants to test needs assessment questions for their effectiveness, which includes their validity (defined as accuracy in measuring individuals’ functional abilities) and reliability (defined as the consistency of results across assessors). Three of these six states were among those we selected for this review: Connecticut, Kentucky, and Minnesota. Officials in these states told us that they had not completed field testing the TEFT questions, and officials in two of these states (Connecticut and Minnesota) said they would consider the option of using TEFT questions in their assessments in the future. CMS officials told us that CMS plans to make the assessment questions they determine to be valid and reliable available to all states in the spring of 2018. Another key program that CMS has implemented is the Balancing Incentive Program, which was authorized by the Patient Protection and Affordable Care Act in 2010 to provide incentives for eligible states to rebalance their long-term services and supports systems toward more home- and community-based care. Among other things, this program required participating states to collect information on specific topics related to beneficiaries’ needs, but allowed states to choose the needs assessment questions.
Under this program, states could use different assessment tools to gather information for HCBS programs serving different populations as long as the states used tools that collected information on 26 key topics that spanned five broad areas, or domains. The five domains were (1) activities of daily living, (2) instrumental activities of daily living, (3) medical conditions/diagnoses, (4) cognitive functioning, memory, and learning, and (5) behavior concerns (e.g., injurious, uncooperative, or destructive behavior). The requirement to collect information from these five domains for each beneficiary population was designed to promote consistency in determining beneficiaries’ needs across HCBS programs, while allowing states to tailor their assessment processes to specific beneficiary populations, according to CMS officials. For example, New York reported collecting information on the required topics using a suite of six assessment tools that varied to reflect differences in beneficiaries’ age, population, and other factors. The Balancing Incentive Program ended in 2015, although some states were provided extensions to carry out planned activities. Of 20 participating states evaluated, 18 successfully carried out the requirement to incorporate the 26 key topics in their needs assessments, according to a program evaluation prepared for the HHS Assistant Secretary for Planning and Evaluation. In addition, CMS has provided information and lessons learned from the Balancing Incentive Program to all states via its website and, according to CMS officials, has done several related presentations. While CMS does not have plans to conduct additional evaluations of assessment tools used by participating states, CMS officials told us that there would be some value to doing so and they may consider it in the future. 
CMS Has Taken Steps to Improve Effectiveness by Addressing the Potential for Conflicts of Interest, but These Steps Do Not Address All Types of Programs or Conflicts

CMS has sought to improve HCBS needs assessments by addressing concerns about the potential for conflicts of interest that HCBS providers and managed care plans may have in conducting assessments. As previously noted, HCBS providers may have a financial interest in providing services that could potentially lead to over-utilization of services, while managed care plans may have a financial interest in increasing enrollments and reducing enrollees’ service utilization.

Addressing HCBS Providers’ Potential for Conflicts of Interest

CMS has taken steps to address conflicts of interest that may occur when HCBS providers conduct needs assessments, but gaps remain. The Balancing Incentive Program, which ended in 2015, required the 21 participating states to either separate HCBS provision from needs assessment processes or to take steps to mitigate the potential for conflicts of interest that occur when HCBS providers conduct assessments. In addition, CMS implemented regulations requiring all states to establish standards for conducting needs assessments that address certain potential conflicts for particular types of HCBS programs. The specific requirements may differ by program and by whether the assessment is used to determine functional eligibility or develop service plans:

For example, for State Plan HCBS—a relatively small program that accounted for less than 1 percent of estimated Medicaid HCBS expenditures in fiscal year 2015—states are required to establish conflict-of-interest standards that address both (1) evaluation of eligibility, and (2) needs assessments used to develop service plans. These standards must prohibit HCBS providers from conducting eligibility evaluations and needs assessments for this program, with certain exceptions in which the potential for conflict of interest must be mitigated.
Under the HCBS Waiver, Community First Choice, and Self-Directed Personal Assistant Services programs—which collectively accounted for 60 percent of estimated expenditures for Medicaid HCBS in fiscal year 2015—states are required to establish standards that generally prohibit HCBS providers from conducting assessments of need used to develop service plans, but this requirement does not apply to assessments that states may use to determine functional eligibility. In addition, for State Plan Personal Care Services programs and other HCBS authorized under Section 1905(a) of the Social Security Act—which collectively accounted for 29 percent of estimated Medicaid HCBS expenditures in fiscal year 2015—regulations do not specifically limit HCBS providers from conducting assessments that states may use to determine eligibility or develop service plans. As a result of these differences in requirements across HCBS authorities, there are gaps in federal conflict-of-interest requirements applicable to entities that may conduct needs assessments. For example, several types of HCBS programs have specific requirements for states to establish standards to address potential conflicts of interest when HCBS providers conduct needs assessments that are used for service planning, but there are no equivalent requirements for State Plan Personal Care Services programs. (See table 4.) In addition, HCBS providers may conduct certain needs assessments that inform HCBS functional eligibility determinations, but specific conflict-of-interest requirements are generally not in place for such assessments. With respect to gaps in requirements specific to needs assessments that are used to inform functional eligibility determinations, CMS officials suggested that state agencies’ responsibility for making final eligibility determinations addresses conflict-of-interest concerns.
Specifically, officials noted that CMS regulations require state agencies to determine eligibility, and that, in doing so, states may consider needs assessments conducted by assessor entities as well as information from other sources. However, states may vary in the extent to which they consider information from other sources. In addition, it is unclear how the requirement that the state maintain responsibility for eligibility determinations addresses potential conflicts of interest when an HCBS provider conducts a needs assessment upon which a determination of eligibility for HCBS may be based. Gaps in requirements to address the potential for conflicts of interest when HCBS needs assessments are conducted by HCBS providers are not consistent with federal internal control standards, which require federal agencies to identify, analyze, and respond to risks related to achieving defined objectives. While CMS has a goal of achieving an effective long-term services and supports system that provides appropriate services to beneficiaries, because the agency does not require states to address the potential for HCBS providers’ conflicts of interest in conducting needs assessments under all HCBS authorities, there is a risk that some states may rely on HCBS providers to conduct assessments without addressing HCBS providers’ financial incentives, which can lead to over-utilization of HCBS. Examples among our case study states include the following:

North Carolina: A program integrity review conducted by CMS in North Carolina found that the state’s transition to the use of an independent entity to conduct needs assessments for the State Plan Personal Care Services Program—rather than relying on HCBS providers to assess beneficiary needs—was followed by both a reduction in the number of beneficiaries using the program and a 30 percent reduction in average monthly expenditures.
This suggests the program may have been over-utilized before the independent entity was used to conduct needs assessments. CMS highlighted this use of an independent entity as a practice that merits consideration from other states.

Kentucky: State officials told us that when they transitioned to the use of independent assessors, they also identified apparent instances of over-utilization that were occurring before they implemented independent assessments and other program changes. For example, officials said that when testing a new assessment tool using independent assessors, they identified individuals who had a low level of needs and who did not appear to require an institutional level of care, as required for program eligibility, but who had been assessed at that level in the past.

Addressing Managed Care Plans’ Potential for Conflicts of Interest

Conflict-of-interest concerns also exist for states with managed care HCBS programs where managed care plans conduct assessments. CMS has taken separate steps to address these concerns, including issuing guidance and new regulatory requirements. CMS issued guidance in May 2013 that addressed best practices and CMS’s expectations of new and existing managed long-term services and supports programs, which include managed care HCBS. The guidance stated that managed care plans may not be involved in any HCBS functional eligibility determinations or needs assessment processes prior to a beneficiary’s enrollment in the plan. CMS officials told us that allowing managed care plans to assess individuals before enrollment without proper oversight by the state may provide an opportunity for plans to selectively enroll individuals who require less HCBS. Despite this risk, we found that CMS does not always take steps to ensure that states have procedures in place to guard against this practice prior to approving their programs.
CMS officials told us that they evaluate state programs individually and may not apply all of the detailed concepts in the guidance when developing state-specific requirements for managed care HCBS programs. CMS’s application of the guidance in the three selected states with managed care HCBS programs varied across types of HCBS programs. Examples from 1115 Demonstration and HCBS Waiver programs for our case study states include the following:

1115 Demonstration programs: Of the six states we selected for this review, one—New York—operated a managed care HCBS program authorized by an 1115 demonstration. Prior to July 2015, New York used managed care plans to assess and determine individuals’ functional eligibility for certain HCBS programs. One managed care plan admitted to enrolling 1,740 individuals in managed care HCBS whose needs did not qualify them for the program from January 2011 to September 2013, and it resolved allegations that it had submitted false claims for Medicaid HCBS in a $35 million settlement with the U.S. Department of Justice. In 2013, CMS amended the terms and conditions of New York’s demonstration to require the state to use an independent assessor entity to both conduct needs assessments and determine eligibility for managed care HCBS, and New York has contracted with an independent assessor to carry out these functions. While this requirement applied specifically to New York, it does not necessarily apply to other states, as CMS’s terms and conditions for 1115 demonstrations can vary across states. According to CMS, an additional 11 states had managed care HCBS programs approved under 1115 demonstrations as of July 2017. However, CMS officials told us that they did not have information on whether these 11 states were using managed care plans to conduct needs assessments for the purpose of determining individuals’ functional eligibility.
HCBS Waiver programs: Two of our six selected states—Minnesota and North Carolina—used managed care plans to deliver services for HCBS Waiver programs. In these states, CMS approved HCBS Waiver applications that proposed to use managed care plans to conduct or evaluate needs assessments used to determine functional eligibility for the programs, contrary to CMS’s May 2013 guidance. CMS officials said that when states allow managed care plans to be involved in these assessments, CMS would expect states to provide oversight as part of their quality improvement strategies required under HCBS Waivers. However, CMS does not require states to provide assurances or evidence of oversight directly related to managed care plans’ potential for conflicts of interest when plans are involved in needs assessments that states use to determine functional eligibility. CMS officials told us that states that do allow managed care plans to conduct assessments used to determine eligibility for HCBS should be aware of the potential for conflicts of interest in order to provide adequate oversight. CMS officials also told us that they engage in a conversation about oversight of the assessment process when they learn of a state that allows this practice. However, CMS does not collect complete information on which states use managed care plans for needs assessments prior to enrollment, and states may not implement precautions absent a specific CMS requirement to address the potential for these conflicts of interest. The absence of requirements for states to address acknowledged risks is not consistent with federal internal control standards that require federal agencies to identify, analyze, and respond to risks related to achieving defined objectives.

Developing Service Plans and Determining the Amount of HCBS to Provide

Separate concerns pertain to managed care plans’ involvement in HCBS needs assessments for service planning purposes that are conducted by plans after enrollment.
Advocates in two of the three selected states with managed care HCBS programs, New York and North Carolina, expressed concerns about managed care plans’ incentives to reduce their costs by reducing enrollees’ HCBS service levels, leading to reduced access to needed HCBS. For example, advocates in New York highlighted the growth in fair hearings that enrollees initiated to dispute reductions in HCBS they receive, which can result from inaccurate needs assessments. In May 2016, in the preamble to a final rule that amended managed care regulations, CMS responded to concerns from commenters about managed care plans’ involvement in the needs assessments used to develop service plans by stating that managed care plans’ HCBS needs assessments of enrollees are a critical component of the plans’ efforts to manage enrollees’ care. CMS also noted that existing appeals processes, which are similar to fair hearings, provide adequate safeguards to address instances when enrollees believe their needs assessments do not reflect their true needs. However, according to CMS, beneficiaries enrolled in managed long-term services and supports are among the most vulnerable and often require enhanced protections to assure their health and welfare. To implement additional beneficiary protections, the May 2016 managed care regulations require states with managed care HCBS programs to implement a beneficiary support system. A beneficiary support system generally provides individuals with education and assistance related to appeals, grievances, and fair hearings, and assists states with the identification and resolution of systemic issues through review and oversight of program data. These regulations also require states to report annually on the activities and performance of these systems in order to drive continual improvements. 
CMS stated that reporting requirements of this nature would help the agency address fragmented program information about state managed care programs and help improve oversight efforts. However, as of September 2017, CMS had not issued guidance to states on the content and form of this reporting, and under the regulations, states are not required to submit reports until CMS issues such guidance. CMS officials told us they were unsure whether they would issue this guidance, and thus it is unclear whether and when the reporting requirement will take effect. We previously made a recommendation to CMS that pertains to this issue. Specifically, in a report published in August 2017, we identified similar concerns with the lack of requirements for state managed long-term services and supports programs to report information that CMS needs to adequately oversee states’ programs for ensuring beneficiary access to services. We found that existing state reporting did not always include key elements necessary for CMS to monitor certain key aspects of beneficiaries’ access and quality of care, including data related to appeals and grievances. We recommended that CMS improve its oversight of managed long-term services and supports by taking steps to identify and obtain key information needed to oversee states’ efforts to monitor beneficiary access to quality services. HHS concurred with this recommendation and stated that the agency will take this recommendation into account as part of an ongoing review of the 2016 managed care regulations. This action could help to address the concerns discussed above regarding managed care plans’ potential for conflicts of interest in conducting needs assessments for service planning purposes.

Conclusions

HCBS needs assessments can directly affect whether individuals are eligible to receive HCBS and the amount of services they receive.
Given the growth in spending for Medicaid HCBS and the potential vulnerability of individuals seeking HCBS, it is critical that needs assessments are effective in ensuring that beneficiaries receive the help they need to live independently while at the same time reducing the risk of over-utilization of HCBS. CMS plays an important role in ensuring that states appropriately assess the needs of those seeking HCBS, including addressing the potential for entities that conduct needs assessments to have conflicts of interest. Conflicts of interest can result in inaccurate assessments, potentially leading to provision of unnecessary services or restricting other beneficiaries’ access to needed services. CMS has required states to take actions to avoid or mitigate the potential for conflicts of interest for some HCBS programs, and states that have taken steps to protect against conflicts of interest in HCBS programs have reported improvements; however, we found gaps in federal requirements for such safeguards. These gaps in requirements are inconsistent with federal internal control standards that require federal agencies to identify, analyze, and respond to risks related to achieving defined objectives. CMS could improve the efficiency and effectiveness of Medicaid HCBS programs by taking additional steps to consistently require all types of states’ programs to avoid or mitigate the potential for conflicts of interest in conducting needs assessments, as appropriate.

Recommendation for Executive Action

The Administrator of CMS should ensure that all types of Medicaid HCBS programs have requirements for states to avoid or mitigate potential conflicts of interest on the part of entities that conduct needs assessments that are used to determine eligibility for HCBS and to develop HCBS plans of service. These requirements should address both service providers and managed care plans conducting such assessments.
(Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for review and comment, and HHS provided written comments, which are reprinted in appendix I. HHS also provided technical comments, which we incorporated as appropriate. HHS concurred with our recommendation to ensure that all types of Medicaid HCBS programs have requirements for states to avoid or mitigate potential conflicts of interest on the part of entities that conduct needs assessments. HHS stated that it has a regulatory structure in place to protect against potential conflicts of interest on the part of entities responsible for determining eligibility for HCBS and developing service plans. As described in our report, however, there are gaps in required conflict-of-interest standards applicable to entities that conduct needs assessments that inform HCBS eligibility determinations. Further, the conflict-of-interest requirements related to service plans do not apply to all programs, such as State Plan Personal Care Services programs. Developing additional requirements in response to such gaps would further improve efficiency and effectiveness. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Health and Human Services

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tim Bushfield (Assistant Director), Emily Beller Holland, Anne Hopewell, Laurie Pachter, Chris Piccione, Vikki Porter, Russell Voth, and Jennifer Whitworth made key contributions to this report.
Why GAO Did This Study

With approval from CMS, the federal agency responsible for overseeing state Medicaid programs, states can provide long-term care services and supports for disabled and aged individuals under one or more types of HCBS programs. State and federal Medicaid HCBS spending was about $87 billion in 2015. Effective needs assessments help states ensure appropriate access to, and manage utilization of, services and therefore costs. States' processes vary, and challenges include the potential for assessors to have conflicts of interest leading to over- or under-estimating of beneficiaries' needs for HCBS. GAO was asked to examine states' needs assessment processes for provision of long-term services and supports. This report addresses (1) how selected states assess needs for HCBS, and (2) steps CMS has taken to improve coordination and effectiveness of needs assessments, among other objectives. GAO studied six states that varied in terms of assessment tools in use, participation in federal initiatives, HCBS delivery systems, and geographic location; reviewed federal requirements and documents; and interviewed CMS officials and stakeholders.

What GAO Found

The six selected states that GAO reviewed used multiple approaches to assess individuals' needs for Medicaid home- and community-based services (HCBS). Each state may have multiple HCBS programs authorized under different sections of the Social Security Act. These programs serve beneficiaries who generally need assistance with daily activities, such as bathing or dressing. States establish needs assessment processes to collect data on functional needs, health status, and other areas that they use to determine individuals' eligibility for HCBS and to plan services, such as the amount of services needed.
The selected states varied in the extent to which they used different assessments across HCBS programs and used multiple types of entities—such as state or government agencies, contractors, or providers—to conduct them. The Centers for Medicare & Medicaid Services (CMS) has taken steps to improve needs assessments, but concerns about conflicts of interest remain with regard to HCBS providers and managed care plans. HCBS providers may have a financial interest in the outcome of needs assessments, which could lead to overstating needs and overprovision of services. CMS has addressed risks associated with HCBS provider conflicts, such as by requiring states to establish standards for conducting certain needs assessments, but these requirements do not cover all types of HCBS programs. For example, specific conflict-of-interest requirements are generally not in place for needs assessments that are used to inform HCBS eligibility determinations. In addition, requirements for states to establish standards to address HCBS providers' potential for conflicts of interest in conducting needs assessments that are used for service planning do not apply across all programs. Similarly, managed care plans may have a financial interest in the outcome of HCBS assessments used for both determining eligibility and service amounts. Managed care plans could have an incentive to enroll beneficiaries with few needs, as plans typically receive a fixed payment per enrollee. For example, a plan in one state admitted, in a settlement with the federal government, to enrolling 1,740 individuals from 2011 through 2013 whose needs did not qualify them. In 2013, CMS issued guidance that managed care plans may not be involved in assessments used to determine eligibility for HCBS, but CMS has not consistently required states to prevent this involvement.
Among three states GAO reviewed with managed care HCBS programs, CMS required one to stop allowing plans to conduct such assessments but allowed plan involvement in two states. The absence of conflict-of-interest requirements across all types of HCBS programs and states is not consistent with federal internal control standards, which require agencies to respond to risks to program objectives.

What GAO Recommends

GAO recommends that CMS ensure that all Medicaid HCBS programs have requirements for states to address both service providers' and managed care plans' potential for conflicts of interest in conducting assessments. HHS concurred with GAO's recommendation.
Background

Overview of Peacekeeping Operations

In accordance with the UN Charter, peacekeeping operations aim to maintain international peace and security, among other things. The UN has deployed 71 peacekeeping operations since 1948. As of December 2018, the UN had 14 active peacekeeping operations worldwide (see fig. 1). We have previously reported that UN peacekeeping operations have become more complex since 1998. Traditional UN peacekeeping operations were primarily military in nature and limited to monitoring cease-fire agreements and stabilizing situations on the ground while political efforts were made to resolve conflicts. More recently, in response to increasingly complex situations in which conflicts may be internal, involve many parties, and include civilians as deliberate targets, several UN peacekeeping operations deploy civilian and police personnel, in addition to those from the military, and focus on peacebuilding activities.

Key UN Components in Establishing UN Peacekeeping Operations

There are three principal UN bodies active in peacekeeping:

The General Assembly, which consists of 193 member states that work through membership in one of six main committees and various subsidiary components tasked with specific issue areas.

The Security Council, which has 15 members, including 5 permanent members with veto power: the United States, the United Kingdom, France, Russia, and China. The remaining 10 members of the Security Council are elected for 2-year terms to ensure geographical representation.

The Secretariat, which comprises the administrative component of the UN and is led by the Secretary-General, who has responsibility for managing multiple UN departments, offices, and activities.

The United States holds positions in two of these three components—the General Assembly and the Security Council. See table 1 for more information.
The United States’ Role in UN Peacekeeping

State’s Bureau of International Organization Affairs (State/IO) and the USUN serve primary roles with regard to the UN. State/IO is the U.S. government’s primary interlocutor with the UN and other international organizations, and is charged with advancing U.S. national interests through multilateral engagement on a range of global issues, including peace and security, nuclear nonproliferation, human rights, economic development, climate change, and global health. The USUN serves as the United States’ delegation to the UN and is responsible for carrying out U.S. participation in the organization. The USUN represents the United States’ political, legal, military, and public diplomacy interests at the UN. As part of its oversight of UN peacekeeping operations, State/IO conducts annual monitoring trips to most UN peacekeeping operations and documents the findings of these trips in Mission Monitoring and Evaluation reports. These reports summarize State/IO’s evaluation of each peacekeeping operation’s progress toward meeting its mandate and identify challenges the operation faces in doing so. State/IO summarizes the findings of these reports for the National Security Council in a U.S. strategy and priorities memorandum that includes recommendations for U.S. action, including how the United States should conduct negotiations and vote on upcoming renewals of the mandates that authorize peacekeeping operations. According to State, the National Security Council conducts an interagency policy formulation process based on this input. Other U.S. government entities also support UN peacekeeping operations. For instance, State’s Bureau of Political-Military Affairs and Bureau of International Narcotics and Law Enforcement Affairs provide capacity-building support for troops and police from troop- and police-contributing countries, respectively, serving in UN peacekeeping operations.
Additionally, the Department of Defense participates in UN peacekeeping operations by providing UN forces with equipment, personnel, and other support services.

The United States’ Principles of Effective Peacekeeping

In April 2017, during a Security Council meeting on peacekeeping, the U.S. Permanent Representative to the UN outlined five principles that the United States believes are critical for effective peacekeeping. She remarked that, while peacekeeping is the UN’s most powerful tool to promote international peace and security, there is room for improvement, citing examples of operations that no longer need to exist or have limited host country consent. To make peacekeeping operations more effective, she emphasized that the UN should identify operations that lack the underlying political conditions for a resolution to the conflict, noting that numerous studies have concluded that such conditions are central to an operation’s success. To guide this process, she announced a set of five principles to which peacekeeping operations should be held:

1. Peacekeeping operations must support political solutions to conflict.
2. Operations must have host country consent.
3. Mandates must be realistic and achievable.
4. There should be an exit strategy, which would articulate the Security Council’s agreement on what success looks like and how to achieve it.
5. The Security Council should be willing to adjust peacekeeping mandates when situations improve or fail to improve.

Since the Permanent Representative’s announcement of these principles, State/IO has included an assessment of each peacekeeping operation against these principles in the U.S. strategy and priorities memoranda that it prepares for the National Security Council. With regard to the fifth principle, in these memoranda, State/IO assesses whether and how a mandate itself should be changed, rather than assessing the Security Council’s willingness to change the mandate.
Officials indicated that they conduct their assessment in this manner in order to inform and establish the U.S. negotiating position.

The UN Security Council Establishes and Renews Peacekeeping Operations, Which Conduct a Range of Tasks

Working with UN Member States, the UN Security Council Establishes and Renews Peacekeeping Operations

UN Peacekeeping Operations Are Mandated to Perform Tasks Such As Maintaining Ceasefires, Protecting Civilians, and Providing Electoral Assistance

Security Council resolutions establishing UN peacekeeping operations define mandates, or tasks, for each operation, and the peacekeeping operations perform a variety of activities to fulfill these tasks. In some cases, these activities are specifically mandated by a Security Council resolution; in others, the peacekeeping operation engages in an activity pursuant to a broad grant of authority to achieve a task. Each UN peacekeeping operation performs a unique set of tasks. The mandates of peacekeeping operations established prior to 1998 tend to include the monitoring of cease-fires as a mandated task, while those established after 1998 also include tasks such as the protection of civilians, facilitation of humanitarian assistance, and enforcement of economic sanctions or an arms embargo. Comparatively, operations in the African region have mandates that include the highest number of tasks. See appendix II for a list of the mandated tasks of all 14 peacekeeping operations. The UN has defined 16 categories into which these activities can be classified, including supervision or monitoring of ceasefire agreements, the protection and promotion of human rights, and protecting civilians. See table 3 for a list and description of these categories.

State’s Assessments Show that UN Peacekeeping Operations Generally Do Not Fully Meet U.S. Principles of Effective Peacekeeping and Face Challenges to Achieving Their Mandates

Based on our review of State’s most recent assessments and discussions with State officials, we found that despite some military and political successes of individual peacekeeping operations, UN peacekeeping operations generally do not fully meet the U.S.-stated principles of effective peacekeeping and face challenges to achieving their mandates. For the 11 peacekeeping operations with mandates that renew on a regular basis, State prepares strategy and priority memoranda for appropriate committees of the National Security Council to inform the mandate renewal process. We reviewed these memoranda and spoke with State officials about their assessments of these operations against four of the U.S. principles. Table 4 presents GAO’s categorization of the results of State’s assessments.

Supporting political solutions to conflict. Based on State’s assessment, we categorized 10 of the 11 peacekeeping operations as having met (five) or partially met (five) the principle of supporting political solutions to the conflict. For example, in Cyprus, State assessed that the United Nations Peacekeeping Force in Cyprus (UNFICYP) met this principle because its activities generally support a political solution, despite the country’s slow progress toward negotiating a final settlement of conflict between the Greek Cypriot and Turkish Cypriot communities. We categorized one peacekeeping operation, the United Nations Mission for the Referendum in Western Sahara, as not meeting this principle.

Host country consent. Based on State’s assessment, we categorized all 11 peacekeeping operations as having met (four) or partially met (seven) the principle of host country consent. For example, State officials assessed that the government of the Central African Republic cooperates fully with the UN Multidimensional Integrated Stabilization Mission in the Central African Republic (MINUSCA).
With respect to other peacekeeping operations, officials noted that a country’s consent to host an operation differs from cooperation with all aspects of a peacekeeping operation. For example, State reported that while the government of the Democratic Republic of the Congo has consented to the UN Organization Stabilization Mission in the Democratic Republic of the Congo’s (MONUSCO) presence in the country, the government has, at times, been hostile toward and actively taken steps to undermine the mission.

Realistic and achievable mandates. Based on State’s assessment, we categorized seven of the 11 peacekeeping operations as having met (two) or partially met (five) the principle of having realistic and achievable mandates. For example, we categorized the African Union-United Nations Hybrid Operation in Darfur (UNAMID) as having partially met this principle because State reports that it has been able to carry out many of its mandated tasks; however, according to State’s assessments, government obstructions, a slow peace process, and mission management inefficiencies prevent the full implementation of UNAMID’s mandate. We categorized the remaining four peacekeeping operations as not meeting this principle.

Exit strategies. Based on State’s assessment, we categorized five of the 11 peacekeeping operations as having met (two) or partially met (three) the principle of having an exit strategy in their mandates. For example, we categorized MINUSCA as having partially met the principle because, according to State’s assessment, the operation’s mandate has an exit strategy that will take several years to achieve given the lack of host government capacity. We categorized the remaining six peacekeeping operations as not meeting this principle.
For example, based on State’s assessment, we categorized the UN Mission in the Republic of South Sudan (UNMISS) as not meeting this principle because the operation had not considered a near-term exit strategy because of ongoing conflict and the political stalemate in South Sudan.

In addition to the four principles in the table, the fifth principle for effective peacekeeping concerns the Security Council’s willingness to change the mandate. In its memoranda, State assessed the fifth principle by examining whether the mandate was achieving its objective and, if not, should be adjusted. Using this method, State assessments show that the Security Council should adjust the mandates of nine of the 11 peacekeeping operations. For example, State assessed that the UNFICYP (Cyprus) mandate should be adapted to address the stalled political process. Although we found that State’s assessments show most peacekeeping operations are not fully meeting the U.S.-stated principles for effective peacekeeping, State officials we interviewed noted the important role UN peacekeeping operations play in maintaining stability in volatile conflicts around the world. These officials noted the dangerous and hostile environments in which peacekeeping operations are located, and, in some cases, human atrocities these operations help prevent. Further, U.S. and UN officials cited UN peacekeeping operations’ strengths, including international and local acceptance, access to global expertise, and the ability to leverage assistance from multilateral donors and development banks. Officials also cited strengths of individual operations, such as the protection of civilians against atrocities in South Sudan, the Democratic Republic of the Congo, and the Central African Republic, assistance toward the peaceful conduct of elections in numerous countries, police capacity building in Haiti, and support to peace processes and agreements in numerous countries.
According to State/IO and USUN officials, continual evaluation and adjustment of the mandates of UN peacekeeping missions to better align with the U.S. principles remains a key tenet of the Administration’s UN peacekeeping policy, but the U.S. government faces two key challenges in this regard. First, some aspects of two of the five principles—host country consent and support for a political process—may be outside of the control of any international organization or bilateral partner. For example, MONUSCO’s (Democratic Republic of the Congo) mandate includes the provision of elections assistance in support of the nation’s political process, but, according to State officials, the lack of host government cooperation has relegated MONUSCO’s efforts in this area to technical assistance. Second, these officials explained that the Security Council does not always adopt U.S. proposals to change mandates to align with these principles, such as including an exit strategy. Changing peacekeeping mandates requires nine affirmative votes and no vetoes from permanent Council members, which, according to State and USUN officials, can be difficult. For example, USUN officials stated that the UN Interim Administration Mission in Kosovo (UNMIK) had fulfilled its mandate, but Russia and China were not supporting a vote to close the operation. Moreover, State officials noted that the assessment process using the principles began in 2017 and the United States has had a limited number of opportunities to negotiate changes to peacekeeping mandates because renewals generally occur annually. State officials cited several examples of notable progress, however, in improving the efficiency and focus of UN peacekeeping operations. According to State officials, through U.S. leadership, the Security Council reconfigured the operation in Haiti to focus on police and the rule of law. 
Additionally, the Security Council changed and downsized the operation in Darfur to reflect current political and security realities. State officials also said that the UN Security Council supported responsible drawdowns of peacekeeping operations, most recently in Cote d’Ivoire, while pushing peacekeepers in Lebanon to use all of their mandated authorities to be more effective in carrying out their tasks. According to State officials, adherence to these principles is not sufficient to guarantee success. An operation could fully meet all the principles, but still face challenges carrying out its mandate because of formidable circumstances, such as insecure environments or limited government cooperation. However, State officials also noted that these principles describe critical conditions for effective peacekeeping in that an operation that does not meet these principles is unlikely to be able to fully carry out its mandate. Moreover, given the importance of establishing the necessary conditions for peacekeeping success, State/IO and USUN officials acknowledged that State must continue to work with the Security Council to ensure that peacekeeping operations meet the principles of effectiveness, such as modifying mandates to include exit strategies. In doing so, the UN and its member states could have greater assurance that they have set up peacekeeping operations for success.

The United States Has Worked with the UN to Adjust Peacekeeping Mandates, but Does Not Have Sufficient Information to Determine if UN Resource Decisions Accurately Reflect These Adjustments

The United States Has Worked with Security Council Members to Adjust Peacekeeping Mandates

When the U.S. agencies involved in peacekeeping agree that the UN should change a peacekeeping operation’s mandate, USUN officials told us that the USUN works with other Security Council members to make adjustments, such as adding or removing tasks from an operation’s mandate.
While not all proposals are adopted by the Security Council, State officials highlighted several types of mandate adjustments the United States has pursued, including:

Removal of tasks. State and USUN officials told us they strive to remove tasks from peacekeeping mandates when those tasks have been achieved or are no longer relevant or achievable. For example, officials noted that the USUN successfully advocated that election monitoring be removed from the list of mandated tasks for MINUSCA because the elections had taken place in the previous year and, therefore, the task was no longer relevant.

Addition of language to prioritize tasks. State and USUN officials told us that another strategy is to add language to a mandate to designate priority tasks. Officials stated that, as a result of such language in mandates for MINUSCA, MONUSCO, and MINUSMA, management at these peacekeeping operations had shifted mission resources to focus on priority tasks. For example, officials cited MINUSCA’s proposed budget, which increased resources for protection of civilians—a task designated as a priority by the Security Council—and reduced resources for Security Sector Reform, an area of less relevance to the mission given the current situation in the Central African Republic.

Addition of language to clarify exit strategies. State and USUN officials noted that adding language to clarify exit strategies aids an operation’s success. For example, for the MINUJUSTH (Haiti) 2017 mandate, USUN officials noted that the United States had advocated successfully for the Security Council to include language calling for an exit strategy with benchmarks to assist the UN in monitoring the progress of the operation’s transition to a non-peacekeeping mission beginning in October 2019.
USUN Does Not Have Sufficient Information from the UN on the Cost of Peacekeeping Operations to Determine Accurate Resource Allocation When Adjusting Mandates

USUN officials told us that they do not have sufficient information to allow them to determine accurate resource allocation to peacekeeping operations when the Security Council makes a change to the mandate. For example, USUN officials told us that as a result of the Security Council’s decision to reduce resources for specific tasks in MONUSCO’s 2017 mandate—such as Security Sector Reform and Disarmament, Demobilization, and Reintegration activities, where little progress had been achieved—the United States had sought to reduce the MONUSCO budget to reflect this change. However, the USUN did not have complete information from the UN on all of the costs associated with this change, including support costs, such as flight hours and fuel for transport vehicles. In the absence of such information from the UN, USUN officials estimated these costs and advocated for a reduction in MONUSCO’s budget based on their own estimates. USUN officials noted that without input from the UN, they did not have sufficient information to determine the accuracy of their estimates. USUN officials told us that these information gaps exist because UN peacekeeping budgets do not include estimated costs by task. Rather, UN peacekeeping budgets provide information on the operation’s use of financial resources for personnel and operational costs. Thus, according to USUN officials, when the Security Council changes a peacekeeping operation’s mandate—such as by adding or removing a task—it is not clear how to adjust the budget for that operation to accurately reflect the change. UN headquarters officials told us that the UN does not prepare peacekeeping budgets with estimated costs by task because it is challenging to do so.
However, senior officials with whom we spoke at two peacekeeping operations said that, despite challenges, it is possible to estimate costs by mandated task, which would provide additional budget transparency for the UN. Further, USUN officials stated that having UN estimates readily available to all member states would not only improve the accuracy of decisions related to resource allocation, but also improve the transparency of the budget negotiation process. UN guidance on peacekeeping states that when the UN changes an existing peacekeeping mandate it should make commensurate changes in the resources available to the operation. Further, internationally-accepted and federal standards for internal control note that organizations should use quality information to make informed decisions to achieve their objectives. Without information on estimated costs by task, USUN and other UN member states have difficulty determining that resources for UN peacekeeping operations accurately reflect changes to the mandates of peacekeeping operations. With this information, the United States and the international community can better ensure that resources provided to peacekeeping operations support the tasks agreed upon by UN member states.

Member States Have Expressed Concerns Regarding the Quality of Peacekeeping Performance Information, Despite UN Reform Efforts in This Area

Member States Have Expressed Concerns about the Completeness and Timeliness of UN Peacekeeping Performance Data

UN member states, including the United States, have expressed concerns about the quality of information regarding UN peacekeeping operations. Specifically, according to member states, information on peacekeeping performance can be incomplete and is not always provided on a timely basis, despite ongoing UN efforts to improve performance information.
UN Security Council resolutions and peacekeeping guidance documents have stated the importance of having access to quality performance information to make management decisions. For example, UN Security Council resolutions note that data—based on clear and well-defined benchmarks—should be used to improve the performance of peacekeeping operations. The UN’s Special Committee on Peacekeeping Operations has also called for a timely flow of information regarding how well peacekeeping operations perform their mandated activities. Additionally, internationally-accepted and federal standards for internal control also highlight the importance of quality information in enhancing the ability of organizations to achieve their performance goals. Quality information includes information that is complete and provided on a timely basis, among other attributes.

Completeness

UN member states have expressed concerns regarding the completeness of peacekeeping performance information. For example, USUN officials have noted concerns related to the completeness of performance information about peacekeeping troops. USUN officials noted that while the UN maintains some performance information on peacekeeping operations, such as a database with information on troop capabilities and readiness to deploy, it does not provide a complete picture of peacekeeping performance. Specifically, USUN officials noted that they would like better performance information about when peacekeeping units are engaging well, failing to engage, or lack the training to perform the tasks they have been asked to carry out. Also, the Security Council noted concern in a September 2018 resolution sponsored by the United States about the underperformance of some peacekeepers, such as inaction in the face of imminent threats of physical violence against civilians and conduct issues. Another concern relates to the completeness of performance information about civilian peacekeeping staff.
According to the UN, civilian peacekeeping staff, who comprise about 14 percent of all peacekeeping personnel, perform many of the mandated activities of peacekeeping operations, including promoting and protecting human rights, helping strengthen the rule of law, and fostering the political process. However, according to USUN officials, the UN needs more complete information on the performance of these staff. For example, as noted above, UN officials told us that the UN had developed a database to collect performance information on military personnel staffed to UN peacekeeping operations, but did not have a similar way to track information on civilian personnel. Additionally, the Security Council noted in a September 2018 resolution that the UN must improve evaluation of all UN personnel supporting peacekeeping operations, including civilians. Individual member states have concurred, with some stating that better performance information is needed in all sectors of UN peacekeeping and others noting the need for comprehensive information on all peacekeeping personnel, including civilian personnel. The Security Council has also noted concerns about underreporting of information, which can affect data completeness. For example, in a September 2018 resolution, the Security Council expressed concern regarding the underreporting of sexual exploitation and abuse by some UN peacekeepers and non-UN forces authorized under a Security Council mandate, including military, civilian, and police personnel. The UN has reported that instances of sexual exploitation and abuse by peacekeepers undermine the credibility of peacekeeping operations by breaking down the trust between an operation and the communities it serves.

Timeliness

UN member states have also expressed concerns regarding the timeliness of UN performance information on peacekeeping.
For example, USUN officials cited instances of conduct violations by UN troops in the Central African Republic and the Democratic Republic of the Congo about which the Security Council had not been informed for several months. Ultimately, the Security Council learned of these incidents from media reporting and had to seek additional information from the UN Secretariat. Additionally, the Security Council has expressed concern regarding the timely reporting of performance information on police personnel assisting peacekeeping operations. For instance, in Resolution 2382 adopted in November 2017, the Security Council emphasized the need to improve accountability and effectiveness in the performance of peacekeeping operations, requesting that the UN Secretariat provide member states timely and complete information regarding the training needs of police personnel. Further, the UN’s Special Committee on Peacekeeping Operations has also called for a timely flow of information on a range of peacekeeping performance issues, such as reports and evaluations of peacekeeping operations, incidents involving the safety and security of peacekeepers, and troop misconduct, such as sexual exploitation and abuse. For example, in its March 2018 report, the committee stressed the need for timely information sharing about serious incidents involving the safety and security of peacekeepers, noting that prompt reporting of such incidents contributes to their prevention and positive resolution.

UN Is in Early Stages of Reform Efforts to Improve Performance Information and the Extent to Which the Efforts Will Address Member States’ Concerns Is Unclear

USUN officials told us that they have concerns about the quality of peacekeeping performance data because the UN does not have comprehensive performance information about its peacekeeping operations and officials are unsure whether new UN reforms in this area will address their concerns.
USUN officials described various UN sources of performance information on peacekeeping operations, such as strategic reviews conducted by the Secretary-General on the performance of peacekeeping operations and a UN database containing information on peacekeeping troops’ readiness to deploy. However, officials noted that this information is insufficient to help them assess the overall performance of UN peacekeeping operations. For instance, USUN officials noted that the information collected is not standardized across UN peacekeeping operations or for all peacekeeping personnel. Without better information, USUN officials said that they had challenges obtaining a clear picture of the performance of UN peacekeeping operations. According to USUN officials, a culture of performance in peacekeeping is important to better deliver on peacekeeping mandates and improve the safety and security of peacekeepers in the field. Acknowledging challenges related to peacekeeping, the UN Secretary-General announced a peacekeeping reform initiative known as Action for Peacekeeping in March 2018. As part of this effort, the Secretary-General invited member states to help develop a set of mutually agreed principles and commitments to improve peacekeeping operations. The Secretary-General announced these shared commitments in August 2018 and, as of September 2018, 151 member states and several regional organizations had made political commitments to implement them. The declaration of shared commitments includes a commitment to ensure the highest level of peacekeeping performance and to hold all peacekeeping personnel accountable for effective performance by, among other things, ensuring that performance data are used to inform planning, evaluation, deployment decisions and reporting. However, USUN officials told us in October 2018 that their concerns about the quality of UN peacekeeping performance data still remained because the UN is in the early stages of adopting these reforms.
Further, USUN officials stated that they have yet to see concrete plans of action and as such, it is not clear to them that the reforms will address their concerns to ensure that the UN provides complete and timely peacekeeping performance information to its member states. For instance, officials stated that in September 2018—6 months after the Action for Peacekeeping agreements to improve the use of performance data to manage peacekeeping operations—the Security Council adopted Resolution 2436, which noted continued concerns related to completeness and timeliness of peacekeeping performance information provided to the Council. Without fully addressing member states’ concerns about the quality of information on the performance of peacekeeping operations, the Security Council is limited in its ability to identify problems and take corrective action to improve the performance of peacekeeping operations. More complete and timely performance information could enhance the Security Council’s ability to effectively manage peacekeeping operations.

Conclusions

Peacekeeping operations are a key instrument for implementing the UN’s central mission of maintaining international peace and security. As a member state of the UN, a permanent member of the Security Council, and the largest financial contributor to the UN peacekeeping budget, the United States plays a significant role in both the management of peacekeeping operations and encouraging reforms to improve peacekeeping activities. According to State, the U.S.-stated principles for effective peacekeeping are critical conditions for peacekeeping operations to carry out their mandates. Given the importance of establishing the necessary conditions for peacekeeping success, State/IO and USUN officials acknowledged the imperative of continuing to work with the Security Council to ensure that peacekeeping operations meet U.S.-stated principles of effectiveness.
In doing so, the UN and its member states could have greater assurance that they have set up peacekeeping operations for success. Without information on estimated costs by task, USUN and other UN member states have difficulty determining whether resources for UN peacekeeping operations accurately reflect changes to the mandates of peacekeeping operations. With this information, the United States and the international community can better ensure that resources provided to peacekeeping operations support the tasks agreed upon by UN member states. Additionally, while the UN has initiated reform efforts to strengthen peacekeeping, including better use of performance information, UN member states have continued to express concerns about the quality of this information and note that it is too soon to tell whether the reforms will address their concerns. Without fully addressing member states’ concerns about the quality of information on the performance of peacekeeping operations, the Security Council is limited in its ability to identify problems and take corrective action to improve the performance of peacekeeping operations.

Recommendations for Executive Action

We are making the following three recommendations to State: The Secretary of State should continue to work with the Permanent Representative to the United Nations to ensure that UN peacekeeping operations fully meet principles of effective peacekeeping. (Recommendation 1) The Secretary of State should work with the Permanent Representative to the United Nations to ensure that the United Nations provides information to member states on the estimated costs of mandated peacekeeping tasks to provide better cost information when the Security Council adjusts peacekeeping mandates.
(Recommendation 2) The Secretary of State should continue to work with the Permanent Representative to the United Nations to ensure that the United Nations takes additional steps to address member states’ concerns about complete and timely information on the performance of United Nations peacekeeping operations. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to the Departments of Defense and State for review and comment. The Department of Defense told us that it had no comments on the draft report. In its comments, reproduced in appendix V, State concurred with our recommendations. State also provided technical comments, which we incorporated as appropriate throughout the report. We are sending copies of this report to congressional committees; the Acting Secretary of the Department of Defense; and the Secretary of the Department of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

In this report, we examine (1) the United Nations’ (UN) process to establish and renew peacekeeping operations, including the tasks these operations perform; (2) the Department of State’s (State) assessment of the effectiveness of UN peacekeeping operations; (3) how the United States works within the UN to adjust peacekeeping mandates and associated resources; and (4) UN member states’ concerns regarding the UN’s performance information.
To examine the UN’s process to establish and renew peacekeeping operations and the tasks these operations perform, we reviewed UN policy and guidance, as well as various UN websites accessed as of November 2018, and interviewed State and UN officials to discuss UN processes. To determine the tasks these operations perform, we analyzed the most recent UN resolution authorizing the peacekeeping operation passed by the Security Council as of December 31, 2018—generally referred to as a mandate—for each of the UN’s 14 peacekeeping operations, and categorized the tasks of each operation. We describe UN categories of activities to achieve mandated tasks as listed in the Department of Peacekeeping Operations-Department of Field Support’s Core Pre-deployment Training Materials for United Nations Peacekeeping Operations, which lists and defines 16 categories. We also analyzed the most recent mandate as of December 31, 2018, for the 14 UN peacekeeping operations to identify the date on which the authority for each operation expires and the period of time reported until the next renewal decision. To examine State’s assessment of the effectiveness of UN peacekeeping operations, we analyzed State’s Bureau of International Organization Affairs’ (State/IO) most recent Mission Monitoring and Evaluation reports as of December 2018 and the accompanying U.S. strategy and priorities memoranda outlining U.S. priorities for the operations’ mandate renewal. State bases its Mission Monitoring and Evaluation reports on annual field visits to peacekeeping operations, during which assessors interview U.S. and UN officials to evaluate the operation’s progress toward meeting its mandate and identify factors that affect the operation’s ability to do so. Based on these reports, State’s strategy and priorities memoranda summarize U.S. observations on the peacekeeping operation and, among other things, propose options for U.S. action within the Security Council.
Each of the 11 memoranda we reviewed also includes State’s assessment of the peacekeeping operation against the U.S. government’s stated principles of effective peacekeeping, which State considers to be critical conditions for an operation to successfully implement its mandate. These principles are whether a peacekeeping operation (1) supports a political solution to conflict, (2) has host country consent, (3) has a realistic and achievable mandate, and (4) has an exit strategy; and (5) whether the Security Council is willing to adjust the mandate if the situation in the country improves or fails to improve. We reviewed State’s memoranda on the operations and considered the following types of factors when determining whether to categorize State’s assessments as met, partially met, or not met:

Supporting political solutions: Mediation processes, peace agreements, and support for democratic elections.

Host country consent: Consent to the operation, and the necessary freedom of action, both political and physical, to carry out its mandated tasks.

Realistic and achievable mandates: Extent to which operation tasks appeared feasible in light of current conditions and available resources.

Exit strategies: Strategic goals and targets, strategic planning, and timetables for withdrawal.

We categorized a principle as “met” if State indicated that the operation was generally succeeding in an area. We categorized a principle as “not met” if State indicated that the operation was generally not succeeding in an area. We categorized a principle as “partially met” if State indicated that the operation had some areas of success, but was generally not succeeding or was restricted from success in some way. The fifth principle for effective peacekeeping is stated as the Security Council’s willingness to change the mandate. However, in its memoranda, State/IO assesses whether and how a mandate should be changed, rather than assessing the Security Council’s willingness to change the mandate.
For this principle, we categorized State’s results as either “yes” or “no.” We coded the results as “yes” if State assessed that the Security Council should adjust the mandate. We categorized the results as “no” if State assessed that the Security Council did not need to adjust the mandate. The coding was conducted by one GAO analyst and separately verified by two other GAO analysts. In December 2018, we met with State/IO and USUN officials to discuss their current assessment of each peacekeeping operation. We updated our categorization of State’s written assessments to reflect the agency’s most current assessment as appropriate. We discussed our methodology and results with officials from the U.S. Mission to the UN (USUN), who confirmed that our methodology and results were valid. We also discussed with these officials additional steps the United States could take to ensure that peacekeeping operations fully meet the principles for effective peacekeeping. We did not independently verify State’s assessment, but we reviewed State’s methodology and discussed it with officials and found the information in State’s reporting to be sufficiently reliable for the purposes of this report. To examine how the United States works within the UN to adjust peacekeeping mandates and associated resources, we interviewed USUN officials to understand the different approaches the Security Council takes to revise mandates and to understand the types of information available to UN member states to determine appropriate resource adjustments when mandates change. We also interviewed a senior official from the UN Department of Field Support’s Field Budget and Finance Division and reviewed UN budget and performance reports to identify how the UN reports on peacekeeping budget information to member states. 
In addition, we interviewed officials at two of the four peacekeeping operations we selected for in-depth case studies, as discussed below, to determine whether they were able to report on the operation’s budget by mandated task. To determine the extent to which State has sufficient information to advocate for resources adjustments when mandates change, we compared information currently provided by the UN to internationally-accepted and federal standards for internal control, which state that organizations should have quality information to help them make decisions. To examine UN member states’ concerns regarding the UN’s performance information, we interviewed officials from the USUN to understand their concerns regarding performance information available to them from the UN. Based on these interviews, we identified two main issues of completeness and timeliness. To understand the extent to which UN member states share these concerns, we analyzed the UN Special Committee on Peacekeeping’s 2016, 2017, and 2018 annual reports and Security Council resolutions to confirm member states’ concerns related to completeness and timeliness of performance information. We did not independently verify the veracity of these concerns, because we did not have access to the UN’s internal performance information. We also reviewed UN documents on the Secretary-General’s new reform efforts, transcripts of meetings the Security Council held on peacekeeping in 2018, and Security Council resolutions to identify steps the UN is taking to address these concerns. Further, we analyzed the extent to which the UN could better address member state concerns regarding performance information by comparing the Secretary-General’s plans for implementing the UN’s new reform efforts with internationally-accepted and federal standards for internal controls, which identify necessary elements of performance information. 
To inform our analyses of all four objectives, we also selected UN peacekeeping operations in four countries—the Democratic Republic of the Congo, Haiti, Kosovo, and Lebanon—for in-depth case studies. We selected these peacekeeping operations because they are the largest of the three types the UN employs, and are located in the four geographic regions in which UN peacekeeping operations are currently deployed—Africa, Europe, the Middle East, and the Western Hemisphere. While the findings from these peacekeeping operations cannot be generalized, they provide an illustrative mix of the UN’s peacekeeping activities. To inform our audit, we conducted a literature review using searches of the ProQuest database, focusing on literature published between 2015 and 2018. In total, we identified and reviewed 12 relevant publications that helped inform our study of the four operations. We conducted fieldwork at peacekeeping operations in Haiti, Kosovo, and Lebanon, and interviewed U.S., UN, and host government officials, as well as representatives of other donor countries and civil society. In lieu of fieldwork, we conducted videoconferences with senior officials at the peacekeeping operation in the Democratic Republic of the Congo. We conducted this performance audit from October 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Mandated Tasks of United Nations Peacekeeping Operations

We analyzed United Nations (UN) Security Council resolutions authorizing the 14 UN peacekeeping operations, in effect as of December 31, 2018, and identified the mandated tasks of these operations.
See table 5 below for a complete list.

Appendix III: Department of State’s Assessment of Challenges United Nations Peacekeeping Operations Face

To inform its oversight of United Nations (UN) peacekeeping operations, the Department of State’s Bureau of International Organization Affairs (State/IO) conducts annual monitoring trips to most UN peacekeeping operations. State/IO evaluates peacekeeping operations’ progress toward meeting their mandates and identifies any challenges to their progress. State/IO documents its findings in Mission Monitoring and Evaluation reports and disseminates these reports for comment to various State bureaus involved in international peacekeeping efforts and to relevant offices in the Department of Defense. The findings of these assessments are intended to inform the National Security Council and the U.S. Mission to the United Nations in their decision-making. We analyzed the most recent Mission Monitoring and Evaluation reports that State had conducted through June 30, 2018. In our analysis of State’s assessments, we found that the challenges State most frequently identified for each UN peacekeeping operation were those associated with host government cooperation, resources, and the security situation.

Host Government Cooperation

According to the UN, the UN does not deploy a peacekeeping operation unless the organization has the consent of the involved parties, which often include the governments of the countries in which conflicts occur. While host governments generally have consented to the presence of UN peacekeeping operations, State found instances in which the host government did not cooperate fully or did not have a positive relationship with the peacekeeping operation working in-country. For example, in Darfur, State found that while the Sudanese government had demonstrated some progress, it continued to restrict the African Union-United Nations Hybrid Operation in Darfur’s (UNAMID) access and movement in certain regions.
Additionally, according to government officials in Kosovo, the government of Kosovo does not engage with the UN Interim Administration Mission in Kosovo (UNMIK) because it considered the operation to have completed its mandate as a transitional authority once Kosovo declared its independence and established a functioning government. As a result, UNMIK works on community trust-building activities with local communities according to the vision and strategic direction of the head of the peacekeeping operation.

Resources

State found that several operations faced financial, human, and material resource constraints. For example, State assessed that the peacekeeping operations in Mali; the Democratic Republic of the Congo; the Golan Heights, Syria; and Haiti did not have enough funds to meet their needs. State also found that the peacekeeping operations in the Central African Republic; the Democratic Republic of the Congo; and the Golan Heights, Syria did not have enough troops with sufficient skill sets. Further, State found that the operations in the Democratic Republic of the Congo; Haiti; Mali; and Abyei, Sudan lacked adequate equipment. Officials from the UN Interim Force in Lebanon (UNIFIL) peacekeeping operation also told us they anticipated a budget shortfall of over $2 million for the 2018-2019 peacekeeping fiscal year as a result of a reduced budget and an increase in UN troop salaries. However, officials at the UN Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO) told us about ways in which they were maximizing and readjusting existing resources in spite of these challenges. They stated that MONUSCO’s March 2018 mandate renewal was intended to streamline the operation and was informed by the UN’s most recent strategic review of the operation.
Senior MONUSCO officials also told us that, as a result of the review, the Security Council had reduced the operation’s work in the justice reform sector by 50 percent because it believed the operation would be able to engage more meaningfully in this arena after the presidential election.

Security Situation

State identified several peacekeeping operations that worked in environments in which there were ongoing ceasefire violations or unstable security situations. State found that peacekeeping operations in the Democratic Republic of the Congo; the Golan Heights, Syria; Western Sahara; Cyprus; and Lebanon faced ongoing ceasefire violations. State also found that the peacekeeping operations in Mali and the Central African Republic worked in dangerous conditions and the operations in Mali and the Democratic Republic of the Congo faced persistent attacks on civilians. During our fieldwork in Lebanon, UNIFIL officials emphasized the importance and successes of the UNIFIL-facilitated tripartite mechanism, which provides regular opportunities for soldiers from the Lebanese Armed Forces and the Israeli Defense Force to meet and helps prevent incidents from escalating into major events. According to U.S. embassy officials, because of the prevalence of armed groups in eastern Congo, the government’s and international community’s response to the Ebola outbreak that started there in August 2018 was significantly more complex and challenging than their response to the May 2018–July 2018 outbreak in northwestern Congo, an area that does not have a significant presence of armed groups.

Appendix IV: Synopsis of Four United Nations Peacekeeping Operations and Key Challenges They Face

We selected United Nations (UN) peacekeeping operations in four countries—the Democratic Republic of the Congo, Haiti, Kosovo, and Lebanon—for case studies. Below is a synopsis of each of these peacekeeping operations and the key challenges they face, according to U.S. and UN officials.
Key Facts About DRC

Population: Approximately 83.3 million people live in DRC. About 60 percent of the population is under the age of 25, and about 40 percent is under the age of 15. There are over 200 ethnic groups; the majority is Bantu.

Map of the Democratic Republic of the Congo (DRC)

Government: DRC is a semi-presidential republic. The last presidential election was held on December 30, 2018.

Economy: DRC’s estimated gross domestic product for 2017 was $40.4 billion. Conflict and corruption have contributed to the poor economic performance of DRC, despite its vast natural resource wealth.

Timeline of Key Events

1960: The Republic of the Congo is granted independence from Belgium.

1960-1964: The UN deploys the United Nations Operation in the Congo (ONUC) to ensure the withdrawal of Belgian forces from the Republic of the Congo, among other things.

1998: “Africa’s World War” begins, with seven countries fighting in DRC.

Current Status and Challenges

According to U.S. and United Nations (UN) officials, MONUSCO’s most important mandated tasks are the protection of civilians and support to the government of DRC’s elections. According to the Secretary-General, intercommunal violence and attacks by armed groups continue to persist in eastern and southern DRC and have led to the displacement of thousands of people. Held after several delays, the December 30, 2018 national and provincial elections are expected to result in the first democratic transition of power in the nation’s history. Despite varied disputes over preliminary results and reports of sporadic violence, the UN reports that the elections were relatively peaceful. However, according to the UN, pending the announcement of the final results by the DRC Constitutional Court, the coming days will be critical.

1999: The Lusaka Ceasefire is signed, ending the war.
The UN establishes a peacekeeping operation in DRC—the United Nations Organization Mission in the Democratic Republic of the Congo (MONUC).

July 2010: The UN renames MONUC as MONUSCO and updates the peacekeeping operation’s mandate.

According to U.S. and UN officials, the biggest challenges MONUSCO faces in carrying out its mandated tasks are the vast size of DRC and the fact that the government of DRC will accept only limited help from MONUSCO in carrying out its elections. According to UN officials, MONUSCO is having some success in addressing instability in eastern DRC. For example, MONUSCO said it receives 300 to 400 calls per month alerting it to attacks and that either MONUSCO or DRC forces respond to 90 percent of these calls. In addition, UN officials told us that the Security Council provided MONUSCO with a budget to use for logistical support for elections assistance, so MONUSCO can readily help the DRC government if and when it asks for assistance.

Key Facts about Haiti

Population: Approximately 10.6 million people live in Haiti. More than 50 percent of the population is under the age of 24.

Government: Haiti is a semi-presidential republic.

Economy: Haiti’s estimated gross domestic product for 2017 was $8.36 billion. Haiti continues to rely on international economic assistance for fiscal sustainability, with over 20 percent of its budget coming from foreign aid. In 2010, Haiti’s unemployment rate was estimated to be 40.6 percent, and in 2012, 58.5 percent of its population was estimated to be living below the poverty line.

Timeline of Key Events

1993: Following a military coup, the UN establishes the first of a series of three peacekeeping operations. The last of these operations leaves in 2000.

2004: The UN establishes the United Nations Stabilization Mission in Haiti (MINUSTAH) to help restore and maintain order after the collapse of the government.
2017: The UN establishes MINUJUSTH as a successor to MINUSTAH, composed of police and civilian personnel and focused on institutional strengthening and development.

Current Status and Challenges

The United Nations (UN) established MINUJUSTH in 2017 to assist the government of Haiti in strengthening rule-of-law institutions, further support and develop the Haitian National Police, and engage in human rights monitoring, reporting, and analysis. In the resolution establishing MINUJUSTH, the Security Council called on the Secretary-General to develop a 2-year exit strategy with clear benchmarks. The Secretary-General regularly reports on MINUJUSTH’s progress toward reaching its benchmarks. The Security Council resolution extending the MINUJUSTH mandate to April 2019 calls on the Secretary-General to conduct a strategic assessment of the operation by early 2019 and present recommendations on the UN’s future role in Haiti. To facilitate the transition, the UN has created a joint UN Development Program and MINUJUSTH rule-of-law program to continue its work in this area after the peacekeeping operation ends. According to U.S. and UN officials, Haiti continues to struggle with weak institutions and high levels of government corruption. Moreover, according to MINUJUSTH officials, the process of transitioning from the previous peacekeeping operation in Haiti to MINUJUSTH was challenging because of the level of effort involved in liquidating assets, among other things. These officials told us that similar issues will make the MINUJUSTH transition to a non-peacekeeping UN presence equally challenging.

Key Facts about Kosovo

Population: Approximately 1.9 million people live in Kosovo. About 42 percent of the population is under the age of 25. The primary ethnic group is the Albanian Kosovars, making up approximately 93 percent of the population. Other ethnic minorities include Serbs and Bosnians.

Government: Kosovo is a parliamentary republic.
Economy: Kosovo’s gross domestic product in 2017 was an estimated $19.6 billion. Kosovo’s economy has achieved some stability, but it is still highly dependent on the international community for financial and technical assistance. Kosovo’s unemployment rate is 33 percent, with a youth (under 26) unemployment rate near 60 percent.

Timeline of Key Events

1991: Kosovo’s Albanians declare independence from Serbia.

1998: Multi-year conflict results in large numbers of casualties, refugees, and displaced persons.

Current Status and Challenges

The Security Council established UNMIK to provide an interim administration for Kosovo, under which UNMIK had authority over the territory and people of Kosovo, including all legislative and executive powers and administration of the judiciary. Following the declaration of independence by the Kosovo Assembly in June 2008, the tasks of the operation have changed to focus primarily on the promotion of security, stability, and respect for human rights in Kosovo, as well as reducing tensions between Serbia and Kosovo.

1999: A 3-month NATO military operation against Serbia results in the Serbs withdrawing their military and police forces from Kosovo.

1999: UN Security Council Resolution 1244 (1999) places Kosovo under a transitional administration pending a determination of Kosovo’s future status.

According to U.S. and United Nations (UN) officials, the greatest challenge UNMIK faces in carrying out its mandate is that the Kosovo government will not engage directly with UNMIK. According to U.S., UN, and Kosovo government officials, the Kosovar government will not engage with UNMIK because it views UNMIK’s mandate as obsolete, given Kosovo’s independence. U.S. officials believe that UNMIK has achieved its mandate and should be closed. However, these officials also noted that Russia, as a permanent member of the Security Council with a veto, prevents the affirmative decision necessary to close UNMIK.
2008: The Kosovo Assembly declares Kosovo’s independence.

U.S. and UN officials told us that UNMIK has found ways to indirectly assist the Kosovo government, such as by providing funding for government efforts in Kosovo through other UN agencies with which the Kosovo government will engage. For instance, one UN official told us that UNMIK had provided a ground-penetrating radar to the Office of the United Nations High Commissioner for Human Rights to assist in efforts to locate missing persons, which will help clarify the fate and whereabouts of people unaccounted for after the conflict with Serbia.

Key Facts about Lebanon

Population: Approximately 6.2 million people live in Lebanon. The country is about 27 percent Sunni, 27 percent Shia, and 41 percent Christian. Officially, there are almost 1 million Syrian refugees in Lebanon.

Government: Lebanon is a parliamentary republic, with a unicameral legislature that elects the president. Currently, 35 of 128 legislative seats are held by the Shia Amal-Hezbollah coalition. Lebanon’s borders with Syria and Israel remain unresolved.

Economy: Lebanon’s estimated gross domestic product for 2017 was $52.7 billion, with a real growth rate of 1.5 percent. The growth rate is down from about 7 percent in 2010.

Timeline of Key Events

1975-1990: Sectarian violence leads to the Lebanese civil war.

1978: Israel sends troops into Lebanon.

March 1978: UNIFIL is established to supervise the withdrawal of Israeli forces from southern Lebanon. Israeli forces withdraw in 2000.

Current Status and Challenges

UNIFIL was created by the Security Council in March 1978 to supervise the Israeli withdrawal from Lebanon, restore international peace and security, and assist the government of Lebanon in restoring its authority.
In late 2006, following renewed conflict between Israel and Lebanon, the Security Council enhanced UNIFIL’s forces and added additional tasks to its mandate, including monitoring the cessation of hostilities and extending UNIFIL’s assistance to help ensure humanitarian access to civilian populations and the voluntary and safe return of displaced persons. The United Nations (UN) reported in March 2018 that the situation in UNIFIL’s area of operations has remained generally calm, but there has been no progress toward implementing a permanent ceasefire.

Early 1980s: Israeli forces in southern Lebanon start facing opposition from a militant group that would become Hezbollah, backed by Iran.

July-August 2006: Hezbollah captures two Israeli soldiers, sparking a 34-day war with Israel. UN Security Council Resolution 1701 calls for a cease-fire between the two sides and supplements UNIFIL’s mandate.

According to U.S. and UN officials, one challenge UNIFIL faces in carrying out its mandate is that Israel and Lebanon have not agreed on a peaceful solution to their conflict. Officials noted that there is no articulated exit strategy for the operation and that the Lebanese Armed Forces lack the capacity to secure the southern border with Israel—a necessary condition for the successful exit of UNIFIL. However, U.S. and UN officials agreed that UNIFIL plays a vital role by deterring further hostilities in southern Lebanon and providing a neutral forum for meetings between Israel and Lebanon.

Appendix V: Comments from the Department of State

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Elizabeth Repko (Assistant Director), Shirley Min (Analyst in Charge), Julia Jebo Grant, Sarah Amer, Molly Miller, Debbie Chung, Martin de Alteriis, Neil Doherty, Mark Dowling, Michael Rohrback, and Brandon Hunt made contributions to this report.
Why GAO Did This Study

As of December 2018, the UN had 14 ongoing peacekeeping operations with approximately 103,000 personnel. The United States is the single largest financial contributor to these operations, assessed by the UN to contribute an estimated $1.7 billion in fiscal year 2018, according to State. It is also a member of the Security Council, the UN body tasked with maintaining international peace and security. GAO was asked to review UN peacekeeping operations. In this report, GAO examines (1) the UN's process to establish and renew peacekeeping operations, including the tasks these operations perform; (2) State's assessment of the effectiveness of UN peacekeeping operations; (3) how the United States works within the UN to adjust peacekeeping mandates and associated resources; and (4) member states' concerns regarding the UN's performance information. To address these objectives, GAO analyzed UN and U.S. documents and interviewed UN and U.S. officials. GAO also interviewed officials at peacekeeping operations in the Democratic Republic of the Congo, Haiti, Kosovo, and Lebanon. GAO selected these operations because they represent those that perform a variety of tasks and are located in diverse regions.

What GAO Found

The United Nations (UN) Security Council establishes and renews peacekeeping operations by issuing resolutions, generally referred to as mandates, which can include a range of tasks, such as monitoring ceasefires and protecting civilians. Generally once or twice a year, the Security Council renews an operation's mandate and makes adjustments as needed. GAO's review of the Department of State's (State) assessments as of December 2018 and discussions with State officials found that UN peacekeeping operations generally do not fully meet U.S. principles for effective peacekeeping, which include host country consent and an exit strategy, among others.
GAO's review of 11 operations found that all 11 met or partially met the principle of host country consent, while five included or partially included an exit strategy. State officials stated that they must continue to work with the UN to ensure peacekeeping operations meet principles of effectiveness, which they noted are key to success. The United States works with the UN Security Council and member states to adjust peacekeeping mandates, but it lacks sufficient information to determine if associated resources accurately reflect these adjustments. State officials noted that they do not have this information because UN peacekeeping budgets do not estimate costs by mandated task. UN peacekeeping guidance states that when the UN changes a peacekeeping mandate, it should make commensurate changes to that operation's resources. Without information on estimated costs by task, member states have difficulty determining that resources for UN peacekeeping operations accurately reflect mandate changes. The UN has taken steps to improve peacekeeping performance data, but member states have raised concerns about that information's quality, including its completeness and timeliness. Among other concerns, member states note that the UN does not have complete information to assess the performance of civilians, who comprised about 14 percent of peacekeeping personnel, as of December 2018. In March 2018 the UN began peacekeeping reforms, including those to improve performance data. However, according to State officials, these efforts are in the early stages and more work is needed. Without fully addressing member states' concerns about the quality of information, the UN is limited in its ability to improve the performance of peacekeeping operations. 
What GAO Recommends GAO recommends that State take additional steps to ensure that the UN (1) peacekeeping operations meet principles of effectiveness, (2) provides information on the estimated costs of mandated tasks, and (3) addresses member states' concerns about the quality of performance information. State agreed with GAO's recommendations.
Background History of Military Health System Reforms For over a decade, Congress and DOD have led a series of efforts to address the governance structure of the Military Health System, including recommending and implementing significant organizational realignments. DOD undertook a significant organizational realignment effort in June 2011, creating an internal task force to review the governance of the Military Health System and subsequently identified as priorities cost containment, greater integration, and increased unity of effort. In March 2012, DOD submitted a report to Congress that, among other things, proposed creating the DHA to achieve cost savings at headquarters- and administrative-level organizations, TRICARE, the headquarters of military departments’ medical commands and agencies, and other management organizations within the Military Health System that do not directly provide health care services. DOD established the DHA in September 2013 to provide administrative support for the military departments’ respective medical programs by adopting common clinical and business processes, combining common shared services, and coordinating the work of the military departments’ respective MTFs and care purchased from the private sector. The DHA also assumed the administrative responsibility for managing the MTFs in the National Capital Region. The NDAA for Fiscal Year 2013 required that DOD create a detailed plan for carrying out its health care system reform to include the goals of the reform and performance measures to achieve them; the personnel levels required for the DHA and the National Capital Region Medical Directorate; and specific information on the shared services, among other things. In 2015, we reported on DOD’s establishment of the DHA and made five recommendations, and DOD concurred or partially concurred with all of these recommendations. 
DOD has implemented two of the five recommendations by completing some baseline personnel assessments of the DHA workforce and reporting the number and cost of administrative headquarters personnel within the Military Health System in DOD’s fiscal year 2018 Defense Health Program budget estimates. Of the three open recommendations, two relate directly to assessing personnel requirements within the DHA. As of January 2018, these recommendations have not been fully addressed and remain open because DOD has not established processes and procedures to create an overall personnel management process for the DHA. In December 2016, Congress expanded the role of the DHA by directing the transfer of responsibility for the administration of each MTF from the military departments to the DHA. Pursuant to section 1073c(a) of title 10, United States Code, the Director of the DHA shall be responsible for the administration of each MTF, including with respect to budgetary matters, information technology, health care administration and management, administrative policy and procedure, military medical construction, and any other matters the Secretary of Defense determines appropriate. Section 702 of the NDAA for Fiscal Year 2017 required that the Secretary of Defense develop a plan to implement section 1073c of title 10, United States Code, that includes the following four elements: A. how the Secretary will carry out subsection (a) of section 1073c of title 10 of the United States Code; B. efforts to eliminate duplicative activities carried out by the elements of the DHA and military departments; C. efforts to maximize efficiencies in the activities carried out by the DHA; and D. how the Secretary will implement section 1073c in a manner that reduces the number of members of the armed forces, civilian employees who are full-time equivalent employees, and contractors relating to the headquarters activities of the Military Health System, as of the date of the enactment of the act. 
Section 702 of the NDAA for Fiscal Year 2017 also included a provision for us to review DOD’s interim and final reports on the implementation plan. In our review of DOD’s plan in September 2017, we noted that DOD had selected the component model—in which the Director of the DHA would administer each MTF through military department-led intermediary component commands and military department-led MTFs—as the administrative model DOD would use to meet the requirements specified in section 702. Congress, in the Conference report accompanying the NDAA for Fiscal Year 2018 that was issued in November 2017, raised concern about DOD’s lack of progress on the development of the plan and about the component model. Specifically, Congress noted that the component model was an attempt to maintain current stove-piped organizational constructs that risk continued inefficiencies in the Military Health System command and governance structure. In the third interim report, DOD found that the component model would not be adequate to satisfy statutory requirements and subsequently changed from the component model to a new administrative framework. Amendments from the NDAA for Fiscal Year 2019 The NDAA for Fiscal Year 2019 amended section 1073c of title 10, United States Code. The NDAA for Fiscal Year 2019, among other things, provided additional authorities to the Director of the DHA, such as the authority to determine total workforce requirements at each MTF, and established within the DHA two subordinate organizations—one for research and development, and one for public health. Additionally, the NDAA for Fiscal Year 2019 extended the date for the transfer of the administration of the MTFs to the DHA from the original deadline of October 1, 2018, to September 30, 2021. Section 1073c of title 10, United States Code, including these amendments, is reproduced in appendix I.
Roles and Responsibilities of Key DOD Entities in the Military Health System Currently, the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense for Health Affairs, the DHA, and the military departments have various responsibilities for the oversight and management of the Military Health System: The Under Secretary of Defense for Personnel and Readiness is the principal staff assistant and advisor to the Secretary and Deputy Secretary of Defense for health affairs and, in that capacity, develops policies, plans, and programs for health and medical affairs. The Assistant Secretary of Defense for Health Affairs has the primary responsibility for the Military Health System and serves as the principal advisor to the Under Secretary of Defense for Personnel and Readiness for all DOD health policies, programs, and activities. The Assistant Secretary of Defense for Health Affairs also has the authority to develop policies; conduct analyses; issue guidance; provide advice and make recommendations to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, and others; and provide oversight to the DOD components on matters pertaining to the Military Health System. Further, the Assistant Secretary of Defense for Health Affairs prepares and submits a DOD Unified Medical Program budget to provide resources for the Military Health System. The Director of the DHA, in addition to carrying out the responsibilities outlined above, manages the execution of policy developed by the Assistant Secretary of Defense for Health Affairs. The Secretaries of the military departments coordinate with the Assistant Secretary of Defense for Health Affairs to develop certain Military Health System policies, standards, and procedures and provide military personnel and other authorized resources to support the activities of the DHA, among other things. 
The Surgeon General of each military department serves as the principal advisor to the Secretary of the military department concerned on all health and medical matters of the military department. DOD Addressed the Statutory Elements for the Transfer of the Administration of the MTFs to the DHA DOD addressed each of the four statutory elements in its June 2018 plan. DOD dedicated most of the plan to describing the governance structure of DOD’s new administrative framework and to describing the schedule for the phased transfer of the administration of approximately 457 MTFs to the DHA by October 1, 2021. DOD’s plan provided less detail on addressing efforts to eliminate duplicative activities; maximizing efficiency; and reducing the number of headquarters-level military, civilian, and contractor personnel. The following provides a summary of what DOD’s plan included for each of the four elements in the statute: Information on efforts to transfer the administration of the MTFs to the DHA. In its plan, DOD described the transfer of the MTFs to the DHA, including budgetary matters, information technology, health care administration and management, administrative policy and procedure, military medical construction, and all other MTF operations. DOD dedicated most of the plan to describing the (1) new governance structure of the proposed administrative framework model and (2) timeline for the phased transfer of the administration of the 457 MTFs from the military departments’ respective medical commands to the DHA. For example, DOD states that Military Health System governance will shift its focus from consensus-driven bodies that address both policy and management issues to a smaller, streamlined set of oversight councils that focus on high-level, Military Health System-wide policy and budgetary matters. According to the plan, the Assistant Secretary of Defense for Health Affairs will resolve matters that involve both the military departments and the DHA. 
DOD also stated that the DHA plans to establish six intermediate management organizations (two for each of three regions: an East Region, a West Region, and a region outside the United States) to assist with the administration and management of the MTFs. Further, DOD stated that the DHA had established an Assistant Director position for Health Care Administration, as well as four Deputy Assistant Director positions for Information Operations, Financial Operations, Health Care Operations, and Medical Affairs. Regarding the timeline for the phased transfer, beginning no later than October 1, 2018, DOD will transfer 5 of its approximately 457 MTFs to the DHA for the first phase of the transition. MTFs transferring to the DHA for the first phase include the Womack Army Medical Center, Fort Bragg; the Naval Hospital Jacksonville; the 81st Medical Group, Keesler Air Force Base; the 4th Medical Group, Seymour Johnson Air Force Base; and the 628th Medical Group, Joint Base Charleston. In the second phase of the transition, which will begin no later than October 1, 2019, DOD will transfer 244 MTFs from the East Region to the DHA. The third phase will begin no later than October 1, 2020, and will include 134 MTFs from the West Region. The fourth phase will include 79 MTFs outside the United States and begin no later than October 1, 2021. DOD also provided DHA organizational charts for each of the four phases. Information on efforts to eliminate duplicative activities carried out by the DHA and the military departments. In its plan, DOD noted that it is undertaking an analysis of the functions that will be performed at DHA headquarters and at the military departments’ respective medical department headquarters.
In the plan, DOD provided three figures listing the functions, functional responsibilities, and functional requirements that will be carried out by the DHA, the DHA intermediate management organizations, and the military departments’ medical department headquarters. Specifically, the functions listed included those functions that should be with the DHA intermediate management organizations, such as Emergency Planning and Preparation, and those functions that should be with the military departments’ medical department intermediate commands or headquarters, such as Quality and Safety for Healthcare in the Operational Setting. The three figures primarily focused on functions to be performed during the first phase of the transition. Information on efforts to maximize efficiencies in the activities carried out by the DHA. In its plan, DOD included information about its three principal efforts currently underway to address efficiencies. Specifically, DOD describes its broader efforts to streamline clinical and business processes across the Military Health System and links some of these broader initiatives to section 702. According to the plan, efforts such as the use of centralized contract support functions and common purchasing, among others, are made possible because of the transfer of the administration of the MTFs to the DHA. Specific to the transfer of MTFs to the DHA as required by section 1073c of title 10 of the United States Code, DOD’s plan stated that the DHA is developing, publishing, and implementing procedural instructions to help administer and manage the MTFs. The plan also states that each MTF transferring to the DHA will establish a performance plan—referred to as a quadruple aim performance plan—to monitor performance. According to the plan, Military Health System leadership adopted the quadruple aim performance plan to monitor MTF performance, which they believe will improve performance and contribute to better outcomes and increased efficiencies.
The plan states that the performance of all MTFs in the Military Health System will be monitored using the Military Health System quadruple aim performance plan measures beginning October 1, 2018. Information on reducing headquarters-level military, civilian, and contractor personnel within the Military Health System. In its plan, DOD noted that it has already programmed a 25-percent reduction in personnel positions aligned to medical headquarters across the enterprise. Specific to the transfer of MTFs to the DHA as required by section 1073c of title 10 of the United States Code, DOD states that the DHA will experience personnel growth during each subsequent phase of the transition in order to undertake its new responsibilities. Additionally, the plan states that DOD expects at least a 10-percent reduction (approximately 695 positions from the current baseline) in headquarters military and civilian personnel by the end of the transition. However, the plan does not provide specific details about how it will achieve the 10-percent reduction while the DHA experiences personnel growth during each phase. The plan includes a figure depicting military and civilian full-time equivalent positions for the current baseline of the DHA and the military departments’ respective medical department headquarters and intermediate commands. Contractors are also mentioned in the plan at a high level, but without specific data. Additionally, DOD continues to take steps to evaluate personnel requirements. Specifically, according to two June 2018 Under Secretary of Defense for Personnel and Readiness memorandums, DOD is conducting a review and validation of headquarters-level personnel requirements, which we discuss in more depth later in this report. 
Additional Information Would Be Useful to Demonstrate How the Plan Will Reduce or Better Manage Duplication and Improve Efficiencies DOD’s June 2018 plan takes steps toward reducing duplication and improving effectiveness and efficiency, as previously discussed. However, the plan has two weaknesses that could be mitigated with additional information from DOD. Specifically, DOD cannot be reasonably assured that its plan will reduce or better manage duplication and improve efficiency since (1) certain functions are excluded from the transfer to the DHA and (2) it is unclear, based on the information in the plan and supporting planning documents, how implementation of the plan will result in the achievement of the stated goal of reducing headquarters-level personnel, including contractor personnel, by 10 percent. DOD Excluded Certain Functions from the Planned Transfer to the DHA That Could Reduce or Better Manage Duplication As part of its approach for addressing the requirements of section 702 of the NDAA for Fiscal Year 2017, DOD excluded 16 medical functions from the transfer to the DHA. In a February 2018 Under Secretary of Defense for Personnel and Readiness memorandum, these functions were identified as being related to operational readiness and installation-specific missions. That memorandum and another memorandum from the Under Secretary of Defense for Personnel Readiness dated May 2018 listed 16 functions that DOD identified as operational readiness and installation-specific medical functions and that would therefore be excluded from the planned transfer to the DHA (see table 1). DOD cannot be reasonably assured that its plans are reducing or better managing duplication because DOD has not defined the functions or analyzed the potential for the 16 functions to be transferred to the DHA. These functions are not defined in the February or May 2018 memorandums or DOD’s plan.
The two memorandums list only the functions and state that they are separate from MTF health care delivery services and MTF business operations. One of the memorandums explains that these functions are tied to organizing, training, and equipping personnel for operational readiness missions. These memorandums also do not explain the rationale used to determine that the 16 functions were different from the other MTF health care functions DOD plans to transfer to the DHA. Further, DOD did not provide any analysis or documentation regarding the decision to exclude these 16 functions in the supporting documentation that we reviewed, such as in the concepts of operations for the Assistant Secretary of Defense for Health Affairs, the DHA, the Army, the Navy, and the Air Force. According to senior-level officials from the Assistant Secretary of Defense for Health Affairs and the DHA, there was no formal analysis or documentation to support the decision. With respect to the exclusion of the transfer of the dental care function to the DHA, Assistant Secretary of Defense for Health Affairs and DHA senior-level officials stated that dental clinics serve only servicemembers, not retirees or family member beneficiaries. Therefore, dental care was considered to be an operational readiness function rather than a health care delivery function, according to these same officials. However, this statement is not completely in line with DOD information regarding overseas dental care and family member beneficiaries. According to DOD information regarding dental care overseas, family members of active-duty servicemembers can receive dental care from military dental clinics. As such, in some instances the delivery of dental care is not solely for ensuring the readiness of servicemembers.
Further, senior-level officials from the Assistant Secretary of Defense for Health Affairs and the DHA acknowledged that transferring the dental care function from the military departments to the DHA could potentially reduce duplicative activities and result in more efficiencies. According to a senior-level DHA official, splitting health care and dental care results in two separate health care delivery organizations. Across the Military Health System there are approximately 247 (200 in the United States) dental clinics, which represent about a third of DOD’s facilities within the direct care system when including dental clinics, military hospitals, and ambulatory care clinics (i.e., approximately 679 facilities in total). Moreover, senior-level officials from the Assistant Secretary of Defense for Health Affairs and the DHA stated that by transferring a function from the military departments to the DHA, DOD reduces the number of managers of a function from four (i.e., at the Army, the Navy, the Air Force, and the DHA) to only one at the DHA. In our prior work, we have reported that agencies can act to improve the efficiency of their programs by maximizing the level of services provided for a given level of resources, as well as improving programs’ effectiveness in achieving their objectives. In particular, we have highlighted the need for agencies to define their mission, functions, activities, services, and processes when identifying fragmentation, overlap, and duplication among programs. Agencies should also assess how, if at all, the fragmented, overlapping, or duplicative functions are related and how they are being coordinated between agencies. Understanding this relationship will help inform decisions about whether and how to increase efficiency or reduce or better manage fragmentation, overlap, or duplication. Also, agencies should assess whether potential effects in areas such as program implementation, outcomes, and costs are positive or negative. 
Identifying the positive and negative effects of fragmentation, overlap, or duplication will help agencies determine whether or not actions to reduce or better manage the fragmentation, overlap, or duplication are economical and efficient. However, DOD has not fully determined whether opportunities exist to achieve additional savings due to the lack of analysis, including clear definitions, of the 16 functions that were excluded by DOD. According to senior-level officials from the Assistant Secretary of Defense for Health Affairs and the DHA, there are potential savings by transferring the 16 functions to the DHA, but these have not been adequately analyzed. Without defining and analyzing the 16 functions, DOD cannot assure decisionmakers that it has fully considered all opportunities for reducing or better managing duplication in its plan to transfer the administration of the MTFs to the DHA. DOD Has Not Demonstrated That Its Plan Will Lead to Reductions in Headquarters Personnel As previously discussed, DOD’s plan identifies the functions that will transfer to the DHA. However, DOD’s plan and supporting documents do not provide details on how DOD established the 10-percent reduction of headquarters-level military, civilian, and contractor personnel by 2021, when the administration of the 457 MTFs is to have been transferred to DHA. The plan also states that DHA personnel will grow during each subsequent phase of the transition. Further, information in other related supporting documentation indicates that headquarters-level personnel will increase rather than decrease to achieve the 10-percent reduction goal. Lastly, DOD did not include information in the plan or in its supporting documents concerning contractor personnel reductions. Officials from the Army, the Navy, the Air Force, the DHA, and the Office of Cost Assessment and Program Evaluation could not identify for us what office within DOD established the 10-percent reduction goal. 
Our review of key planning documents—the concepts of operations for the Assistant Secretary of Defense for Health Affairs, the DHA, the Army, the Navy, and the Air Force—found that these documents also did not provide details for the 10-percent reduction of headquarters personnel. Specifically, although these documents included some information regarding personnel reductions, they did not include specific details concerning the 10-percent reduction of headquarters personnel. DOD states in the plan that the DHA will experience incremental growth in staffing during each phase of the transition in order to undertake its new responsibilities, but does not explain how it will achieve its 10-percent reduction goal given the projected growth. Further, DOD does not provide any data in the plan about how much the DHA will grow during each phase. Senior-level officials from the offices of the Assistant Secretary of Defense for Health Affairs and the DHA stated that there were no explicit restrictions in section 702 of the NDAA for Fiscal Year 2017 that would prohibit the DHA from increasing its number of personnel. However, section 702 does require that the Secretary implement section 1073c in a manner that reduces the number of members of the armed forces; civilian employees who are full-time equivalent employees; and contractors relating to the headquarters activities of the military health system, which includes the DHA. Further, the projected growth described in DOD’s plan is also consistent with a June 2018 DHA pre-decisional draft briefing concerning full-time equivalent positions based on current information provided by the military departments, which describes a transfer of personnel to the DHA from the military departments rather than a reduction in personnel. According to the briefing, full-time equivalents to support future DHA headquarters and intermediate management organizations would not lead to any reductions in personnel. 
On the contrary, the briefing states that full-time equivalents for military and civilian personnel would increase by 38 percent at the DHA and result in additional costs. A senior-level DHA official confirmed that the information in the briefing relates to a transfer of personnel from the military headquarters to the DHA for health care delivery, not a reduction in personnel that would result in no cost savings. The briefing also states that information related to current and future state full-time equivalent positions is misleading because contractor data, as well as other relevant personnel data, are not included. Regarding contractor data, DOD did not include any detailed information related to the reduction of contractor personnel in the plan. Specifically, information concerning contractor personnel reductions was not included in the figure or other parts of the section concerning headquarters-level personnel reductions. Overall, contractors are referenced only five times in the entire plan: Three of the references are simply repeating the language from the statutory requirement. Another reference reiterates that the DHA will assume management responsibilities for civilian and contractor personnel performing health care delivery functions and operations. The last reference from the section of the plan related to personnel reductions states that DOD is planning for headquarters personnel reductions, to include military, civilian, and contractor personnel. In reviewing the concepts of operations for the Assistant Secretary of Defense for Health Affairs, the DHA, the Army, the Navy, and the Air Force for details on contractor personnel, we found that most of these documents did not provide details regarding contractors. Four out of five of the aforementioned concepts of operations did not include information concerning contractors in the context of personnel reductions. 
Although the Assistant Secretary of Defense for Health Affairs’ concept of operations does include information about contractors in the context of personnel reductions, the information does not provide further details about DOD’s plans for this effort. According to DOD Directive 1100.4, Guidance for Manpower Management, it is DOD policy that personnel requirements are driven by workload and shall be established at the minimum levels necessary to accomplish mission and performance objectives. This directive states that personnel is a resource and that changes in personnel shall be preceded by changes to the programs, missions, and functions that require personnel resources. Additionally, the directive states that assigned missions shall be accomplished using the least costly mix of personnel (military, civilian, and contract) consistent with military requirements, among other considerations. The directive also states that military (active and reserve) and civilian manpower resources shall be programmed in accordance with validated personnel requirements, among others. Moreover, key change management practices concerning workforce reductions state that before implementing workforce reduction strategies, it is critical that agencies carefully consider how to strategically downsize the workforce and maintain the staff resources needed to carry out their missions. These same key change management practices also define “efficiency” as maintaining federal government services or outcomes using fewer resources (such as time and money) or improving or increasing the quality or quantity of services or outcomes while maintaining (or reducing) resources. However, DOD’s ability to develop an analytically based goal for personnel reductions associated with the transfer of administration to the DHA, a plan to achieve that goal given its projected growth in personnel, and an explanation of how contractors factor into its plan has been limited for two reasons.
First, DOD has not validated headquarters-level personnel requirements. Second, DOD has not conducted a comprehensive review—a review that, per DOD’s own guidance, would involve establishing at minimum levels the requirements necessary to accomplish mission and performance objectives and reflect the consideration of the least costly mix of personnel (i.e., military, civilian and contract) consistent with military requirements, among other considerations, to meet the validated requirements. Senior-level officials from the offices of the Assistant Secretary of Defense for Health Affairs and the DHA stated that information regarding contractor personnel reductions was not included in the plan because DOD probably did not have these data. These same officials said that it is difficult to obtain contractor personnel data. As we previously noted, DOD has faced challenges with understanding DHA headquarters personnel requirements and composition. In 2015, we reported on DOD’s establishment of the DHA and on how, among other things, DOD could not determine DHA’s effect on Military Health System administrative and headquarters personnel levels. We found that the DHA had not completed the personnel requirements assessment process or developed a baseline estimate of personnel in the Military Health System before the DHA was created. As discussed previously, we made five recommendations, with which DOD concurred or partially concurred. As of January 2018, DOD had not taken action to fully address three of these recommendations. Of the three recommendations that had not been fully addressed, two relate directly to DHA personnel requirements. 
Specifically, we recommended the following:

To provide decision makers with appropriate and more complete information on the continuing implementation, management, and oversight of the DHA, the Secretary of Defense should direct the Assistant Secretary of Defense for Health Affairs to develop a comprehensive requirements assessment process that accounts for needed future skills through the consideration of potential organizational changes and helps ensure appropriate consideration of workforce composition through the determination of the final status of military personnel within the DHA.

To provide decision makers with appropriate and more complete information on the continuing implementation, management, and oversight of the DHA, the Secretary of Defense should direct the Assistant Secretary of Defense (Health Affairs) to develop a plan for reassessing and revalidating personnel requirements as the missions and needs of the DHA evolve over time.

Since the recommendations concerning DHA personnel requirements have not been fully addressed and DHA is in the middle of a significant organizational change, it would be timely for DOD to validate headquarters-level personnel requirements and conduct a comprehensive review to determine the appropriate mix of personnel. This validation and comprehensive review should occur prior to transferring authority, direction, and control of the MTFs to the DHA for the third phase, which, as previously noted, is scheduled to begin no later than October 1, 2020. In June 2018, DOD directed a review and validation of headquarters-level personnel requirements. The Under Secretary of Defense for Personnel and Readiness issued two memorandums concerning the review of headquarters-level personnel requirements. The June 7, 2018, memorandum directs the establishment of cross-service manpower teams to conduct a baseline review of DHA headquarters’ current and future personnel requirements.
Similarly, the June 15, 2018, memorandum directs the establishment of a working group to determine the appropriate manning of all above-MTF-level medical activities in the military departments. This memorandum also requires the working group to review and validate the results of the cross-service manpower teams’ assessment of DHA headquarters activities, among other requirements. Officials with the Office of the Under Secretary of Defense for Personnel and Readiness involved in these efforts said that the goal of the current review is to identify the DHA’s current and future baseline personnel requirements. However, according to these same officials, the review will not (1) validate personnel requirements because of time constraints, (2) identify potential personnel reductions, or (3) consider workforce composition. These officials also clarified that a comprehensive personnel requirements study would take a considerable amount of time and would generate more technical estimates of the work being performed. They said such a study would review major functions and subfunctions, as well as get down to the task level and analyze work processes, which would allow for making process improvement suggestions. In September 2018, the Office of the Under Secretary of Defense for Personnel and Readiness issued the report on DHA’s personnel requirements. The report stated that DHA personnel requirements would increase to support an expanded mission and included several recommendations, one of which was to conduct a military essentiality review of DHA positions and functions. According to officials with the Office of the Under Secretary of Defense for Personnel and Readiness, each military department provided headquarters personnel data, which will be reviewed as part of the upcoming Program Budget Review cycle.
Until DOD validates headquarters-level personnel requirements and conducts a comprehensive review that considers the least costly mix of personnel, DOD may not be able to achieve its goal of reducing headquarters-level personnel by 10 percent while maintaining the efficient and effective provision of healthcare services. Furthermore, Congress will lack important information to determine the extent to which the transfer of the administration of the MTFs to the DHA is being planned and implemented effectively and efficiently.

Conclusions

Congress required DOD to provide a plan to transfer the administration of the MTFs from the military departments to the DHA. DOD provided a final implementation plan, which made significant changes to the administrative approach described in two of DOD’s initial interim plans. In its final plan, DOD addressed all of the elements of the statute. However, the plan did not provide details to demonstrate how DOD will reduce duplicative activities or headquarters-level personnel. Without defining and analyzing the 16 functions currently excluded from transfer to the DHA, validating headquarters-level personnel requirements, and conducting a comprehensive review to determine, per DOD guidance, the least costly mix of personnel, DOD and congressional decision makers are not positioned to know how, whether, and to what extent undertaking this significant reform effort will improve effectiveness and efficiency in the administration of the MTFs.
Recommendations for Executive Action

We are making the following three recommendations to DOD:

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the Director of the DHA and the Surgeons General of the military departments, define and analyze the 16 operational readiness and installation-specific medical functions currently excluded from transfer to the DHA to determine whether opportunities exist to reduce or better manage duplicative functions and improve efficiencies in the administration of the MTFs. (Recommendation 1)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the DHA Assistant Director for Health Care Administration and the Secretaries of the military departments, validate headquarters-level personnel requirements to determine that they are established at the minimum levels necessary—per DOD guidance—to accomplish missions and achieve objectives before transferring authority, direction, and control of the MTFs to the DHA for the third phase. (Recommendation 2)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the DHA Assistant Director for Health Care Administration and the Secretaries of the military departments, conduct a comprehensive review to identify the least costly mix—per DOD guidance—of military, civilian, and contractor personnel needed to meet validated requirements—that is, to perform the functions identified at the DHA headquarters and intermediate management organizations and at the military departments’ headquarters and intermediate commands. Additionally, this comprehensive review should be completed before transferring authority, direction, and control of the MTFs to the DHA for the third phase.
(Recommendation 3)

Agency Comments and Our Evaluation

In written comments reproduced in appendix II, DOD concurred with all three recommendations and noted the actions it was taking to address each recommendation. In response to our third recommendation, DOD noted that it has completed an extensive review of manpower requirements for the management structure of the DHA. The September 2018 report by the Office of the Under Secretary of Defense for Personnel and Readiness is a first step toward addressing our recommendation. The report provided initial information concerning DHA’s personnel requirements. As we noted in our report, however, DOD needs to identify the least costly mix—per DOD guidance—of military, civilian, and contractor personnel once it has validated requirements for the DHA. As an additional comment, DOD noted that since our draft report was provided for comment, it has refined the estimated projected growth in full-time equivalents for military and civilian personnel at the DHA from 38 percent to 14 percent. In its comments, DOD stated that it continues to believe that it will achieve a 10 percent reduction. However, as we stated in this report, DOD has not demonstrated the extent to which its plan to transfer the MTFs to the DHA will lead to reductions in headquarters-level personnel. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Assistant Secretary of Defense for Health Affairs; the Director, Cost Assessment and Program Evaluation; the Director, Defense Health Agency; the Surgeon General of the Army; the Surgeon General of the Navy; and the Surgeon General of the Air Force. If you or your staff have any questions concerning this report, please contact me at (202) 512-3604 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.

Appendix I: Section 1073c of Title 10, United States Code

The National Defense Authorization Act (NDAA) for Fiscal Year 2017 amended Chapter 55 of title 10, United States Code, to include a new section: § 1073c, Administration of Defense Health Agency and military medical treatment facilities. Section 1073c of title 10, United States Code, as amended by Pub. L. No. 115-91, §§ 713, 1081 (2017), and Pub. L. No. 115-232, § 711 (2018), reads as follows:

§ 1073c. Administration of Defense Health Agency and military medical treatment facilities (a) Administration of military medical treatment facilities. (1) In accordance with paragraph (4), by not later than September 30, 2021, the Director of the Defense Health Agency shall be responsible for the administration of each military medical treatment facility, including with respect to-- (A) budgetary matters; (B) information technology; (C) health care administration and management; (D) administrative policy and procedure; (E) military medical construction; and (F) any other matters the Secretary of Defense determines appropriate.
(2) In addition to the responsibilities set forth in paragraph (1), the Director of the Defense Health Agency shall, commencing when the Director begins to exercise responsibilities under that paragraph, have the authority— (A) to direct, control, and serve as the primary rater of the performance of commanders or directors of military medical treatment facilities; (B) to direct and control any intermediary organizations between the Defense Health Agency and military medical treatment facilities; (C) to determine the scope of medical care provided at each military medical treatment facility to meet the military personnel readiness requirements of the senior military operational commander of the military installation; (D) to determine total workforce requirements at each military medical treatment facility; (E) to direct joint manning at military medical treatment facilities and intermediary organizations; (F) to address personnel staffing shortages at military medical treatment facilities; and (G) to select among service nominations for commanders or directors of military medical treatment facilities. (3) The military commander or director of each military medical treatment facility shall be responsible for-- (A) ensuring the readiness of the members of the armed forces and civilian employees at such facility; and (B) furnishing the health care and medical treatment provided at such facility. (4) The Secretary of Defense shall establish a timeline to ensure that each Secretary of a military department transitions the administration of military medical treatment facilities from such Secretary to the Director of the Defense Health Agency pursuant to paragraph (1) by the date specified in such paragraph. (5) The Secretary of Defense shall establish within the Defense Health Agency a professional staff to provide policy, oversight, and direction to carry out paragraphs (1) and (2). The Secretary shall carry out this paragraph by appointing the positions specified in subsections (b) and (c). (b) DHA Assistant Director.
(1) There is in the Defense Health Agency an Assistant Director for Health Care Administration. The Assistant Director shall-- (A) be a career appointee within the Department; and (B) report directly to the Director of the Defense Health Agency. (2) The Assistant Director shall be appointed from among individuals who have equivalent education and experience as a chief executive officer leading a large, civilian health care system. (3) The Assistant Director shall be responsible for the following: (A) Establishing priorities for health care administration and management. (B) Establishing policies, procedures, and direction for the provision of direct care at military medical treatment facilities. (C) Establishing priorities for budgeting matters with respect to the provision of direct care at military medical treatment facilities. (D) Establishing policies, procedures, and direction for clinic management and operations at military medical treatment facilities. (E) Establishing priorities for information technology support and information management at and between the military medical treatment facilities. (c) DHA Deputy Assistant Directors. (1) (A) There is in the Defense Health Agency a Deputy Assistant Director for Information Operations. (B) The Deputy Assistant Director for Information Operations shall be responsible for policies, management, and execution of information technology operations at and between the military medical treatment facilities. (2) (A) There is in the Defense Health Agency a Deputy Assistant Director for Financial Operations. (B) The Deputy Assistant Director for Financial Operations shall be responsible for the policy, procedures, and direction of budgeting matters and financial management with respect to the provision of direct care across the military health system. (3) (A) There is in the Defense Health Agency a Deputy Assistant Director for Health Care Operations.
(B) The Deputy Assistant Director for Health Care Operations shall be responsible for the policy, procedures, and direction of health care administration in the military medical treatment facilities. (4) (A) There is in the Defense Health Agency a Deputy Assistant Director for Medical Affairs. (B) The Deputy Assistant Director for Medical Affairs shall be responsible for policy, procedures, and direction of clinical quality and process improvement, patient safety, infection control, graduate medical education, clinical integration, utilization review, risk management, patient experience, and civilian physician recruiting. (5) Each Deputy Assistant Director appointed under paragraphs (1) through (4) shall report directly to the Assistant Director for Health Care Administration. (d) Certain responsibilities of DHA Director. (1) In addition to the other duties of the Director of the Defense Health Agency, the Director shall coordinate with the Joint Staff Surgeon to ensure that the Director most effectively carries out the responsibilities of the Defense Health Agency as a combat support agency under section 193 of this title. (2) The responsibilities of the Director shall include the following: (A) Ensuring that the Defense Health Agency meets the operational needs of the commanders of the combatant commands. (B) Coordinating with the military departments to ensure that the staffing at the military medical treatment facilities supports readiness requirements for members of the armed forces and health care personnel. (C) Ensuring that the Defense Health Agency meets the military medical readiness requirements of the senior military operational commanders of the military installations.
(e) ADDITIONAL DHA ORGANIZATIONS.—Not later than September 30, 2022, the Secretary of Defense shall, acting through the Director of the Defense Health Agency, establish within the Defense Health Agency the following: (1) A subordinate organization, to be called the Defense Health Agency Research and Development— (A) led, at the election of the Director, by a director or commander (to be called the Director or Commander of Defense Health Agency Research and Development); (B) comprised of the Army Medical Research and Materiel Command and such other medical research organizations and activities of the armed forces as the Secretary considers appropriate; and (C) responsible for coordinating funding for Defense Health Program Research, Development, Test, and Evaluation, the Congressionally Directed Medical Research Program, and related Department of Defense medical research. (2) A subordinate organization, to be called the Defense Health Agency Public Health— (A) led, at the election of the Director, by a director or commander (to be called the Director or Commander of Defense Health Agency Public Health); and (B) comprised of the Army Public Health Command, the Navy–Marine Corps Public Health Command, Air Force public health programs, and any other related defense health activities that the Secretary considers appropriate, including overseas laboratories focused on preventive medicine, environmental health, and similar matters. (f) Definitions. In this section: (1) The term "career appointee" has the meaning given that term in section 3132(a)(4) of title 5. (2) The term "Defense Health Agency" means the Defense Agency established pursuant to Department of Defense Directive 5136.13, or such successor Defense Agency.
Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Lori Atkinson, Assistant Director; Alexandra Gonzalez; Rebecca Guerrero; Mae Jones; Mary Jo LaCasse; Kirsten Leikem; Steven Putansu; and Sarah Veale made key contributions to this report.
Why GAO Did This Study

In fiscal year 2017, DOD provided health care to 9.4 million beneficiaries, including servicemembers, retirees, and their families, at a cost of $43 billion. For more than a decade, partially in response to congressional mandates, DOD has worked to address inefficiencies in the Military Health System to control costs. To further achieve efficiencies, the NDAA for Fiscal Year 2017 required DOD to develop an implementation plan that addressed four elements related to transferring the administration of the MTFs to the DHA. DOD issued the plan in June 2018. The NDAA also included a provision for GAO to review the plan. GAO determined whether (1) DOD's plan included the statutory elements related to the transfer of administration of the MTFs to the DHA and (2) additional information would be useful to demonstrate that the plan will reduce or better manage duplication and improve efficiencies. GAO assessed DOD's plan against the required elements and, where appropriate, considered the extent to which the plan provided detailed information related to key change management practices identified in past GAO work.

What GAO Found

The Department of Defense's (DOD) June 2018 plan addressed the four statutory elements for the transfer of the administration of the military treatment facilities (MTFs) from the military departments to the Defense Health Agency (DHA). Specifically, the plan provided information on (1) how the DHA will take administrative responsibility of the MTFs; (2) efforts to eliminate duplicative activities; (3) efforts to maximize efficiencies in the DHA's activities; and (4) reductions of headquarters-level military, civilian, and contractor personnel. DOD dedicated most of the plan to describing the governance structure of the proposed administrative framework and to describing the timeline for a phased transfer of the approximately 457 MTFs to the DHA by October 1, 2021.
Initially, DOD was to transfer responsibility for the administration of the MTFs to the DHA by October 1, 2018. However, Congress in the National Defense Authorization Act (NDAA) for Fiscal Year 2019 amended the law to allow, among other things, DOD to complete the transfer by September 30, 2021. DOD has taken key steps in its June 2018 plan to improve the effectiveness and efficiency of the administration of MTFs. However, DOD's plan has two weaknesses that could be mitigated with additional information. Specifically, DOD excluded 16 operational readiness and installation-specific medical functions from consideration for transfer to the DHA. DOD did not define or analyze the potential effect of excluding these functions, which include dental care, substance abuse, and occupational health. Senior officials from the DHA and the Assistant Secretary of Defense for Health Affairs acknowledged that transferring the dental care function, for example, from the military departments to the DHA could potentially reduce duplicative activities. DOD's plans to achieve the stated goal of reducing headquarters-level personnel, including contractor personnel, by 10 percent are unclear. In its June 2018 plan, DOD states that the DHA will experience personnel growth during each phase of the transition, but that it expects to reduce headquarters-level personnel by 10 percent by 2021. However, the plan does not provide specific details about how DOD will achieve the established goal of reducing headquarters-level personnel by 10 percent while the DHA experiences personnel growth. Further, the plan does not address whether and how contractor personnel factor into the reduction. This lack of clarity exists because DOD has not validated headquarters-level personnel requirements or conducted a comprehensive review to identify the least costly mix of military, civilian, and contractor personnel to meet the validated requirements. 
Until DOD takes action to resolve these two weaknesses, DOD will likely not be well positioned to reduce or better manage duplication and improve efficiencies, including reducing headquarters-level personnel across the Military Health System. Furthermore, Congress will lack important information to determine the extent to which the transfer of the administration of the MTFs to the DHA is being planned and implemented effectively and efficiently.

What GAO Recommends

GAO recommends that DOD define and analyze the 16 operational readiness and installation-specific medical functions for duplication, validate headquarters-level personnel requirements, and identify the least costly mix of personnel. DOD concurred with all three recommendations and noted actions it was taking to address each one.
Background

VA administers its services and programs through three distinct administrations—the Veterans Health Administration (VHA), the Veterans Benefits Administration, and the National Cemetery Administration. VHA is the largest property holder within VA and is responsible for overseeing health care delivery to enrolled veterans and managing all VA medical facilities. VHA’s VISNs are responsible for overseeing medical facilities, and VA works with the VISNs and local medical facilities to manage its real property assets through VA’s capital-planning process.

Responsibilities for Disposing of Properties

Various VA offices share responsibilities for managing and disposing of real properties. Specifically:

VISNs and local facilities are responsible for identifying, planning, and managing underutilized and vacant properties, including executing demolitions of buildings.

Office of Capital Asset Management, Engineering, and Support, within VHA, is responsible for supporting the property disposal efforts of VISNs and local facilities, including providing funding for demolitions (if properties are part of a minor construction project or non-recurring maintenance project).

Office of Construction and Facilities Management, within VA’s Office of Acquisition, Logistics and Construction, is responsible for: (1) developing and updating policies and procedures on disposal actions (except enhanced-use leases) and executing them; (2) coordinating the Stewart B. McKinney Homeless Assistance Act’s (McKinney-Vento Act) screening process for potential homeless use prior to disposal; (3) overseeing implementation of required federal environmental reviews for planning and construction of major projects and real property actions; and (4) promulgating policy related to historic preservation, among other things.
Office of Asset Enterprise Management (Asset Enterprise Office), within VA’s Office of Management, is responsible for: (1) ensuring local facility disposal requests align with VA policy; (2) reviewing real-property inventory data, including annual disposal plans; (3) monitoring completion of disposal projects; (4) executing enhanced-use lease-related disposals; and (5) overseeing the Strategic Capital Investment Planning process, among other responsibilities.

VA’s Disposal Process

According to VA’s guidance on managing underutilized properties and disposals, the process for managing vacant properties usually begins with VISNs and local medical facilities. Together, they are responsible for identifying underutilized real properties and updating this information in the CAI database, which VA uses to manage its real property. VA has also identified and prioritized the disposal options VISNs and local facilities have for determining what to do with vacant and underutilized properties they have identified. As shown in figure 2, VA’s first priority is to re-use vacant and underutilized properties within the department. If properties cannot be re-used, then VA looks at disposal options that would remove them from its inventory. If no disposal options are feasible, then VA may choose to close or “mothball” properties. Properties in the CAI database with utilization rates that are less than 50 percent—including vacant properties—are candidates for disposal, and VISNs’ and local facilities’ managers are required to develop a disposal plan for all vacant buildings or update an existing plan for these facilities each year. VA may choose from several options to dispose of vacant and underutilized properties, including entering into an enhanced-use lease, demolishing the property, conducting a like-kind exchange, transferring real property to a state for nursing home use, declaring the property excess for disposal through GSA, or mothballing it, among others. (See fig. 3.)
The disposal process differs depending on the disposal method selected. As part of the disposal process, VA is required to take certain actions, including conducting environmental reviews and considering the effects of its actions on historic properties. Accordingly, VA conducts “due diligence” reviews on vacant properties, and these reviews include complying with selected federal requirements described in table 1 below.

Number and Characteristics of Disposals

From fiscal years 2012 through 2017, VA disposed of 577 properties (including 471 buildings with about 5 million gross square feet), primarily through demolition of medical facilities and enhanced-use lease agreements (see fig. 4). These two methods accounted for the disposal of 3.6 million gross square feet of building space. VA used other disposal methods, such as transferring property to states for nursing home care or negotiating a sale, for the remaining 50 properties, as shown in figure 4 below. As of July 2018, VA reported initiating the disposal or re-use of 167 of the 430 vacant buildings the Secretary identified for disposal in June 2017. Of the 471 building disposals from fiscal years 2012 through 2017, VA disposed of 203 buildings in fiscal year 2012 alone, in contrast to 61 building disposals in fiscal year 2017, as shown in figure 5. A VA official attributed the decline in disposals from fiscal year 2012 to fiscal year 2013 to limitations placed on VA’s enhanced-use lease authority in 2012. The characteristics of the 471 buildings VA disposed of varied from fiscal years 2012 through 2017. The majority (331 of 471) were offices, housing quarters, service buildings, and warehouses; other buildings included hospitals, laboratories, and outpatient healthcare facilities. VA reported many of these buildings as historic, as shown in figure 6. More than a third of the vacant buildings designated as non-historic were demolished.
Almost a third of the buildings—primarily housing quarters—were disposed of using enhanced-use leases.

VA Is Addressing Some of Its Ongoing Disposal Challenges but Lacks Procedures to Manage Property Disposals

VA officials and stakeholders we spoke with said that administering environmental and historic reviews is a key challenge for disposals. Two other ongoing challenges—the marketability of VA properties and prioritizing funding for disposals—were also mentioned as factors impeding VA’s property disposal efforts. As part of VA’s initiative to begin the re-use or disposal process for 430 vacant properties within 2 years, VA has begun addressing its environmental and historic review challenges. For example, VA established a working group to assist VISNs’ and local facilities’ managers in conducting these reviews. While VA is addressing challenges related to these reviews, limited interest in purchasing or leasing VA properties and competition for funding with other important VA projects directly related to veterans’ care are ongoing challenges that continue to hinder disposal efforts.

VA is Taking Steps to Facilitate Environmental and Historic Reviews, but Properties’ Marketability and Competing Priorities Remain Challenging

Environmental and Historic Reviews

VA officials and stakeholders we spoke with cited the time it takes to complete the required environmental and historic reviews as a challenge in managing the disposal process. Although VA does not maintain data on how long these reviews can take or how long it takes to dispose of its properties, in our review of 31 selected properties, we found variation in the timespan to conduct environmental and historic reviews. The environmental reviews of these properties took about 2 years on average to complete, depending on the condition of the property. For example, an environmental review of temporary storage facilities in Biloxi took about a year, as no environmental issues were identified.
In another case, it took about 2 years to conduct an environmental review of VA’s Cincinnati-Fort Thomas property, as asbestos and lead paint were identified during the course of the review. For those disposals requiring historic reviews, we found that it took about 5 years on average, depending on the complexity of the disposal. For example, according to VA officials, it took 5 years to complete a historic review of the St. Louis, Jefferson Barracks property due to the need to collaborate with multiple stakeholders (including the neighboring Army National Guard base, the state’s historic preservation office, the local community council, community organizations, and many veteran service organizations) and to address the adverse effects on historic properties. VA officials and stakeholders we spoke with stated that due to a lack of staff expertise and resources, VISNs’ and local facilities’ managers may choose to contract out these reviews, but procuring contractors may also add time to the disposal process, as facility managers need to define the terms of work and identify contractors. Further, environmental and historic reviews can affect VA’s decision-making process with regard to choosing a disposal method, potentially lengthening the time it takes for disposal. For example, VA officials told us that they began a historic review on the Pittsburgh-Highland Drive property in 2012 but discontinued the review in 2013, partially due to disagreements with historic preservation stakeholders about the proposed demolition of some historic buildings. After 4 years, in 2017, VA decided to declare the property as excess and turn it over to GSA for disposal. According to VA officials, this required a different historic review, as it entailed a different disposal method. GSA is currently administering the additional historic review of this property.
VA has begun taking actions to reduce the time it takes to conduct environmental and historic reviews as part of VA’s initiative to begin the process of re-using or disposing of 430 vacant buildings within 2 years. For example, VA worked with the Advisory Council on Historic Preservation to obtain a program comment alternative to reduce time spent with historic preservation stakeholders when consulting on “ancillary utilitarian support buildings and structures,” such as a boiler plant or a sewage plant. VA officials also told us that they established a headquarters-level working group consisting of experts in historic preservation and environmental reviews as well as real property transactions to assist VISNs’ and local facilities’ managers in administering disposals, including conducting these reviews, and in moving them forward. VA officials also told us that they awarded four regional contracts to complete the environmental and historic reviews and expedite the disposal process. VA officials and historic preservation stakeholders we spoke with also said they can have disagreements on how to meet the historic review requirements, and such disputes can add time to the review process. The historic preservation stakeholders commented that VA does not consult with them early in the disposal decision-making process and does not provide adequate information on the adverse effects of demolishing a historic property or on other potential methods through which VA could dispose of a property. VA officials we spoke with stated that they have been consulting with historic preservation stakeholders on all disposal projects as required. To improve collaboration and communication between VA and external stakeholders, VA developed a toolkit in June 2017 on how to effectively communicate with stakeholders.
This communications toolkit responded to our recommendation for VA to develop and distribute guidance for VISNs’ and local facilities’ managers to use when communicating with stakeholders on facility alignment changes, and we subsequently closed this recommendation.

Competing Priorities

VA officials and stakeholders we spoke with also pointed out that competition for VA funds is another remaining challenge. VA officials stated that projects to demolish buildings compete for funding with other capital projects, such as renovating inpatient units. Since VA’s mission is to provide health care services, demolishing buildings is not as high a priority as other projects that may lead to better health care services. VA officials also told us that competing priorities can affect how long it takes to dispose of vacant properties. If a demolition project is part of a construction project, then VA may give it a relatively high priority for funding. For example, at VA’s Dayton campus it took about a year from when VA requested funding in 2016 to demolish two historic buildings in 2017. A VA official said that due to a $1 million donation to build a Fisher House on VA’s Dayton campus, funds were prioritized to demolish two national historic landmark buildings to make space available for construction of the Fisher House. However, according to other VA officials, demolition projects in and of themselves do not rank well for funding, and such rankings can affect the time it takes for disposal. For example, a VA official said that VA had initially planned to demolish a temporary building on the Cleveland Wade Park campus sometime during the 2012-to-2013 time frame; however, VA did not demolish the temporary building until 2017, in part due to the longer-than-expected time it took for VA to allocate funds to this project. If funds are not available for demolition, a building can remain vacant for many years.
For example, VA closed several properties on its Sepulveda Ambulatory Care Center campus in North Hills, CA, after they sustained major damage from the 1994 Northridge earthquake. According to VA officials, competing funding priorities, among other factors, contributed to the long wait to demolish these vacant properties, which had not been disposed of as of October 2018 (see fig. 7). VA officials also noted that waiting for VA to allocate funds to demolish properties can result in additional potential costs later on. For instance, VA officials mentioned that since buildings on the Sepulveda campus have been vacant for many years, they now qualify for historic status, requiring them to undergo a historic review—a requirement that could have been avoided if VA had demolished them more than 20 years ago when they were originally identified for disposal.

Marketability of VA Properties

VA officials and stakeholders we spoke with identified property characteristics that affect the marketability of VA properties—historic status, deficient physical conditions, location, unusable building configuration, and repair costs—as barriers for disposal. This is a long-standing challenge that limits VA’s ability to re-use or dispose of vacant and underutilized properties. In our recent analysis of VA’s CAI data, we found that a majority of VA’s vacant properties (about 78 percent) from fiscal years 2012 through 2017 have historic status, and the average age of those vacant properties is about 91 years. As discussed earlier, historic reviews can be lengthy and can make the disposal process challenging, according to VA officials. Also, older buildings are likely to have configurations that are difficult to use or are in need of significant repair. VA officials and stakeholders said that the location of VA properties limits disposal options.
For example, a VA official told us that demolition is sometimes the only disposal option available when a deficient building is located on an existing VA campus and cannot otherwise be re-used or disposed of and removed from VA’s inventory. VA officials also stated that historic buildings are frequently located in the middle of a campus and sometimes cannot be easily demolished due to the historic designation (see fig. 8). In these cases, VA will close and “mothball” the building to minimize maintenance and operations costs and let it sit vacant as an interim measure. VA officials commented that there are also safety and security challenges associated with disposing of or re-using a building located in the middle of a VA campus. For example, a local facility manager told us that when two of its buildings on campus were leased out to an organization on a short-term lease for use as dormitories, young adults from the dormitories gained access to private inpatient areas, violating patients’ privacy. This is consistent with our previous findings that many disposable VA properties located in the middle of medical campuses draw limited private-sector interest, making some disposal options challenging. VA officials and stakeholders we spoke with—including commercial real estate experts—also indicated that it can be difficult to attract developers for several reasons. In one instance, a VA official and a stakeholder we spoke with told us that it took multiple years to identify developers that would take on environmental mitigation efforts as part of the negotiated sale and transfer of VA’s properties to the City of Fort Thomas, Kentucky. According to a stakeholder, developers were not willing to take on the cost and risk of environmental mitigation without a title to the property or any guaranteed income from it.
VA, however, could not transfer the property title to a third party without first meeting federal standards for cleaning up the environmental hazards on the properties. While the issue was ultimately addressed, it took several years to complete the deal.

VA Lacks Clear Procedures to Manage Property Disposals

Another challenge that VA officials and stakeholders raised was VA’s lack of clear disposal procedures. Several VA officials and stakeholders we spoke with stated that it is unclear what specific steps need to be taken for disposals, what the targeted time frames for completing those steps are, and who is responsible for completing them. VA’s guidance on managing underutilized properties and disposals provides policies and procedures on a portfolio level, such as VA’s priorities for disposing of vacant properties and the different disposal options available. However, VA’s guidance does not specify the sequential steps and actions that VISNs’ and local facilities’ managers need to take at the project level to plan, implement, and execute property disposals. Further, a VA official in headquarters told us that VA does not have formal guidance on selecting any particular disposal method. While we found that documentation on policies and procedures exists for some specific disposal methods, such as enhanced-use lease projects, VA officials told us that policies and procedures for other disposal actions, such as transferring or declaring property as excess and disposing of it through GSA, are not documented. A VA official in headquarters told us that informal guidance may exist in some VISNs, but no standardized procedures on managing a disposal project are available. VA officials said there are no step-by-step procedures to refer to when using a disposal option more complex than demolishing a building.
A VISN facilities’ manager we spoke with further pointed out that no decision tree exists to help local facilities plan, implement, and execute the different disposal methods and navigate VA’s decentralized and complex disposal process. VA officials told us that VA’s disposal process is decentralized, an approach that can contribute to unclear procedures for disposal projects. According to VA officials, VISNs’ and local facilities’ managers are responsible for making disposal decisions, developing a disposal plan, and executing the disposal. As previously discussed, different VA program offices are responsible for different disposal actions, depending on the disposal method that VISNs’ and local facilities’ managers are considering. VA officials noted that this decentralized approach to managing disposals can make it difficult for VISNs’ and local facilities’ managers as well as local stakeholders to know when or how best to coordinate with the appropriate VA offices. A real property stakeholder we spoke with also noted that common uncertainties in working with VA, such as its lack of a clear and timely disposal process, can hinder developers’ interest in VA properties. Specifically, the stakeholder stated that VA’s decision-making process is divided among different entities within VA, a situation that may add time to the disposal process, and that having a clear and timely disposal process may provide a level of certainty for developers. VA officials and stakeholders also said that in some cases, VISNs’ and local facilities’ managers may lack the knowledge and experience to manage disposals. For example, VA officials told us that while facility managers generally know what actions are needed to demolish properties, they are not familiar with the actions that need to be taken to transfer or sell properties to a third party or to turn excess property over to GSA for disposal.
VA officials also mentioned staff turnover and the infrequency of disposals as contributing factors to staff’s lack of knowledge of procedures for disposing of properties. For example, two facilities’ managers we spoke with said that until recently, in their many years of working for VA, they had never reported a property as excess and disposed of it through GSA. VA officials and stakeholders further noted that VISNs’ and local facilities’ managers may lack expertise in conducting historic and environmental reviews, as they are usually engineers, who are not experts on environmental and historic issues. For example, a VISN facility manager informed us that a local facility manager was not familiar with administering an environmental review, which led to a misstep in the review, duplication of work, and added time to the disposal process. While VA has policies and guidance on historic and environmental reviews, our review of these documents showed that they do not provide guidance on how to make decisions, what actions to take, what the targeted time frames for taking those actions are, and who should complete those actions. Further, while VA officials with experience in disposals may estimate how long these reviews can take, VA does not have documented guidance on estimated time frames (milestones) for taking those actions. Federal internal controls call for documentation to help management oversee execution of procedures by establishing and communicating the “who, what, when, where, and why” to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, and a means to communicate that knowledge as needed to external parties, such as external auditors or interested third parties. Federal internal controls also call for management to define objectives in specific terms—in this case, disposal actions—so they are understood at all levels of the entity.
This understanding involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the estimated time frames for achievement. Without procedural documentation that describes the disposal options and the actions needed to carry out a disposal, including estimated time frames, it is difficult for VISNs’ and local facilities’ managers to plan, implement, and execute the different disposal options available and efficiently dispose of vacant properties. A procedural document at the project level may include information on who is authorized to make decisions and estimated time frames for historic and environmental reviews to ensure timely and appropriate disposal of VA properties. For example, VA officials with experience in disposals estimated that it should take about 6 to 8 months for a property disposal if there are no environmental and historic issues involved and funding is available. For disposals where environmental and historic reviews are needed, those officials told us it should take about 2 to 4 years from when VA decided to dispose of a property to complete the disposal. According to facilities managers we spoke to, additional procedural documentation at the project level could help VISNs’ and local facilities’ managers navigate the complex disposal process and avoid missteps or delays in the disposal of vacant properties.

VA Enhanced Its Collection of Data on Vacant Properties but Lacks Key Information to Track and Monitor Disposals

VA Has Taken Steps to Enhance How It Collects Data on Vacant Properties

To enhance the monitoring of its real property and to meet reporting requirements, VA officials told us VA has taken steps in the last 6 years to improve its real property inventory and the data it collects on its vacant properties, including properties VA has identified for disposal.
These steps include:

• Requiring VISNs’ and local facilities’ managers to verify and certify the accuracy of the information in the CAI. VA’s Asset Enterprise Office sends out an annual call for facility managers to verify and certify the validity of vacant property data for each of the facilities.

• Requiring VISNs’ and local facilities’ managers to make ongoing updates to the CAI database. VA’s annual data-call memo requires these managers to continuously update the data as they take actions. Facility managers we spoke to stated they update this information regularly, including when actively planning disposal projects and when individual projects are complete. One facility manager told us that VA’s Asset Enterprise Office is “actively pushing” local managers to update this information, and the data in the CAI have improved as a result.

• Generating “discrepancy reports” to identify problems with inaccurate or outdated property data in the CAI. VA officials in headquarters told us that facility managers review these reports and explain any identified discrepancies regarding vacant properties, including those identified for disposal. VA officials told us they then correct any errors. Discrepancy reports include checks on whether facility managers have specified a disposal method for each disposal, estimated an associated disposal’s cost, and entered a planned future year for the disposal.

• Refining the database by, for example, adding new “business rules” to limit user errors. VA officials told us that since 2012 VA has implemented program changes and new business rules in the CAI database to address inaccuracies in the data, including data that support disposal information. For example, a VA official in headquarters told us that to decrease the number of errors caused by users entering data more than once, the database now limits the number of times users may enter the same information.
This prevents multiple data entries from appearing for, for example, the year a building was built, according to VA officials. VA officials in headquarters also told us they developed similar business rules to identify “clearly wrong” data entries and duplicative data. For instance, users cannot enter letters in numeric fields, which, they told us, has led to fewer errors.

VA Does Not Collect Key Information to Track and Monitor Property Disposals

Although VA has enhanced its data collection efforts for vacant properties, we found that VA does not collect all the information necessary for its headquarters officials to track and monitor the disposal of VA’s vacant properties. As part of its annual call for validating data, VA requires facility managers to record certain information about disposals in the CAI, including: which buildings are identified for disposal, whether a disposal plan is in place, when the disposal is to occur, what type of disposal method is to be used, and what the costs associated with the disposal are. However, VA does not have the ability in its CAI to collect detailed data on the status of disposal projects—specifically, data fields for facility managers to input detailed information on the status of: (1) disposal actions, (2) due diligence reviews, and (3) approvals, such as environmental permits, that are necessary to complete the disposal. Since CAI does not have this information, VA’s Asset Enterprise Office, as part of the Secretary’s initiative to begin the re-use or disposal process of 430 buildings, developed a standalone spreadsheet to track and monitor the disposal status of these buildings. Then, according to officials in VA’s Asset Enterprise Office, they had to ask local facility managers for the status of each individual disposal.
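The data gap described above can be sketched as a simple record layout. This is an illustrative sketch only, not VA’s actual CAI schema: all field names (for example, disposal_action_status) are hypothetical. The first group of fields corresponds to the information VA’s annual call already requires; the optional status fields correspond to the kind of detailed status information the CAI cannot capture.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a disposal record; field names are illustrative,
# not VA's actual Capital Asset Inventory (CAI) schema.

@dataclass
class DisposalRecord:
    # Information VA's annual call already requires in the CAI:
    building_id: str
    has_disposal_plan: bool
    planned_year: int
    disposal_method: str            # e.g., demolition, sale, transfer to GSA
    estimated_cost: float
    # Status information the report notes the CAI cannot capture:
    disposal_action_status: Optional[str] = None  # e.g., "contract awarded"
    due_diligence_status: Optional[str] = None    # environmental/historic reviews
    approvals_status: Optional[str] = None        # e.g., environmental permits

    def trackable(self) -> bool:
        """True only if headquarters could monitor progress from this record."""
        return all(status is not None for status in
                   (self.disposal_action_status,
                    self.due_diligence_status,
                    self.approvals_status))
```

In this sketch, headquarters could only monitor a disposal’s progress once all three status fields are populated, mirroring the report’s point that, absent such fields, staff had to ask each facility for its status individually.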
Federal internal-control standards state that management should use quality information to achieve an entity’s objectives and establish and operate monitoring activities to monitor the internal control system and evaluate the results. This includes management obtaining data on a timely basis and using it for effective monitoring, which includes controls to achieve complete and accurate data. While the Secretary’s initiative has raised the priority of tracking and monitoring VA’s real property disposals, the CAI does not contain key information to improve VA’s routine tracking as called for in internal controls. A key official in VA’s Asset Enterprise Office told us that officials there usually leave it to local facilities to track key information and that the CAI currently does not collect this information. Without incorporating the information needed to better track and monitor disposals through VA’s primary real property tracking database—the CAI—VA may not be able to efficiently track and monitor its real property disposals going forward after the Secretary’s initiative is completed. VA officials in headquarters told us that without data on the actions and status of disposals, including steps taken to complete environmental and historic reviews, they are unable to track and monitor the progress of disposal projects—including the length of time these reviews take—and to identify any areas where management may assist local facilities in disposing of properties. For instance, as previously mentioned, VA officials in headquarters told us they used the information gathered as part of the 430 re-use or disposal initiative to identify and award contracts to perform environmental and historic reviews and, as a result, expedited the disposal process. In addition, VA officials in headquarters do not collect documentation, such as environmental and historical review documents, that could allow headquarters staff to verify the status of disposal projects.
As mentioned, federal internal controls state that management should use quality information to achieve an entity’s objectives, including obtaining data on a timely basis and using these data for effective monitoring, which includes controls to achieve complete and accurate data. Further, VA requires VISNs’ and local facilities’ managers to record a planned or completed disposal in the CAI, including updating information as changes occur. However, a key official in VA’s Asset Enterprise Office told us the CAI database does not currently have enough space for facility managers to upload supporting documentation, including environmental and historic review documents. As part of the Secretary’s initiative to begin the re-use or disposal process for 430 buildings, VA’s Asset Enterprise Office set up a website to collect and exchange documents, such as environmental and historic review documents, from local facility managers. This process allowed VA’s Asset Enterprise staff to verify the disposal information of the properties in the spreadsheet using this collected information. While VA created a website to exchange documentation as part of the 430 re-use or disposal initiative, this website is separate from the CAI and was created because VA had not previously collected supporting documents in the CAI. However, a VA official told us that when they compared information they collected from the website, they found the information in the CAI is not always correct or appropriately updated. As we have previously found, documentation provides a means to retain organizational knowledge while mitigating the risk of having that knowledge limited to a few personnel. Documentation can also ensure that knowledge gets communicated to external parties, such as external auditors. As previously mentioned, some VA staff lack the expertise and organizational knowledge to properly document a variety of disposal options. VA also experiences frequent staff turnover.
These issues, together with the inability of facilities’ managers to upload disposal-related documents directly into CAI, put VA at risk of losing valuable information about the disposal process. For example, according to a stakeholder we spoke with, VA could not readily provide information about consulting stakeholders on historic properties, as required by historic review requirements. A VA official told us that after contacting facility managers for information about specific disposal projects as part of the 430 initiative, they found disposal procedures were not consistently documented and, in some cases, documents were missing. VA officials in headquarters provided us with a draft proposal to enhance the CAI in several ways, including adding specific data fields for dates, such as completion dates for reviews, and increasing the capacity of the CAI to allow facility managers to upload disposal documentation, including environmental and historic review documentation. However, the proposed changes do not include some key information, such as the start dates for compliance reviews, so VA cannot monitor and track when the reviews began and how disposals are progressing. Additionally, a VA official we spoke with could not provide a specific time frame for increasing the capacity of CAI, as VA is currently working on developing space requirements that are needed to increase capacity and help estimate a time frame.

Conclusions

Given that the number of VA’s vacant buildings has been generally increasing in the last 6 years and the implementation of the VA Asset and Infrastructure Review Act of 2018 could lead to more unneeded buildings, effectively managing VA’s real property disposal is crucial. Otherwise, VA may maintain a large inventory of vacant buildings that may be costly to secure and maintain.
While effectively disposing of excess and underutilized property has been a long-standing challenge for VA, the agency has taken some positive actions, such as examining ways to streamline the historic review process, documenting some procedures, and improving data collection efforts on vacant properties. However, without documented procedures for all the disposal options to assist VISNs’ and local facilities’ managers in planning, implementing, and executing disposals and navigating the complex property-disposal process, VISNs and local facilities—which are responsible for managing their real property—may continue to struggle to facilitate property disposals efficiently. Also, without important information on the status of disposal projects and supporting documents, it is unclear how VA can monitor and track disposals, including identifying any areas where management can assist in the disposal of its vacant properties.

Recommendations for Executive Action

We are making the following three recommendations to VA:

1. The Secretary should develop clear procedures for each of VA’s disposal options to help facilities’ managers plan, implement, and execute projects to dispose of vacant and unneeded properties. (Recommendation 1)

2. As VA implements its plans to enhance the CAI to collect key data on disposal projects, the Secretary should collect data on disposal status information and time frames (e.g., environmental and historical reviews’ starting dates) to ensure VA has the information it needs to track the length of the disposal process and identify any areas where management may assist local facilities in implementing property disposals. (Recommendation 2)

3.
As VA pursues its plans to enhance the CAI, the Secretary should increase the capacity of the CAI to allow local facilities to upload disposal-specific documentation, such as environmental- and historical-review documents, to ensure all documentation related to a property’s disposal is available to appropriate parties, including VA officials. (Recommendation 3)

Agency Comments

We provided a draft of this report to VA for review and comment. In written comments, reproduced in appendix II, VA concurred with our recommendations and stated that it has begun or is planning to take actions to address them. VA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact Andrew Von Ah at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Objective, Scope, and Methodology

This report examines the U.S. Department of Veterans Affairs’ (VA) efforts to dispose of properties, including the management of its real property disposals. Specifically, we address: (1) the challenges VA faces disposing of its vacant property and how it is addressing those challenges and (2) the extent to which VA is tracking and monitoring the disposal of its properties.
To address these objectives, we reviewed relevant laws, regulations, policies, handbooks, and other documents related to VA’s real property management—including VA’s Handbook and Directive on Managing Underutilized Real Property Assets, including Options for Reuse and Disposal; VA’s Capital Asset Inventory User Guide; and VA’s annual budget submissions to Congress—to fully understand VA’s disposal process. To examine the full scope and extent of VA’s vacant and disposed-of properties, we obtained and analyzed data from VA’s Capital Asset Inventory for fiscal years 2012 through 2017 and assessed their reliability. To assess the reliability of VA’s data, we: (1) looked for any missing data, outliers, or other obvious data errors; (2) reviewed existing documentation about the data and the system that produced them; (3) reviewed VA’s processes for checking and validating the data; and (4) interviewed officials knowledgeable about the data. We found the data to be reliable for our purposes of identifying the number and type of vacant and disposed-of buildings and the characteristics of those buildings. To identify challenges that VA faces when disposing of property and how VA is addressing them, we selected a non-generalizable sample of 31 properties using data from VA’s Capital Asset Inventory, as mentioned above. The disposals of the 31 properties we selected were either completed in fiscal year 2017 or in planning, including disposals through the General Services Administration (GSA). Specifically, we selected properties that:

• captured a range of disposal methods available to VA using VA’s current process for disposal,

• included both recently planned and completed disposals to observe disposals in different phases of planning and were likely documented by current VA staff, and

• represented a variety of building and disposal characteristics, including associated disposal costs, historic status, age, and size.
The challenges faced by these selected properties cannot be used to make inferences about all VA properties. However, they illustrate the range of challenges that VA faces in disposing of properties. In addition, to help identify the disposal challenges VA faces, including those associated with lengthy disposal time frames, we obtained and reviewed documents related to the 31 selected properties, including environmental review reports and historic review documents. We used environmental and historic review documents to help estimate the timespan for disposals, including time frames to conduct these reviews. We also conducted semi-structured interviews with VA officials and external stakeholders who were involved in or knowledgeable about the disposal of these selected properties and are familiar with VA’s disposal process. These included interviews with facility managers from VA’s Veterans Integrated Service Networks (VISN) and local facilities who were knowledgeable about the disposal of the 31 selected properties. This group represented 7 of VA’s 18 VISNs and 10 local medical facilities, including two local medical facilities with planned disposal projects—Perry Point (MD) and Sepulveda (CA)—that we visited. We also interviewed external stakeholders, including officials from GSA; veterans service organizations (e.g., Veterans of Foreign Wars and the American Legion); a local community that purchased VA properties; a major commercial real estate company; and historic preservation groups (e.g., the Advisory Council on Historic Preservation and the National Conference of State Historic Preservation Officers), as well as selected State Historic Preservation Officers, to obtain their perspectives on VA’s disposal challenges. To identify common challenges, along with illustrative examples and lengthy time frames, we reviewed and analyzed documents from the 31 properties we selected as well as interviews with VA officials and external stakeholders.
This analysis included one analyst reading through all of the documents and interviews and creating a list of the challenges mentioned, and a second analyst then verifying this list. To identify steps VA has taken to address challenges, we reviewed documents and interviewed officials from VA’s Office of Asset Enterprise Management and its Office of Construction and Facilities Management as well as the Veterans Health Administration’s Office of Capital Asset Management and Engineering Support. We then assessed VA’s efforts to address these challenges against applicable federal internal control standards. To determine the extent to which VA is tracking and monitoring the disposal of its vacant properties, we reviewed the current data fields in VA’s Capital Asset Inventory, as well as VA’s planning and guidance documents, including the Fiscal Year 2017 Capital Asset Inventory and Disposal Plans Updates (Annual Call Memo). In addition, we interviewed VA officials in headquarters, including officials in VA’s Office of Asset Enterprise Management and the Office of Construction and Facilities Management, to determine the extent to which VA is tracking and monitoring the disposal of its vacant properties. We obtained and reviewed a copy of VA’s data discrepancy report for fiscal year 2016 that VA uses to verify data and track and monitor vacant properties and disposals. We also reviewed VA’s planning documents, including a tracking spreadsheet that VA is using to monitor the disposal of vacant properties. In addition, we interviewed VA officials, including facility managers from VISNs and local facilities, to obtain their perspectives specifically on VA’s efforts to track and monitor disposals. Subsequently, we assessed VA’s plan to track and monitor these properties against applicable federal internal controls. We conducted our work from November 2017 to December 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments Contact Staff Acknowledgments In addition to the individual named above, Kyle Browning; Cathy Colwell (Assistant Director); Gina Hoover; Jennifer Kim (Analyst in Charge); Brian Lepore; Jeff Mayhew; Nitin Rao; Malika Rice; Minette Richardson; Todd Schartung; Michelle Weathers; and Crystal Wesco made key contributions to this report.
Why GAO Did This Study VA is one of the largest federal property-holding agencies, and its inventory of vacant buildings has generally increased over the last 6 years. Disposing of its excess properties has been a long-standing challenge. GAO was asked to review how VA manages its real property disposals. This report addresses: (1) the challenges VA faces in disposing of its vacant properties and how it is addressing those challenges and (2) the extent to which VA is tracking and monitoring the disposal of its vacant properties. GAO reviewed VA's policies and planning documents regarding property disposals. GAO also selected 31 properties, based on criteria that included whether the property was disposed of or planned for disposal in fiscal year 2017. GAO interviewed VA officials and stakeholders who were involved in the disposal of the 31 selected properties and familiar with VA's disposal process, including steps VA is taking to address challenges. What GAO Found Conducting required environmental and historic reviews in a timely manner is among the challenges the Department of Veterans Affairs (VA) faces in its real property disposal process. These reviews include assessing the potential effects of property disposals on the environment and historic preservation. VA is taking steps to address these ongoing challenges. For example, VA has established a working group consisting of experts in historic preservation, environmental reviews, and real property to assist facilities' managers in expediting disposals. However, other ongoing challenges remain, including the marketability of VA properties and VA's lack of clear procedures for property disposals. While VA has guidance on disposals at the broad portfolio level, GAO determined that this guidance does not contain step-by-step procedures at the project level to assist facilities' managers in planning, implementing, and executing disposals under the different disposal options. (See figure.)
For example, a number of managers told GAO that they were not familiar with the actions to take when transferring properties to a third party or turning over excess property to the General Services Administration for disposal. VA officials commented that facilities' managers do not frequently dispose of properties, so a procedural document outlining the steps and who is responsible for taking them may help staff navigate more complex disposal processes and avoid missteps and delays. VA has enhanced its data collection on vacant properties, but the agency does not collect the information needed to track and monitor disposal projects at the headquarters level. For example, VA requires facilities' managers to verify and certify the validity of vacant property data in the database used to manage real property—the Capital Asset Inventory. On disposal projects, however, VA lacks certain information, such as the status of environmental or historic reviews, needed to monitor progress. According to VA, the Capital Asset Inventory currently does not have enough capacity to collect key information and supporting documentation. VA officials said they plan to increase that capacity, but VA has not yet included some key information in the Capital Asset Inventory that could enable VA to monitor the progress of disposals. Without information on the status of disposal projects, VA cannot readily track and monitor progress and identify areas where facilities' managers may need additional assistance. What GAO Recommends GAO is making three recommendations. These include developing disposal procedures to help facilities' managers plan, implement, and execute disposal projects and collecting key information on the status of disposal projects as VA implements its plans to increase the capacity of its Capital Asset Inventory. VA concurred with GAO's recommendations.
gao_GAO-18-192
Background DOD Strategies Inform Combatant Command Plans DOD, through the Secretary of Defense and the Chairman of the Joint Chiefs of Staff, develops department-wide strategic guidance based on direction from the President and issues this guidance through strategy documents. According to joint doctrine and Chairman of the Joint Chiefs of Staff guidance, combatant commanders use strategy documents as guidance for planning operations. Specifically, combatant commanders translate this guidance into their commands' campaign and contingency plans. The military services organize, train, equip, and provide forces to the combatant commanders to execute command plans. The combatant commander must make certain the combatant command can execute these plans. PACOM is one of six geographic Unified Combatant Commands of the U.S. Armed Forces. With an area of responsibility extending from the waters off the west coast of the United States to the western border of India, and from Antarctica to the North Pole, PACOM is the primary U.S. military authority in the Pacific. In 2016, PACOM reported that approximately 380,000 U.S. military and civilian personnel were assigned to this area. PACOM describes the 36 nations that comprise the Asia-Pacific region as home to more than 50 percent of the world's population and 3,000 different languages, several of the world's larger militaries, and five nations allied with the United States through mutual defense treaties or agreements. PACOM's commander reports to the President and the Secretary of Defense through the Chairman of the Joint Chiefs of Staff, and is supported by four service component commands: U.S. Pacific Fleet, U.S. Pacific Air Forces, U.S. Army Pacific, and U.S. Marine Forces, Pacific.
DOD's Previous Rebalance to the Pacific Strategy and Current Policy In President Obama's speech to the Australian Parliament in November 2011, he stated that after a decade of fighting two wars, the United States was turning its attention to the vast potential of the Asia-Pacific region. The President described the U.S. as a historic Pacific power whose interests are inextricably linked with Asia's economic, security, and political order. According to a senior administration official, the United States planned to implement a comprehensive, multidimensional strategy in the Asia-Pacific region. PACOM used military strategy documents to implement presidential strategic direction to rebalance efforts to the Pacific. However, according to officials from the Office of the Under Secretary of Defense for Policy, the Joint Staff, and the U.S. Pacific Command, there was no single rebalance-specific strategy document. Instead, these officials identified a number of strategy documents published since 2012 that guided activities associated with the rebalance to the Pacific effort. Based on our interviews with U.S. Pacific Command (PACOM) and DOD officials, we focused our review on six strategy documents, issued between 2012 and 2015, that these officials considered relevant and representative of DOD's previous strategy to implement the rebalance to the Pacific through 2016. The six documents that we reviewed are: Sustaining U.S. Global Leadership: Priorities for 21st Century Defense. DOD issued this document in January 2012. This publication reflected presidential strategic direction to DOD and described the key military missions for which the department would prepare. In describing the security environment, this strategic guidance stated that the United States would, of necessity, rebalance toward the Asia-Pacific region. Quadrennial Defense Review (QDR).
According to DOD guidance, the QDR articulates a national defense strategy consistent with the broader government-wide National Security Strategy by defining force structure, modernization plans, and a budget plan allowing the military to successfully execute the full range of missions within that strategy. The 2014 QDR referred to the rebalance to the Pacific as a part of sustaining U.S. presence and posture abroad to better protect U.S. national security interests. National Military Strategy (NMS). The 2015 NMS described how DOD would employ military forces to protect and advance U.S. national interests. The NMS provided focus for military activities by defining a set of military objectives and concepts used by the combatant commanders and others. The 2015 NMS referenced the rebalance to the Pacific as part of a national military objective. The NMS was informed by the QDR. Guidance for the Employment of the Force (GEF). According to joint doctrine, the GEF provides direction to combatant commands for operational planning, force management, security cooperation, and posture planning. The GEF is the method through which the Secretary of Defense translates strategic priorities in the QDR and other strategy documents into direction for operational activities. The GEF is described in joint doctrine as an essential document for combatant command planners as it provides the strategic end states for the deliberate planning of campaign and contingency plans. Joint Strategic Capabilities Plan (JSCP). The JSCP is the primary vehicle through which the Chairman of the Joint Chiefs of Staff directs the preparation of joint plans. The JSCP provides military strategic and operational guidance to combatant commanders for the preparation of plans based on current military capabilities. 
The JSCP tasks combatant commanders to develop campaign, contingency, and posture plans and translates requirements from the GEF and other guidance into prioritized military missions, tasks, and plans. The JSCP is informed by the GEF and the NMS. PACOM 2015 Theater Campaign Plan (DRAFT) (TCP). Campaign plans, such as PACOM's TCP, focus on the combatant command's steady-state or daily activities and operationalize combatant command theater strategies. According to joint doctrine, joint planning draws from tasks identified in the GEF and JSCP, and campaign plans should focus on the combatant command's steady-state activities. These include ongoing operations, military engagement, security cooperation, deterrence, and other shaping or preventive activities. Campaign plans provide the vehicle for linking steady-state shaping activities to the attainment of strategic and military end states. In January 2018, DOD announced its new 2018 National Defense Strategy, which cited long-term strategic competition with China and Russia as the department's principal priorities. The strategy also stated that the department would concurrently sustain its efforts to deter and counter rogue regimes such as North Korea and Iran, defeat terrorist threats to the United States, and consolidate gains in Iraq and Afghanistan while moving to a more resource-sustainable approach. In February 2018, the Assistant Secretary of Defense for Asian and Pacific Security Affairs notified GAO that although DOD continues to prioritize the Asia-Pacific region, the rebalance to the Pacific is no longer U.S. policy. DOD Strategy Documents Associated with Rebalancing to the Pacific Collectively Included Most of the Desired Elements of an Effective National Strategy Six DOD strategy documents that helped guide the rebalance to the Pacific collectively included most of the desired elements of an effective national strategy.
We have previously reported that effective national strategies incorporate six characteristics and their associated desired elements. Table 1 lists the desired elements that we adapted from our prior work and tailored to our review of the six DOD strategy documents. We found that these six DOD strategy documents, which collectively guided the rebalance to the Pacific, included, to varying degrees, 24 of the 31 desired elements we determined to be most relevant to an effective strategy for the rebalance. For example, as a set, the six strategy documents contained a detailed description of the operating environment in which activities for the rebalance were to take place and included references that described the relationship of the rebalance to the Pacific to other strategies, goals, and objectives. The strategy documents referenced their purposes and, in unclassified and general descriptions, the threats that the strategies were to address, including long-range missile threats and weapons of mass destruction. Collectively, the strategy documents referred to selected types of resources needed, such as the deployment of ships and aviation assets, and who would be implementing the strategies. We were, however, unable to find any reference to 7 of the 31 elements in any of the six strategy documents. For example, 2 of the 7 missing elements were: Lack of a documented, consistent definition of the rebalance to the Pacific. Based on our systematic review, we found that none of DOD's six strategy documents issued from 2012 to 2015 included a definition of the rebalance to the Pacific that described the rebalance's key terms, major functions, mission areas, or activities. Further, DOD officials from the Office of the Under Secretary of Defense for Policy, the Joint Staff, and the U.S.
Pacific Command involved in planning and implementing the rebalance to the Pacific were unable to identify a definition for the rebalance to the Pacific in the strategy documents, and consequently could not provide a definition that was in use consistently across the department. During discussions about the absence of a definition, these PACOM officials told us that all PACOM activities were rebalance activities, even activities that were underway before the President's announcement to rebalance. Senior DOD policy officials referred us to the speeches given by senior administration officials since the President's 2011 address to derive the definition of the rebalance. However, as noted earlier, after the President's speech in 2011, there were a number of pronouncements from senior administration officials that varied over time. The lack of consistently defined attributes for a strategy can make it difficult for policy makers to assess its effectiveness and accountability. Lack of a documented end state for the rebalance to the Pacific. Based on our systematic review, we found that none of DOD's six strategy documents from 2012 to 2015 identified an end state for the rebalance to the Pacific. Identifying the end state is a desired element associated with establishing goals and objectives for effective strategies and plans. Joint doctrine also states that military planners must know where to look for the guidance to ensure that plans are consistent with national priorities and are directed toward achieving national security goals and objectives. A national strategy that identified the end state of the rebalance could distinguish new efforts from the longstanding U.S. military presence in the region, and the associated increase in resources to support the post-2011 rebalancing. For example, we found a lack of clarity concerning the end state for the rebalance.
DOD officials from the Office of the Under Secretary of Defense for Policy, Joint Staff, and PACOM—whom we interviewed because they were involved in planning and implementing the rebalance to the Pacific—said that they were unaware of an end state for DOD's efforts to rebalance. The same officials told us that there was no foreseeable end state because, as long as the Asia-Pacific region was important to the U.S., the focus would remain on the region. However, officials from different military service components told us that their individual services had an end state for their service-specific activities to support the rebalance. For example, officials from U.S. Army Pacific told us that they had completed their service's rebalance. They stated that they achieved the end state with the completion of force posture changes and that some efforts supporting rebalancing had begun before the rebalance was announced. In contrast, a Marine Corps official in the Pacific reported there was no end state for rebalancing. According to the official, Marine Corps activities such as posture realignments supported rebalancing, but these longstanding activities were ongoing prior to the President's announcement to rebalance. Moreover, we found a lack of awareness of a command-wide end state for rebalancing and a lack of coordination among the various military service activities in support of rebalancing. It was unclear how service-defined end states could have been fully integrated or prioritized for funding without a consistent end state for DOD's overall effort. In such instances, a department-wide defined end state could have helped with the allocation of resources because the most important priorities would be known.
A clear and consistent definition for the rebalance and the identification of an end state, as well as the inclusion of the other 5 missing elements, could have better positioned decision makers to effectively plan, manage, and assess DOD's progress toward rebalancing efforts to the Pacific. According to DOD officials from the Office of the Under Secretary of Defense for Policy responsible for policy for the rebalance to the Pacific, the speeches by senior administration officials between 2012 and 2015 supplanted the need to identify and document a definition of the rebalance or an end state in a strategy document. However, as noted earlier, these statements included varying descriptions of the strategy and objectives over time. According to a DOD official from an office with department-wide performance management responsibilities, defining the rebalance to the Pacific and identifying the initiative's strategic objectives, or end state, were both important for establishing accountability and measuring progress. For instance, a definition could have helped those charged with implementation to distinguish activities essential to operationalizing the strategic guidance to rebalance from those activities that were routine or peripheral to that effort. Further, knowing the end state could have helped management make the best use of resources, enabled the assessment of progress toward a particular goal, and, as described in joint doctrine, facilitated the development of strategic and military objectives. In moving forward in the Asia-Pacific region, considering the identification of strategic end states (one of the desired elements of an effective national strategy that is also discussed in joint doctrine)—as well as the other missing elements—could help position DOD to achieve its objectives in the region. Agency Comments We provided a draft of this report to DOD for review. DOD had no comments.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Policy; the commander of the U.S. Pacific Command; the Chairman of the Joint Chiefs of Staff; and the Secretaries of the military departments. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-5431 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Scope and Methodology To determine the extent to which the Department of Defense (DOD) has developed strategy documents to guide the rebalance to the Pacific that include desired elements of an effective national strategy, we conducted a search of the literature, from January 2010 to July 2015, to identify official statements on, guidance for, and studies of DOD’s implementation of the rebalance to the Pacific. We reviewed department guidance, such as Chairman of the Joint Chiefs of Staff instructions and joint publications, to understand DOD’s processes and procedures for developing and disseminating guidance and strategic plans. We also interviewed DOD officials from numerous organizations listed below who were involved with planning, providing guidance or implementing the rebalance to the Pacific to identify DOD’s rebalance efforts and whether a strategy or strategies existed that focused on or included the rebalance. 
The organizations contacted included: the Under Secretary of Defense (Comptroller) and Chief Financial Officer; the Office of the Deputy Chief Management Officer; the Deputy Assistant Secretary of Defense for Asian and Pacific Security Affairs; the Assistant Secretary of Defense for Logistics and Materiel Readiness; the Assistant Secretary of Defense for Strategy, Plans and Capabilities; the Director of the Office of the Secretary of Defense Cost Assessment; U.S. Marine Corps Forces, Pacific; U.S. Pacific Air Forces; and the U.S. Transportation Command. Based on these interviews and on written responses to questions we submitted to the officials associated with these organizations, officials identified documentation and speeches that they indicated had informed DOD organizations about implementing the rebalance. Also, based on this information, we found that there was not a single strategy or plan that provided guidance for or outlined DOD's implementation of the rebalance to the Pacific. Instead, DOD officials from multiple offices identified a number of strategy documents that guided activities associated with the rebalance to the Pacific, including government-wide documents. Based on our interviews with U.S. Pacific Command (PACOM) and DOD officials, we focused our review on the six selected strategy documents, issued between 2012 and 2015, that these officials considered relevant and representative of DOD's previous strategy to implement the rebalance to the Pacific. Those six strategy documents are described earlier in the main report. We reviewed and analyzed these six strategy documents to determine whether, as a set, they included the 31 desired elements of the associated key characteristics of an effective national strategy. Our prior work on effective national strategies included examples of desired elements that we adapted and tailored toward our review of DOD strategy documents.
We selected 31 desired elements as most relevant to DOD's rebalance effort and for systematically reviewing DOD's strategy documents associated with the rebalance. These elements and associated key characteristics are described in table 2 below. To determine whether, as a set, these strategy documents included the desired elements of an effective national strategy, we reviewed each strategy document using a scorecard method with the following steps: First, we developed scorecards with a two-level scale of "address" and "did not address." We used a binary scale of "address" or "did not address" and scored a passage as "address" if it included any part of an element description, in order to provide the widest latitude in determining whether the selected passage included the specific element. Also, we used the 31 desired elements from the six characteristics to make the comparison because these elements provided more specificity than the broad six characteristics. Second, analysts reviewed all of the selected passages from each strategy document and determined whether they were relevant to understanding the rebalance to the Pacific in order to reach agreement on which passages they would consider in the comparison to the desired elements. The analysts agreed upon the inclusion and exclusion of passages before assessing whether these passages included the desired elements. Third, two analysts reviewed the relevant passages in each strategy document related to the rebalance and determined whether or not the passages included the element. The analysts used the scorecards to score each passage. Fourth, upon completion of the independent scoring process for each strategy document, the analysts compared their respective scores and reconciled any differences, thereby reaching a consensus on the final score. As needed, a third analyst facilitated reconciliations where there was a difference in the assessment reached by the individual analysts and documented the consensus results.
Lastly, upon completion of scoring, the team compiled and summarized the results. To further corroborate our systematic review of the six strategy documents, we asked officials from DOD organizations responsible for the Asia-Pacific region a standard set of related questions. We asked officials these questions in order to obtain DOD's perspective regarding the applicability of using the selected desired elements and associated key characteristics in reviewing these specific DOD strategy documents. We conducted this performance audit from July 2015 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, key contributors to this report were Guy LoFaro (Assistant Director), Pedro Almoguera, Patricia Donahue, Richard Powelson, Paulina Reaves, Michael Shaughnessy, and Stephen Woods. Related GAO Products Combating Terrorism: Strategy to Counter Iran in the Western Hemisphere Has Gaps That State Department Should Address. GAO-14-834. Washington, D.C.: September 29, 2014. U.S. Public Diplomacy: Key Issues for Congressional Oversight. GAO-09-679SP. Washington, D.C.: May 27, 2009. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. Financial Literacy and Education Commission: Further Progress Needed to Ensure an Effective National Strategy. GAO-07-100. Washington, D.C.: December 4, 2006. Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals. GAO-06-788.
Washington, D.C.: July 11, 2006. Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004. Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003.
Why GAO Did This Study In 2011, President Obama announced that the United States would turn its attention to the Asia-Pacific region and make the U.S. presence there a top priority. Rebalancing to the Pacific became strategic guidance that informed military planning. By the end of 2015, DOD published strategy documents that included references to the rebalance to the Pacific or related concepts. In February 2018, the Assistant Secretary of Defense for Asian and Pacific Security Affairs stated that while DOD continues to prioritize the Asia-Pacific region, the rebalance to the Pacific is no longer U.S. policy. DOD has published the 2018 National Defense Strategy, which establishes an objective of maintaining a favorable regional balance in the Pacific region, among other regions. Prior to the change in policy, House Report 114-102 included a provision for GAO to review matters related to the U.S. rebalance to the Asia-Pacific region. GAO evaluated the extent to which DOD developed strategy documents to guide the rebalance to the Pacific that included desired elements of an effective national strategy. GAO analyzed six DOD strategy documents that officials identified as providing guidance for the rebalance to the Pacific to determine whether, as a set, they included desired elements associated with an effective national strategy. DOD had no comments on this report. What GAO Found Department of Defense (DOD) strategy documents that collectively guided the rebalance to the Pacific included most of the desired elements of an effective national strategy. The U.S. Pacific Command (PACOM), which is responsible for the Asia-Pacific region, used DOD strategy documents to implement the President's direction to rebalance to the Pacific, which generally refocused U.S. efforts to that region. PACOM officials told GAO that there was no single rebalance-specific strategy document. 
Instead, officials identified a number of strategy documents published since 2012 that guided activities associated with the rebalance to the Pacific, including: Sustaining U.S. Global Leadership: Priorities for 21st Century Defense; Quadrennial Defense Review; National Military Strategy; Guidance for the Employment of the Force; Joint Strategic Capabilities Plan; and the PACOM 2015 Theater Campaign Plan (DRAFT). Based on GAO's analysis, DOD's six strategy documents that guided the rebalance to the Pacific included 24 of the 31 desired elements of an effective national strategy. However, two key elements were missing from the group of strategy documents: (1) a definition of the rebalance to the Pacific, and (2) the identification of the overall results desired, or end state, for the rebalance. DOD officials also could not identify a definition for the rebalance to the Pacific in the strategy documents or provide a definition that was used consistently across the department. According to a DOD official with performance management responsibilities, defining the rebalance to the Pacific and identifying the initiative's strategic objectives, or end state, were important for establishing accountability and measuring progress. For instance, a clear definition of the rebalance could have helped those charged with implementation to distinguish activities essential to operationalizing the strategic guidance from activities that were peripheral to that effort. Similarly, knowing the end state could have helped management make the best use of resources, enable the assessment of progress, and facilitate the development of strategic and military objectives. In moving forward in the Asia-Pacific region, considering the identification of strategic end states as well as other missing elements could help position DOD to achieve its objectives in the region.
gao_GAO-18-347
Background The Ticketing Marketplace The marketplace for primary and secondary ticketing services consists of several types of participants, including primary market ticketing companies, professional ticket brokers, secondary market ticket exchanges, and ticket aggregators (see table 1). Other parties that play a role in event ticketing, as discussed later in this report, include artists and their managers, booking agents, sports teams, producers, promoters, and operators of event venues (such as clubs, theaters, arenas, or stadiums). The private research firm IBISWorld estimated that online ticketing services (including ticketing for concerts, sporting events, live theater, fairs, and festivals) represented a $9 billion market in 2017, which included both the primary and secondary markets. Another private research firm, Statista, estimated that U.S. online ticketing revenues for sports and music events totaled about $7.1 billion in 2017. Estimates of the total number of professional ticket sellers vary. IBISWorld estimated that the U.S. market for online event ticket sales included 2,571 businesses in 2017. The Census Bureau lists more than 1,500 ticket services companies as of 2015, based on the business classification code for ticket services. However, this does not provide a reliable count of companies in the event ticketing industry because it includes companies selling tickets for services such as bus, airline, and cruise ship travel, among other services. Nevertheless, a small number of companies conducts the majority of event ticket sales. In the primary ticket market—where tickets originate and are available at initial sale—Ticketmaster is the largest ticketing company. DOJ estimated that Ticketmaster (whose parent company is now Live Nation Entertainment) held more than 80 percent of market share in 2008, and it was still the market leader as of 2017. By our estimates, fewer than a dozen other companies control most of the rest of the primary market.
In the secondary market—where resale occurs—more companies are active, but StubHub estimated it held roughly 50 percent of market share as of 2017. According to Moody’s Investors Service, Ticketmaster, which in addition to its primary market ticketing has a U.S. resale subsidiary, held the second-largest market share as of 2016. The majority of ticket sales occur online, through a website or mobile application. Ticketmaster’s parent company reported that 93 percent of its primary tickets were sold online in 2017. The industry research group LiveAnalytics reported that in 2014, 68 percent, 50 percent, and 49 percent of people attending concerts, sporting events, and live theater or arts events, respectively, had recently purchased a ticket online. Regulation The event ticketing industry is not federally regulated. However, the Federal Trade Commission Act prohibits unfair or deceptive acts or practices in or affecting commerce, and FTC can enforce the act for issues related to event ticketing and ticketing companies. One federal statute specifically addresses ticketing issues—the BOTS Act, which prohibits, among other things, circumventing security measures or other systems intended to enforce ticket purchasing limits or order rules. The act also makes it illegal to sell or offer to sell any event ticket obtained through these illegal methods and grants enforcement authority to FTC and state attorneys general. The Department of Justice’s Antitrust Division plays a role in monitoring competition in the event ticketing industry. In 2010, Live Nation and Ticketmaster—respectively, the largest concert promoter and primary ticket seller in the United States—merged to form Live Nation Entertainment, Inc. DOJ approved the merger after requiring Ticketmaster to license its primary ticketing software to a competitor, sell off one ticketing unit, and agree to be barred from certain forms of retaliation against venue owners who use a competing ticket service. 
DOJ may also inspect Live Nation’s records and interview its employees to determine or secure compliance with the terms of the final judgment clearing the merger. State government agencies generally invoke state laws on unfair and deceptive acts and practices to address ticketing violations, according to representatives of two state attorney general offices. In addition, several states have laws that directly apply to event ticketing. For example, some states restrict the use of bots, several other states impose price caps (or upper limits) on ticket resale prices, and states including Connecticut, New York, and Virginia restrict the use of nontransferable tickets (tickets with terms that do not allow resale). Several states require brokers to be licensed and adhere to certain professional standards, such as maintaining a physical place of business and a toll-free telephone number, and offering a standard refund policy. Ticketing Practices, Prices, Fees, and Resale Vary by Industry and Event The concert, sports, and theater industries vary in how they price and distribute tickets. Many tickets are resold on the secondary market, typically at a higher price. Among a nongeneralizable sample of events we reviewed, we observed that primary and secondary market ticketing companies charged total fees averaging 27 percent and 31 percent, respectively, of the ticket’s price. On the Primary Market, Ticketing Practices Vary by Industry and Popular Events Are Sometimes Priced below Market Concerts Ticketing practices for major concerts include presales and pricing that varies based on factors like location and the popularity of the performer. Tickets to popular concerts are often first sold through presales, which allow certain customers to purchase tickets before the general on-sale. Common presales include those for holders of certain credit cards or members of the artist’s fan club, although promoters, venues, or other groups also may offer presales. 
Credit card companies might provide free marketing for events or other compensation in exchange for exclusive early access to tickets for their cardholders. In addition, the artist usually has the option to sell a portion of tickets to its fan club. The venue’s ticketing company might want to limit the number of tickets allocated to fan clubs because the artist and manager can sell them through a separate ticketing platform, according to three event organizers we interviewed. There are no comprehensive data on the proportion of tickets sold through presales because this information is usually confidential. Industry representatives told us that 10 percent to 30 percent of tickets for major concerts typically are offered through presales, although the share can be as much as about 65 percent of tickets for major artists performing at large venues. In addition, fan club presales usually represent 8 percent of tickets, although the share may be higher if the fan club presale uses the venue’s ticketing company, according to two event organizers. A large ticketing company told us that 10 percent of tickets may be available for fan club presales. A 2016 study by the New York State Office of the Attorney General found that an average of 38 percent of tickets were allotted to presales for the 74 highest-grossing concerts at selected New York State venues in 2012–2015. Additionally, venues, promoters, agents, and artists commonly hold back a small portion of tickets from public sale. “Holds” may be given or sold to media outlets, high-profile guests, or friends and family of the artist. They also may be used to provide flexibility when the seating configuration is not yet final. Promoters typically will release unused holds before the event, offering the tickets to the public at face value. As with presales, little comprehensive data exist on the proportion of tickets reserved for holds. 
Industry representatives told us holds typically represent a relatively small number of tickets—a few hundred for major events or perhaps a thousand for a stadium concert. The New York Attorney General’s report, in its review of a sample of high-grossing New York State concerts, found that approximately 16 percent of tickets, on average, were allocated for holds. Of those holds, many went to venue operators—for example, one arena with around 21,000 seats usually received more than 900 holds per concert held there. The average face-value ticket price in 2017 among the top 100 grossing concert tours in North America was $78.93, according to Pollstar. Concert ticket prices vary by city or day of the week, based on anticipated demand. The main parties involved in price setting are the artist and her or his management team, promoter, and booking agent. Venues sometimes provide input based on their knowledge of prevailing prices in the local market. Ticketing companies sometimes offer tools or support to help event organizers price tickets based on their analysis of sales trends. Concert ticket prices are generally set to maximize profits, according to event organizers. In terms of production costs, the artist’s guarantee—the amount the artist is paid for each performance—is usually the largest expense. The most popular artists can command the highest guarantees and their concerts also tend to have the highest production costs. However, for some high-demand events, tickets might be “underpriced”—that is, knowingly set below the market clearing price that would provide the greatest revenue. Artists may underprice their tickets for a variety of reasons, according to industry stakeholders and our literature review: Reputation risk. Artists may avoid very high prices because they do not want to be perceived as gouging fans. 
Similarly, event organizers told us some artists have a certain brand or image—such as working-class appeal—that could be harmed by charging very high ticket prices. Affordability. Some event organizers told us that artists want to price tickets below market to provide access to fans at all income levels. Sold-out show. Event organizers may price tickets lower to ensure a sold-out show, which can improve the artist and event organizers’ reputations and might help future sales. Audience mix. Some artists prefer to have the most enthusiastic fans at their shows, rather than just those able to pay the most, especially in the front rows, where tickets are generally the most expensive. Ancillary revenue. Better attendance through lower ticket prices can increase merchandise and concession sales, which can be a substantial source of revenue. In addition, event organizers may unintentionally underprice concert tickets because of imperfect information about what consumers are willing to pay. Tickets are also priced based on the prices and sales of the artist’s (or similar artists’) past tours, but demand can be hard to predict. Three event organizers told us that they have started using data from the ticket resale market to help set prices because that is a good gauge of the true market price. Sporting Events For major league professional sports, most decisions about ticket pricing and ticket distribution are made by the individual teams rather than by the league. According to the three major sports leagues we interviewed, their teams generally sell most of their tickets through season packages, with the remainder sold for individual games. Teams favor packages because they guarantee a certain level of revenue for the season. Representatives of two major sports leagues told us that their teams sold an average of 85 percent and 55 percent, respectively, of their tickets through season packages. 
One league told us that some of its teams increasingly offer not only full-season packages, but also partial-season packages. Another league said that in some cases, its teams might need to reserve a certain number of single game-day tickets—for example, as part of an agreement when public funds helped build a new stadium. Representatives of the three sports leagues we interviewed told us that their teams do not use presales and holds to the same extent as the concert industry. Although teams do not sell a significant number of tickets through presales, they might offer first choice of seats to season ticket holders or individuals who purchased tickets in the past. In terms of holds, one league told us it requires its teams to hold a small number of tickets for the visiting team and teams might also hold a few tickets for sponsors and performers. Another league told us it does not have league-wide requirements on holds, but its teams sometimes hold a small number of seats for media. Sports teams generally set their ticket prices to maximize revenue, based on supply and anticipated demand, according to the leagues we interviewed. Ticket prices typically vary year-to-year, based on factors such as the team’s performance the previous season and playing in a new stadium. Teams in many leagues use “dynamic pricing” for individual game tickets. They adjust prices as the game approaches based on changing demand factors, such as team performance and the weather forecast. The sports leagues with whom we spoke said teams’ pricing considerations are based in part on a desire to have affordable tickets for fans of different income levels. In addition, one league told us its teams rely heavily on revenues other than ticket sales, such as from television deals and sponsorships. 
Theater Tickets for Broadway and national touring shows are distributed through direct online sales as well as several additional channels, including day-of-show discount booths, group packages, and call centers. Industry representatives told us that these shows use presales and holds, but not as extensively as the concert industry. At our request, a company provided us with data for five Broadway shows from June 2016 to September 2017. Approximately 13 percent of tickets in this sample were sold through presales, almost all of which were group sales (offered to particular groups prior to the general on-sale). Less than 1 percent of tickets in this sample were sold through presales offered to specific credit cardholders. Two shows in high demand held back an average of about 6 percent of tickets, while the other three shows held back about 1 percent. Producers and venue operators generally set prices, which are influenced by factors like venue capacity and the length of run needed to recoup expenses, according to industry representatives. According to the Broadway League, from May 22, 2017, to February 11, 2018, the average face-value price of a Broadway show was $123—an average of $127 for musicals and $81 for plays. Industry representatives told us they sell about 10 percent of tickets through day-of-show discount booths. Even the most popular shows typically offer steep discounts for a small number of tickets through lotteries or other means. Tickets for some of the most popular Broadway shows have sometimes been underpriced, according to Broadway theater representatives, who told us they feel obligated to maintain relatively reasonable prices and to allow consumers of varying financial resources to attend their shows. Additionally, some shows are underpriced because their popularity was not anticipated. 
At the same time, in recent years, producers have started charging much higher prices (sometimes exceeding $500) for premium seats or for shows in very high demand, which allows productions to capture proceeds that would otherwise be lost to the secondary market. Relationships between Event Organizers and Brokers Sometimes event organizers work directly with brokers to distribute tickets on the secondary market. For high-demand events, event organizers may seek to capture a share of higher secondary market prices without the reputation risk of raising an event’s ticket prices directly. For lower-demand events, selling tickets directly to brokers can guarantee a certain level of revenue and increase exposure (by using multiple resale platforms rather than a single ticketing site). In major league sports, teams sell up to 30 percent of seats directly to brokers, according to a large primary ticket seller. For Broadway theater, one company told us it regularly distributes about 8 percent to 10 percent of its tickets to a few authorized secondary market brokers. In the concert industry, it is unclear how often artists and event organizers sell tickets directly through the secondary market. Any formal agreements would be in business-confidential contracts, according to industry representatives, and artists may be concerned about disclosing them for fear of appearing to profit from high resale prices. All the artists’ representatives with whom we spoke denied that their clients sold tickets directly to secondary market companies. However, a Vice President of the National Consumers League has cited evidence of cases in which ticket holds reserved for an artist were listed on the secondary market. A representative of one secondary market company told us of two cases in which representatives of popular artists approached his company about selling blocks of tickets for upcoming tours. 
Tickets to Popular Events Are Often Resold on the Secondary Market at Prices above Face Value Ticket resale prices can be significantly higher than primary market prices and brokers account for most sales on major ticket exchanges. When tickets on the primary market are priced below market value—that is, priced less than what consumers are willing to pay—it creates greater opportunities for profit on the secondary market. Resale transactions typically occur on secondary ticket exchanges—websites where multiple sellers can list their tickets for resale and connect with potential buyers. Primary ticketing companies have also entered the resale market. For example, Ticketmaster allows buyers to resell tickets through its TM+ program, which lists resale inventory next to primary market inventory, and it owns the secondary ticket exchange TicketsNow.com. Generally speaking, the secondary market serves two types of sellers: (1) those who buy or otherwise obtain tickets with the intent of reselling them at a profit (typically, professional brokers), and (2) individuals trying to recoup their money for an event they cannot attend (or sports season ticket holders who do not want to attend all games or use resale to finance part of their season package). Representatives from the four secondary ticket exchanges with whom we spoke each said that professional brokers represent either the majority or overwhelming majority of ticket sales on their sites. Sellers set their own prices on secondary ticket exchanges, but some exchanges offer pricing recommendations. The exchanges allow adjustment of prices over time, and sellers can lower prices if tickets are not selling, or raise prices if demand warrants. Software tools exist that assist sellers in setting prices and in automatically adjusting prices for multiple ticket listings. However, resale prices are not always higher than the original price, and thus brokers assume some risk. 
In some cases, the market price declines below the ticket’s face value—for example, for a poorly performing sports team. The leading ticket exchange network has publicly stated that it estimates that 50 percent of tickets resold on its site sell for less than face value. However, we were unable to obtain data that corroborated this statement. Relatively few studies have looked at the ticket resale market for major concert, sporting, or theatrical events. Our review of relevant economic literature identified six studies that looked at ticket resale prices, one of which also looked at the extent of resale (see table 2). In general, the studies found a wide range of resale prices, perhaps reflecting the different methodologies and samples used or the limited amount of information on ticket resale. Additionally, the data reported are several years old and will not fully reflect the current market. For illustrative purposes, we reviewed secondary market ticket availability and prices for a nongeneralizable sample of 22 events. Among our selected events, the proportion of seats that were listed for resale ranged from 3 percent to 38 percent. In general, among the 22 events we reviewed, listed resale prices tended to be higher than primary market prices. For example, tickets for one sold-out rock concert had been about $50 to $100 on the primary market but ranged from about $90 to $790 in secondary market listings. For 7 of the 22 events, we observed instances in which tickets were listed on the resale market even when tickets were still available from primary sellers at a lower face-value price. For example, one theater event had secondary market tickets listed at prices ranging from $248 to $1,080 (average of $763), while a substantial number of tickets for comparable seats were still available on the primary market at $198 to $398. We did not have data to determine whether the resale tickets actually sold at their listed price. 
However, as discussed later, it is possible that some consumers buy on the secondary market, at a higher price, because they are not aware that they are purchasing from a resale site rather than the primary seller. Total Ticket Fees Averaged 27 Percent on the Primary Market and 31 Percent on the Secondary Market for Events We Reviewed Ticket fees vary in amount and type among the primary and secondary markets, and among different ticketing companies and events. Primary Market Fees Companies that provide ticketing services on the primary market typically charge fees to the buyer that are added to the ticket’s list price and can vary considerably. A single ticket can have multiple fees, commonly including a “service fee,” a per-order “processing fee,” and a “facility fee” charged by the venue. Most primary ticketing companies offer free delivery options, such as print-at-home or mobile tickets, but charge additional fees for delivery of physical tickets. Venues usually have an exclusive contract with a single ticketing company and typically negotiate fees for all events at the venue, though in some cases they do so by category of event. Ticketing companies and venues usually share fee revenue and in some cases, the venue receives the majority of the fee revenue, according to primary ticketing companies. In addition, event organizers told us that promoters occasionally negotiate with the venue to add ticket fees or receive fee revenue. Ticketing companies told us that they do not have a set fee schedule and amounts and types of fees vary among venues. Fees can be set as a fixed amount, a fixed amount that varies with the ticket’s face value (for example, $5 for tickets below $50 and $10 for tickets above $50), a percentage of face value, or other variations. 
While ticketing fees vary considerably, the 2016 New York Attorney General report found average ticket fees of 21 percent based on its review of ticket information for more than 800 tickets at 150 New York State venues. (In other words, a ticketing company would add $21 in fees to a $100 ticket, for a total price to the buyer of $121.) The 21 percent figure encompassed all additional fees, including service fees and flat fees, like delivery or order processing fees. We conducted our own review of ticketing fees for a nongeneralizable sample of 31 concert, theater, and sporting events across five primary ticket sellers’ websites: In total, the combined fees averaged 27 percent of the ticket’s face value, and we observed values ranging from 13 percent to 58 percent. Service fees were, on average, 22 percent of the ticket’s face value, and we observed values ranging from 8 percent to 37 percent. Fourteen of the events we reviewed had an additional order processing fee, ranging from $1.00 to $8.20. Five of the events we reviewed had an additional facility fee, ranging from $2.00 to $5.10. Table 3 shows the ticketing fees observed for events sold through three of the largest ticket companies we reviewed. A sixth ticketing company that focuses on theater uses a different fee structure. It simply charges two flat service fees across all of its events ($7 for tickets below $50 and $11 for tickets above $50), plus a base per-order handling charge of $3. Additionally, we noted that the 6 sporting events we observed tended to have lower fees than the 16 concerts and 9 theater events we observed. Specifically, sporting events had total fees averaging roughly 20 percent, compared to about 30 percent for concerts and theater. Secondary Market Fees Fees charged by secondary ticket exchanges we reviewed were higher than those charged by primary market ticket companies. 
Secondary ticket exchanges often charge service and delivery fees to ticket buyers on top of the ticket’s listed price. For 7 of the 11 secondary ticket exchanges we reviewed, the service fee was a set percentage of the ticket’s list price. Three of the remaining exchanges charged fees that varied across events, and the fourth did not charge service fees. Among the 10 exchanges that charged fees: In total, the combined fees averaged 31 percent of the ticket’s listed price, and we observed values ranging from 20 percent to 56 percent. Service fees, on average, were 22 percent of the ticket’s listed price, and we observed values ranging from 15 percent to 29 percent. In addition to the service fee, 8 of the 10 exchanges charged a delivery fee for mobile or print-at-home tickets, ranging from $2.50 to $7.95. Eight of the exchanges also charged a fee to the seller (in addition to the buyer), which was typically 10 percent of the ticket’s sale price. (For example, if a ticket sells for $100, the seller would receive $90 and the exchange $10.) Table 4 provides additional information about the fees charged by three of the largest ticket resale exchanges. Consumer Protection Concerns Include the Ability to Access Face-Value Tickets and the Fees and Clarity of Some Resale Websites The technology and other resources of professional brokers give them a competitive advantage over individual consumers in purchasing tickets at their face-value price. Views vary on the extent to which the use of holds and presales also affect consumers. Many ticketing websites we reviewed did not clearly display their fees up front, and a subset of websites—referred to as white-label—used marketing practices that might confuse consumers. Other consumer protection concerns that have been raised involve the amount charged for ticketing fees, speculative and fraudulent tickets, and designated resale exchanges (resale platforms linked to the primary ticket seller). 
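The two-sided resale exchange fees described earlier can be sketched the same way: the buyer pays the listed price plus buyer-side service and delivery fees, while the seller receives the listed price minus a seller-side fee. The rates below are the averages and typical values we observed (22 percent service, 10 percent seller fee); the delivery fee amount is hypothetical and within the observed range.

```python
# Illustrative sketch of two-sided resale-exchange fees. Rates reflect the
# averages and typical values reported above; the delivery fee is a
# hypothetical amount within the observed $2.50-$7.95 range.

def buyer_total(list_price, service_rate=0.22, delivery_fee=5.00):
    """What the buyer pays: list price plus service and delivery fees."""
    return round(list_price * (1 + service_rate) + delivery_fee, 2)

def seller_proceeds(list_price, seller_rate=0.10):
    """What the seller receives after the exchange's seller-side fee."""
    return round(list_price * (1 - seller_rate), 2)

price = 100.00
print(buyer_total(price))      # 127.0: buyer pays $27 above the list price
print(seller_proceeds(price))  # 90.0: the report's $100 -> $90 example
# The exchange keeps the spread between the two sides of the transaction:
print(round(buyer_total(price) - seller_proceeds(price), 2))  # 37.0
```

On a $100 listing, the exchange's combined take from both sides can exceed a third of the ticket's listed price, which is consistent with the 31 percent average buyer-side fees we observed plus the typical 10 percent seller fee.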
For Tickets to Popular Events, Consumers Often Must Pay More Than Face Value Tickets to popular events often are not available to consumers at their face-value price, frequently because seats sell out in the primary market almost as soon as the venue puts them on sale. Brokers’ Competitive Advantage Brokers whose business is to purchase and resell tickets have a competitive advantage over individual consumers because they have the technology and resources to purchase large numbers of tickets as soon as they go on sale. Some consumer advocates, state officials, and event organizers believe that brokers unfairly use this advantage to obtain tickets from the primary market, which restricts ordinary consumers from buying tickets at face value. As a result, consumers may pay higher prices than they would if tickets were available on the primary market. In addition, some event organizers and primary ticket sellers have expressed frustration that the profits from the higher resale price accrue to brokers who have not played a role in creating or producing the event. Some professional brokers use software programs known as bots to purchase large numbers of tickets very quickly. When tickets first go on sale, bots can complete multiple simultaneous searches of the primary ticket seller’s website and reserve or purchase hundreds of tickets, according to the 2016 report by the New York State Office of the Attorney General. Seats reserved by a bot—even if ultimately not purchased—appear online to a consumer as unavailable. This, in turn, can make inventory appear artificially low during the first minutes of the sale and lead consumers to the secondary market to seek available seats, according to event organizers we interviewed. Bots can also automate the ticket-buying process, as well as identify when additional tickets are released and available for purchase. 
During its investigation of the ticketing industry, the New York State Office of the Attorney General identified an instance in which a bot bought more than 1,000 tickets to a single event in 1 minute. In addition, bots can be used to bypass security measures that are designed to enforce ticket purchase limits. For example, bots can use advanced character recognition to “read” the characters in a test designed to ensure that the buyer is human. Although the BOTS Act of 2016 restricts the use of bots, as discussed later, it is not yet clear to what extent the act has reduced their use. Brokers have other advantages over consumers in the ticket buying process, according to the New York State Attorney General’s report and industry stakeholders we interviewed. For example, some brokers employ multiple staff, who purchase tickets as soon as an event goes on sale. In addition, brokers can bypass sellers’ limits on the number of tickets allowed to be purchased by using multiple names, addresses, credit card numbers, or IP (Internet protocol) addresses. Finally, to access tickets during a presale, some brokers join artists’ fan clubs or hold multiple credit cards from the company sponsoring the presale. Role of Holds and Presales Holds and presales may limit the number of tickets available to consumers at face value, according to some consumer groups, secondary market companies, and other parties. For example, the National Consumers League testified that events with many holds and presales sell out more quickly during the general on-sale because fewer seats are available. Consumers may not be aware that many seats are no longer available by the time of the general on-sale. In addition, the National Consumers League and New York State Office of the Attorney General said they believe the use of holds and presales raises concerns about equity and fairness. 
They noted that most holds go to industry insiders who have a connection to the promoter or venue, while credit card presales are available only to cardholders, who typically are higher-income. The New York State Attorney General’s office and seven event organizers with whom we spoke expressed concerns that presales benefit brokers, who take special measures to access tickets during presales. However, other industry representatives told us that holds and presales do not adversely affect consumers. They noted that for most events, the number of tickets sold through presales is not very high and few tickets are held back. Additionally, two event organizers and representatives from a primary ticketing company noted that most presales are accessible to a broad range of consumers—such as tens of millions of cardholders. As a result, the distinction between what constitutes a presale and a general on-sale can be slim. Furthermore, some fan clubs may try to limit brokers’ use of presales. For example, one manager said his artist’s fan club gives priority for presales to long-time fan club members. In addition, some industry representatives noted that holds and presales serve important functions that can benefit consumers. For example, credit card presales can reduce event prices by funding certain marketing costs, and fan club presales can offer better access to tickets to artists’ most enthusiastic fans, according to event organizers with whom we spoke. And as noted earlier, holds serve various functions, such as providing flexibility for seating configuration. Some Ticketing Websites We Reviewed Were Not Fully Transparent about Ticket Fees and Relevant Disclosures Among the largest primary and several secondary market ticketing companies, we identified instances in which fee information was not fully transparent. 
We reviewed the ticket purchasing process for a selection of primary and secondary ticketing companies’ websites, including a subset of secondary market websites known as “white-label” websites. We reviewed the extent to which the companies’ websites clearly and conspicuously presented their fees and other relevant information and also recorded the point at which fees were disclosed in the purchase process. While FTC staff guidance states that there is no set formula for a clear and conspicuous disclosure, it identifies several key factors, including whether the disclosure is legible, in clear wording, and proximate to the relevant information. In recent reports, the National Economic Council (which advises the President on economic policy) and FTC staff have expressed concern about businesses that use “drip pricing,” the practice of advertising only part of a product’s price up front and revealing additional charges later as consumers go through the buying process. Primary Market Ticketing Companies For the 23 events we reviewed, the largest ticketing company—believed to have the majority of the U.S. market share—frequently did not display its fees prominently or early in the purchase process. For 14 of 23 events we reviewed, fees could be learned only by (1) selecting a seat; (2) clicking through one or two additional screens; (3) creating a user name and password (or logging in); and (4) clicking an icon labeled “Order Details,” which displayed the face-value price and the fees. For 5 of the 23 events, the customer did not have to log in to see the fees, but the fees were visible only by clicking the “Order Details” icon. For 4 of the 23 events, fees were displayed before log-in and without the need to take additional steps. Additionally, for 21 of the 23 events, ticket fees were displayed in a significantly smaller font size than the ticket price. 
For the five other primary market ticketing companies whose ticketing process we reviewed, fees were displayed earlier in the purchase process and more conspicuously. All five companies displayed fees before asking users to log in, including one that displayed fees during the initial seat selection process. Four of the five companies displayed fees in a font size similar to that of other price information and in locations on the page that were generally proximate to relevant information. However, for all companies we reviewed, fees and total ticket prices were not displayed during the process of browsing for different events. We found that two primary ticket sellers that sometimes offer nontransferable tickets (that is, tickets whose terms and conditions prohibit transfer) had prominently and clearly disclosed the special terms of those tickets—for example, that the buyer’s credit card had to be presented at the venue and the entire party had to enter at the same time. One company’s website displayed these conditions on a separate screen for 10 seconds before allowing the buyer to proceed. The other company’s website similarly displayed information about the tickets’ nontransferability on a separate page in clear language in a font size similar to the pricing information.

Secondary Ticket Exchanges

We also reviewed disclosure of fees and other relevant information on the websites of 11 secondary ticket exchanges and resale aggregators. Two of the 11 websites displayed their fees conspicuously and early in the purchase process, and a third site did not charge ticketing fees. However, we found that ticket resale exchanges sometimes lacked transparency about their fees:

Fees often were revealed only near the end. Seven of the 11 websites disclosed ticket fees only near the end of the purchase process, after the consumer entered an e-mail or logged in.
Three of those seven websites displayed fee information only after the credit card number or other payment information was submitted.

Fees sometimes were not conspicuously located. On 2 of the 11 websites, some fees were not displayed alongside the ticket price, but instead were only visible by clicking a specific button.

Font sizes were small in two cases. On 2 of the 11 websites, fees were displayed in a font size significantly smaller than other text.

In contrast to primary market sellers, secondary market sellers’ websites sometimes did not clearly disclose when a ticket was nontransferable. Disclosures on secondary market ticket exchanges varied, in part because individual sellers are permitted to enter their own descriptions about ticket characteristics. In some cases, the seller identified nontransferable tickets only by labeling them “gc,” indicating that a gift card would be mailed to the buyer to present for entry to the venue. To further review nontransferable ticket listings, we contacted the customer service representatives of three large secondary ticket exchanges to ask about a nontransferable ticket listing. We asked if we would have difficulty using the ticket because the venue’s or ticket seller’s website stated that only the original buyer could use the ticket, with one website noting that picture identification might be required for entry. Customer service representatives of all three exchanges told us that despite the purported restrictions, we would be able to use the ticket to gain entry to the venue. To confirm these statements, we contacted officials of these venues, who acknowledged that picture identification had not been required for entry at these events. Consumers may not always be aware they are purchasing tickets from a secondary market site at a marked-up price.
In a 2010 enforcement action, FTC settled a complaint against Ticketmaster after alleging, among other things, that the company steered consumers to its resale site, TicketsNow, without clear disclosures that the consumer was being directed to a resale website. The settlement requires Ticketmaster, TicketsNow, and any other Ticketmaster resale websites to clearly and conspicuously disclose when a consumer is on a resale site and that prices may exceed face value, and to include “reseller price” or “resale price” with ticket listings. In addition, in January 2018, the National Advertising Division, a self-regulatory organization, asked FTC to investigate the fee disclosure practices of StubHub, a large secondary ticket exchange, alleging the company did not clearly and conspicuously disclose its service fees when it provides ticket prices.

White-Label Websites for Ticket Resale

A subset of ticket resale websites, known as “white label,” used marketing practices that might confuse consumers. A company providing white-label support allows affiliates to connect its software to their own, uniquely branded website. This is sometimes also described as a “private label” service in the industry. For event ticketing, a ticket exchange offering white-label support provides the affiliate company with access to its ticket inventory and services, such as order processing and customer service. However, the affiliate uses its own URL (website address), sets the ticket prices and fees, and conducts its own marketing and advertising. Two secondary ticket exchanges operate white-label affiliate programs, under which affiliates create unique white-label websites for ticket resale. While we did not identify data on the number of white-label websites for event ticketing, they commonly appear in the search results for all types of venues, including smaller venues like clubs and theaters.
White-label websites often market themselves through paid advertising on Internet search engines, appearing at the top of search results for venues. Thus, they are often the first search results consumers see when searching for event tickets. Figure 1 provides a hypothetical example of a white-label advertisement on a search engine, as well as the typical appearance of a white-label website. In 2014, FTC and the State of Connecticut announced settlements with TicketNetwork—one of the exchanges operating a white-label program— and two of its affiliates after charges of deceptively marketing resale tickets. The complaint alleged that these companies’ advertisements and websites misled consumers into thinking they were buying tickets from the original venue at face value when they were actually purchasing resale tickets at prices often above face value. According to the complaint, the affiliate websites frequently used URLs that included the venue’s name and displayed the venue’s name prominently on their websites in ways that could lead consumers to believe they were on the venue’s website. The settlements prohibited the company and its affiliates from misrepresenting that they are a venue website or that they are offering face-value tickets, and from using the word “official” on the websites, advertisements, and URLs unless the word is part of the event, performer, or venue name. They also required that the websites disclose that they are resale marketplaces, that ticket prices may exceed the ticket’s face value, and that the website is not owned by the venue or other event organizers. FTC staff with whom we spoke told us that they were aware that similar practices have continued among other white-label companies. Staff told us they have continued to monitor white-label websites and related consumer complaints. 
Additionally, a wide range of stakeholders with whom we spoke—including government officials, event organizers, and other secondary ticket sellers—expressed concerns about these websites. In particular, they were concerned that consumers confused white-label websites for the venue’s website. We reviewed 17 websites belonging to eight companies that were affiliates of the two secondary ticket exchanges offering white-label programs. We identified the sites by conducting online searches for nine venues (including stadiums, clubs, and theaters) on two of the largest search engines. All nine of the venues had at least one white-label site appear in the paid advertising above the search results. We observed the following:

Sites could be confused with that of the official venue. Fourteen of the 17 white-label websites we reviewed used the venue’s name in the search engine’s display URL, in a manner that could lead a consumer to believe it was the venue’s official website. In addition, 5 of the 17 webpages used photographs of the venue and 11 provided descriptions of the venue (such as its history) that could imply an association with the venue.

Fees were higher than on other resale sites. Total ticketing fees (such as “service charges”) for the white-label sites ranged from 32 percent to 46 percent of the ticket’s list price, with an average of 38 percent. These fees were generally higher than those of other ticket resellers—for example, the secondary ticket exchanges that we reviewed charged average fees of 31 percent.

Fees were revealed only near the end. All 17 of the white-label sites we reviewed disclosed their fees late in the purchase process. Ticketing fees and total prices were provided only after the consumer had entered either an e-mail address or credit card information.

Other key disclosures were present but varied in their conspicuousness.
All 17 of the white-label webpages we reviewed disclosed on their landing page and check-out page that they were not associated with the venue and were resale sites whose prices may be above face value. However, this information was presented in a small font or in an inconspicuous location (not near the top of the page) for the landing page of 7 of these webpages, as well as for the check-out page of 12 of the 17 webpages.

Ticket prices were higher than other resale sites. The ticket price charged for the events we reviewed on the white-label sites had an average markup of about 180 percent over the primary market price. By comparison, other ticket resale websites we reviewed had an average markup of 74 percent. In some cases, we observed white-label websites selling event tickets when comparable tickets were still available from the primary seller at a lower price. For example, two white-label sites were offering tickets to an event for $90 and $111, respectively, whereas the venue’s official ticketing website was offering comparable seats for $34. (All figures include applicable fees).

Given the significantly higher cost for the same product, some consumers may be purchasing tickets from a white-label site only because they mistakenly believe it to be the official venue’s site. As we discuss in greater detail later in this report, in February 2018, Google implemented requirements for resellers using its AdWords service that are intended, among other things, to prevent consumer confusion related to white-label sites.

Other Consumer Protection Issues Have Been Identified

Ticket fees, the use of speculative tickets, ticket fraud, and designated resale exchanges have raised consumer protection concerns among government agencies, industry stakeholders, and consumer advocates.

Amount Charged for Ticket Fees

Consumer protection advocates, event organizers, and some government entities have expressed concerns about high ticket fees.
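The markups cited in this report are simple percentage comparisons against the primary market price; a minimal sketch (the dollar figures are the report's own white-label example, and the function name is ours):

```python
def markup_pct(resale_price: float, primary_price: float) -> float:
    """Markup of a resale price over the primary (face-value) price, in percent."""
    return (resale_price - primary_price) / primary_price * 100

# Two white-label listings versus the venue's comparable $34 seat (fees included):
print(round(markup_pct(90, 34), 1))   # about 164.7 percent
print(round(markup_pct(111, 34), 1))  # about 226.5 percent
```

Markups on individual listings can thus run well above the 180 percent average reported across the white-label sites reviewed.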
For example, the New York State Attorney General’s report expressed concern about what it deemed high ticketing fees charged for unclear purposes. The report found that among online platforms, vendors of event tickets appeared to charge fees to consumers higher than most other online vendors. Concerns about high ticket fees also were frequently cited in 2009 congressional hearings on the proposed merger of Live Nation and Ticketmaster. In addition, some managers and agents we interviewed said their clients were dissatisfied with high ticket fees. Data we received from FTC’s Consumer Sentinel Network indicated 67 complaints related specifically to event ticket fees from 2014 through 2016. A 2010 analysis by the Department of Justice said that the dominance of one company, Ticketmaster, in the primary ticketing market allowed the company to maintain high ticket fees. The report noted high barriers to entry for competitors, among which were high startup costs, Ticketmaster’s reputation for providing quality service to venues, and long-term exclusive contracts that large venues typically sign with one ticketing company. In addition, with the merger, Live Nation Entertainment owns both the largest primary ticket seller (Ticketmaster) and largest promoter (Live Nation), and owns many large venues and an artist management company. When the ticketing company is owned by a major promoter, the combined firm’s ability to bundle ticketing services and access to artists would require competitors to offer similar services in order to compete effectively, according to the Department of Justice analysis. In an attempt to mitigate these potential effects, the Department of Justice final judgment on the merger prohibited certain forms of retaliation against venues that contract with other ticketing companies. 
In the United Kingdom, where the venue and promoter typically contract with multiple ticket sellers, ticket fees are lower than in the United States—around 10 percent to 15 percent of the ticket’s face value, according to a recent study. Industry experts generally consider the secondary market for event ticketing to be more competitive than the primary market because of the large number of brokers participating in the industry. According to a report by the National Economic Council, fees in this market may be higher than expected because of the lack of transparency described earlier—consumers may be more willing to accept high fees and less likely to comparison shop when fees are disclosed at the end of a multistep purchase process. An FTC staff report made a similar point regarding hotel resort fees, noting that fees disclosed only at the end of the shopping process could harm consumers by making it more difficult to comparison shop for hotels. In addition, consumers who are led to believe that white-label ticketing sites are the official venue site may accept high fees because they think they are buying tickets from the primary ticketing provider, according to two industry representatives with whom we spoke. The level of fees in the secondary market might also be affected by partnerships between the primary and secondary ticket seller. Primary ticketing companies sometimes offer resale options or use of designated resale exchanges (discussed below). The American Antitrust Institute has expressed the view that these relationships can reduce inventory for rival secondary sellers and in turn, can result in higher fees, as the primary ticket seller essentially has a monopoly over both markets.

Speculative Tickets

A speculative ticket refers to a ticket put up for sale by a broker when the broker does not yet have the ticket in hand, perhaps because the event has not yet gone on sale.
Brokers may sell speculative tickets because they anticipate they will be able to secure the tickets (whether on the primary or secondary market) and sell them for a profit. The terms of use of most secondary sites we reviewed did not allow speculative ticket listings. However, while we were unable to identify comprehensive data on the extent of speculative tickets, numerous industry representatives told us that these sites commonly do not enforce this prohibition and listing of speculative tickets was widespread. One common form of speculative ticketing occurs when brokers offer tickets after a popular artist has announced a concert schedule but not yet begun ticket sales, according to industry representatives. Several concerns exist around the use of speculative ticketing:

The buyer may never get the ticket. Speculative ticket listings can result in canceled orders if the broker cannot obtain the ticket, or cannot obtain it at a price that would result in a profit. For example, it was reported that many fans who thought they purchased tickets to the 2015 Super Bowl actually purchased speculative tickets that were subsequently canceled when the supply of tickets was less than expected. According to industry stakeholders, consumers can typically obtain a refund on a canceled order from the broker or secondary ticket exchange, but may still face disappointment, inconvenience, or costs associated with nonrefundable travel to the planned event.

The seat location is not guaranteed. Brokers selling speculative tickets typically do not specify the seat number but rather promise a certain section of the venue, according to two event organizers we interviewed. However, because the broker does not have the ticket in hand, consumers can receive seats that are worse or different than advertised.

Speculative ticketing can cause consumer confusion.
One large ticket resale exchange told us it only allows trusted brokers to sell speculative tickets under certain circumstances and requires sellers to use a special label for these listings. However, we observed other exchanges that are less transparent and do not make clear to the buyer that the ticket is speculative. Consumers may not be aware that tickets have not officially gone on sale yet and eventually may be available on the primary market at a lower price. In its 2010 enforcement action against Ticketmaster and its resale exchange, TicketsNow.com, FTC alleged that the companies failed to tell buyers that many of the resale tickets advertised were being sold speculatively. The settlement required Ticketmaster and its affiliates to disclose if a ticket was being sold speculatively and to otherwise refrain from misrepresenting the status of tickets. FTC staff also sent warning letters to other resale companies that may have been at risk of violating the FTC Act with regard to their speculative ticketing practices. More recently, in 2015 a request by the New York State Attorney General resulted in three major ticket exchanges removing speculative ticket listings for an upcoming tour. Representatives from one of the secondary ticket exchanges told us that while it is difficult to determine if a listing is truly speculative, they have removed listings when they have information from event organizers to indicate that no one could have obtained the tickets. Posing as a consumer, a GAO investigator made 11 inquiries to customer service representatives of two of the largest secondary ticket exchanges about two events listing tickets that appeared to be speculative. The customer service representatives generally acknowledged that the sellers did not yet have the tickets in hand but assured the investigator that the tickets would be provided. 
Fraudulent Tickets

Event tickets are sometimes fraudulent—for example, a fraudster may create and sell a counterfeit ticket or multiple copies of the same print-at-home ticket, according to industry representatives. We did not identify comprehensive data on the extent of ticket fraud. Event organizers with whom we spoke said that they typically only see a handful of fraudulent tickets at popular events, and do not consider fraudulent ticketing to be a widespread problem. A limited search of FTC’s Consumer Sentinel Network data identified relatively few complaints—an estimated 19 related to fraudulent tickets from 2014 through 2016. Industry representatives told us fraudulent tickets are most common for the most popular events and were often purchased on the street outside the venue or through an online classified advertisement. According to industry representatives, fraudulent ticketing is rare on secondary market exchanges, in part because the exchanges can take action against sellers of fraudulent tickets, such as fining them or banning them from future sales. The National Association of Ticket Brokers requires its members to have a policy to reimburse consumers for fraudulent tickets. Two secondary market participants told us the most common fraudulent activity they must address is credit card fraud by buyers rather than invalid tickets posted by sellers.

Designated Resale Exchanges

Designated resale exchanges are resale platforms that are linked to the primary ticket seller. They are most commonly used in major league sports. The four major sports leagues have agreements with one of two ticketing companies that allow consumers to buy and sell tickets through an official “fan-to-fan” resale marketplace.
In addition, some individual teams and venues have an agreement with a third company to use its resale platform, which uses paperless tickets and can facilitate ticket transfers from one consumer to another or restrict transfers altogether (such as with nontransferable tickets). On these exchanges, when a consumer lists a ticket for resale, the exchange electronically confirms the seller’s identity, then cancels the original ticket information (such as a barcode) and reissues the ticket with the new buyer’s name. According to the three sports leagues we interviewed, designated resale exchanges are generally optional—for example, the sports leagues allow brokers and consumers to use other secondary market exchanges as well. A representative of one of the major sports leagues told us the exchanges provide added revenue to teams because the teams receive some of the fee revenues from sales on the exchanges. The exchanges provide data on event attendees, which is valuable for marketing and security purposes, according to another sports league and a primary ticket seller. In addition, the exchanges can reduce resale fraud because the primary seller verifies the legitimacy of the ticket being resold, according to representatives of the three leagues we interviewed. However, some academics and secondary market participants we interviewed have argued that designated resale exchanges work to the detriment of consumers. For example, one academic study stated that a primary ticket seller’s dominance in the secondary market can substantially reduce inventory for rival secondary sellers, thus impeding competition in the resale market. The study stated that reduced secondary market competition, in turn, can result in higher fees. In 2015, a U.S. district court dismissed StubHub’s antitrust complaint against the Golden State Warriors basketball team and Ticketmaster, LLC. 
StubHub claimed that the Warriors’ and Ticketmaster’s exclusive resale agreement restricted secondary market competition for professional basketball tickets in the Bay Area, but the court disagreed. Some designated resale exchanges use price floors, below which consumers may not sell their tickets. One sports league’s exchange has a price floor of $6, while the exchanges of two other sports leagues do not have league-wide price floors, according to league representatives. In addition, we identified instances of individual teams using price floors on their designated resale exchanges. One purpose of price floors is to protect brand reputation, according to league representatives, because too low a ticket price can lessen an event’s perceived value. Price floors also can prevent the secondary market from undercutting a team’s own (primary market) price. However, some consumer organizations and secondary ticket sellers said price floors were unfriendly to consumers. Season ticket holders might be unable to sell tickets for low-demand games for which market prices were lower than the floors. In addition, the New York State Attorney General’s office noted that consumers might not always be aware that price floors were in effect and thus pay more than they would on another exchange.

Effects of Ticket Resale Restrictions and Disclosures on Consumers and Business Would Vary

Policymakers, consumer organizations, and industry participants have proposed or implemented a number of ticket resale restrictions and disclosure requirements, each of which has or would have advantages and disadvantages for consumers or industry participants (see table 5). Event ticketing is not federally regulated, and some industry participants are using or exploring technology and other market-based approaches to address concerns related to secondary market activity.
Nontransferable Tickets Can Reduce the Price Some Consumers Pay but Also Limit Flexibility

Some event organizers make tickets to their events nontransferable—that is, the terms and conditions of the ticket prohibit its transfer from one person (in whose name the ticket is issued) to another. The prohibition can be enforced by requiring consumers to bring to the venue the credit or debit card used for purchase and matching photo identification. The consumer then receives a seat locator slip—akin to a consumer swiping a credit card at the airport to retrieve a boarding pass. At least three states—Connecticut, New York, and Virginia—have laws that restrict ticket issuers’ ability to sell nontransferable tickets. Similar legislation has been introduced in several other states in recent years. The use of nontransferable tickets, even in states where they are legal, is relatively uncommon. For example, an artist advocacy group told us that some events that use them make only the first several rows of seats nontransferable. One large primary ticketing company told us it estimated that less than 5 percent of its events used nontransferable tickets, while another told us nontransferable tickets represented less than 1 percent of its tickets in total. Almost all nontransferable tickets are for concerts; the practice is rare for sporting events and theater, according to industry stakeholders with whom we spoke.

Advantages of Nontransferable Tickets

Advantages to consumers of nontransferable tickets stem from the goal of preventing ticket resale—allowing consumers to pay face value rather than a higher price on the secondary market. As described earlier, markups on the secondary market can be substantial. Proponents of nontransferable tickets, which include a large primary ticket seller and some event organizers and well-known artists, have argued they are an important tool that makes it harder for brokers to resell tickets for profit.
We identified one empirical study on the effects of nontransferable tickets on resale activity. A 2013 study in the Journal of Competition Law and Economics compared two events using nontransferable tickets to comparable events using transferable tickets at the same venues. It found that nontransferable tickets significantly reduced resale and that prices were significantly higher for the relatively small portion of nontransferable tickets that were resold. In addition, there is anecdotal evidence that nontransferable tickets reduce the rate of resale and allow more consumers to access tickets at face-value prices. Many stakeholders told us that making tickets nontransferable reduces secondary market activity, with some stakeholders citing specific examples. For instance, the manager of a large concert venue that primarily uses nontransferable tickets told us that resale is much less common for the venue’s events than for comparable events at similar venues. Similarly, the manager of a major musical artist told us that using nontransferable tickets for a subset of seats on a recent arena tour resulted in minimal listings for those seats on the secondary market. The New York State Attorney General’s report stated that nontransferable paperless tickets “appear to be one of the few measures to have any clear effect in reducing the excessive prices charged on the secondary markets and increasing the odds of fans buying tickets at face value.” But, while we identified evidence that nontransferable tickets limit resale, they may not eliminate resale because sellers may not follow the restriction.

Disadvantages of Nontransferable Tickets

However, other parties—including primary and secondary market participants, consumer advocacy groups, academics, and government agencies—have noted that nontransferable tickets can have the following disadvantages to consumers and adverse effects on markets:

Financial loss.
With nontransferable tickets, ticket buyers who cannot attend an event can lose the ability to recoup their money through resale.

Inconvenience. Nontransferable tickets can be inconvenient because the buyer may need to present identification, a debit or credit card, or both, to gain entry to the venue, which can create delays. Nontransferable tickets also can create challenges for consumers buying tickets for others (including as a gift) because the ticket terms may require the buyer and original purchase card be present to gain entry. However, a primary ticket seller and a promoter told us these obstacles can be overcome—for example, through mechanisms allowing buyers to transfer tickets upon request, and by using processes to speed venue entry (such as automated kiosks).

Economic inefficiency. When nontransferable tickets are priced below the prevailing market price in the primary market, this creates excess demand, and tickets are sold without regard to consumers’ willingness to pay. Traditional economics maintains that an efficient market would result in tickets going to those willing to pay the highest price, which nontransferability inhibits by restricting a secondary market. In addition, some academics have noted that consumers may be less willing to buy nontransferable tickets because they do not offer the “insurance” that comes with the ability to resell them.

Potential impingement on property rights. Some consumer groups and secondary market participants have argued that nontransferable ticket policies impinge on consumers’ property rights. These parties argue that once consumers buy a ticket, they should be able to do whatever they like with it.

Effect on competition. The New York State Attorney General’s office and some economics literature have cautioned that use of nontransferable tickets by primary ticketing companies can impede competition in the secondary market by making these companies’ own resale exchanges the only way to transfer tickets.
Caps on Resale Prices Can Have Advantages and Disadvantages

Several states have caps on the price at which tickets can be resold, while others have repealed caps and some studies have questioned their enforceability. For example, Kentucky generally prohibits the resale of event tickets for more than either face value or the amount charged by the venue, and Massachusetts prohibits resale by brokers of most tickets for more than $2 above face value, with the exception of relevant service charges. New Jersey allows a maximum markup of 20 percent or $3 (whichever is greater) for nonbrokers and a maximum markup of 50 percent for registered brokers, but does not limit resale prices for nonbrokers for sales over the Internet. A number of other states—including Minnesota, Missouri, New York, and Connecticut—repealed their price cap laws in the 2000s. However, the New York State Attorney General’s 2016 report recommended bringing back a price cap, through a “reasonable limit” on resale markups. Price caps are generally intended to protect consumers from high markups and increase the fairness of ticket distribution so that the wealthiest consumers do not have disproportionate access to tickets. In theory, price caps offer consumers the advantages of nontransferable tickets without the disadvantages: they limit high secondary-market prices but still allow consumers to transfer tickets to others or resell tickets they cannot use. However, three government studies we reviewed stated that price caps are difficult to enforce and are rarely complied with. A 1999 report by the New York Attorney General noted that ticket resellers “almost universally disregarded” a cap in place at the time. Representatives from the office told us enforcement of such a cap might be easier now because the secondary market is largely on the Internet, which offers greater price transparency.
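The New Jersey cap described above can be expressed as a simple formula. The sketch below is illustrative only: the function and its name are ours, not statutory language, and it ignores the law's exception for nonbroker Internet sales.

```python
def nj_max_resale_price(face_value: float, registered_broker: bool = False) -> float:
    """Maximum resale price under the New Jersey cap as described:
    nonbrokers may mark up by the greater of 20 percent or $3;
    registered brokers may mark up by as much as 50 percent."""
    if registered_broker:
        return face_value * 1.50
    return face_value + max(0.20 * face_value, 3.00)

print(nj_max_resale_price(10.00))                          # $13.00 (the $3 floor governs)
print(nj_max_resale_price(50.00))                          # $60.00 (20 percent governs)
print(nj_max_resale_price(50.00, registered_broker=True))  # $75.00
```

As the first example shows, the "$3 (whichever is greater)" clause matters only for low-priced tickets; above a $15 face value, the 20 percent markup is always the larger allowance.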
A 2016 study of the United Kingdom’s ticket market noted that enforcement of a price cap was complicated by the fact that ticket resellers were not a well-defined group and sales could occur on various platforms and across jurisdictions. Similarly, the New York State Department of State noted in 2010 that enforcement of price caps can be challenging. In addition, critics of price caps have said that caps might force resale activity underground, which would reduce transparency and protections (such as refund guarantees) that legitimate secondary market exchanges provide. Both the largest ticket exchange and the largest primary market ticket company have opposed price caps, with the ticket exchange arguing that they would result in street-corner transactions, where the risk of counterfeit and fraud would be significant. On formal exchanges, transactions can be monitored and regulated. As with nontransferable tickets, price caps also can create economic inefficiencies because tickets are not necessarily allocated to those willing to pay the highest price. A 2010 study by the New York State Department of State compared publicly available secondary market listings for high-demand concerts in New York to the same artists’ concerts in nearby states with price caps. It found no definitive evidence that price caps resulted in greater or lesser availability on the secondary market or in lower resale prices. The study noted that online resale prices routinely exceeded the price caps. However, the authors of the study acknowledged that their findings were limited by their inability to obtain data on ticket sales and availability from secondary sellers.

Stakeholder Views Vary on Effects of Additional Disclosure Requirements

Legislative or regulatory actions to improve disclosure and transparency of ticket fees, resale markups, and ticket availability have advantages and disadvantages.
Up-front Fee Disclosure

Some government stakeholders have suggested improving fee transparency through a legal requirement to disclose ticket fees earlier in the purchase process. As discussed earlier, ticketing companies in the primary and secondary markets vary on when and how they disclose their fees, and some disclose fees only upon checkout. No federal law expressly addresses fee disclosure in event ticketing. However, at least one state requires disclosure of fees at the beginning of the purchase process. On the primary market, up-front fee disclosure helps decision making by informing consumers of the total ticket price early in the process. It also helps consumers decide whether to buy from the ticketer’s website or at the box office, where there typically are no fees. On the secondary market, up-front fee disclosure aids comparison shopping by helping consumers identify the resale exchange with the best total price. Conversely, sellers that hide fees could gain a competitive advantage because their prices would appear lower than those of competitors that disclose the full price up front. For products and services in general, FTC staff guidance advocates that fees be disclosed up front, particularly before the point at which the consumer has decided to make a purchase. Figure 2 provides examples of different approaches to displaying prices and fees. Currently, FTC relies on the Federal Trade Commission Act—which prohibits unfair or deceptive acts or practices—to address problems related to fee disclosures. But FTC staff said it is challenging and resource-intensive to use the act to address inadequate fee disclosures industry-wide because it requires proving violations on a case-by-case basis.
FTC staff told us that, depending on the circumstances, a legislative requirement specifying how fees must be disclosed could facilitate enforcement activity and create a more level playing field for consumers and sellers. Eleven industry stakeholders and three consumer advocacy groups with whom we spoke similarly expressed support for a requirement that ticketing fees be disclosed up front. Many noted that fees should be fully transparent to consumers. However, a primary ticket seller, two venue managers, and a secondary ticket seller we interviewed questioned the need for an up-front fee disclosure requirement. For example, a primary ticket seller stated that knowing fees up front would not affect a consumer’s decision of whether or not to buy a ticket. The two venue managers believed that the timing of the fee disclosure was not important, as long as fees are disclosed before consumers complete the purchase. Representatives of one secondary ticket exchange said that up-front disclosure of fees could be challenging because a ticket’s fee is not stable—for example, the fee can change based on price fluctuations, different delivery methods, and the use of promotion codes. The National Economic Council has stated that “all-in pricing,” a form of up-front pricing, may be preferable to other methods of fee disclosure. All-in pricing incorporates the ticket’s face value and all mandatory fees and taxes, as illustrated in figure 2 above. According to the National Economic Council, all-in pricing eases comparison across vendors. The FTC staff report analyzing hotel resort fees supported all-in pricing for that industry because it said that breaking out fees, instead of providing a single total price, hindered consumer decision making and often resulted in consumers underestimating the total price.
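As a simple illustration of how all-in pricing differs from disclosing fees at checkout, the sketch below computes a single advertised price that folds in face value, mandatory fees, and tax. The fee names, amounts, and tax rate are hypothetical and not drawn from the report.

```python
def all_in_price(face_value: float, mandatory_fees: dict, tax_rate: float) -> float:
    """Single advertised price incorporating the ticket's face value,
    all mandatory fees, and taxes, as 'all-in pricing' is described
    above. All inputs here are hypothetical examples."""
    subtotal = face_value + sum(mandatory_fees.values())
    return round(subtotal * (1 + tax_rate), 2)

fees = {"service fee": 12.50, "facility charge": 3.00, "order processing": 2.95}
# A drip-pricing display would first advertise $50.00; an all-in
# display advertises the full amount the consumer will actually pay.
print(all_in_price(50.00, fees, 0.08))  # 73.93
```

The gap between the first number a consumer sees ($50.00) and the all-in total ($73.93 in this hypothetical) is the amount the FTC staff report suggests consumers tend to underestimate when fees are broken out.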
Officials from two state attorney general offices told us that all-in pricing could be advantageous, noting that fee disclosures represent their most significant enforcement issue related to the ticketing industry. Three secondary ticket sellers told us they might support a requirement to provide all-in pricing, but only if it was required of all ticket sellers. In 2014, the largest secondary market ticketing company began using all-in pricing, with its listings displaying a single total price that incorporated fees. However, the company soon discontinued all-in pricing as the default because, it told us, it put the company at a competitive disadvantage with other secondary market providers whose fees were not included in the initial ticket price displayed to consumers. A requirement that all ticket sellers provide up-front fee disclosure would mitigate or resolve that issue. One argument against a requirement for all-in pricing is that such regulation would restrict ticket companies’ flexibility in choosing how to disclose fees. In addition, a manager, a promoter, and two artist advocacy groups said all-in pricing could give fans the incorrect impression that the artist was charging the full ticket price and receiving its revenues, because the portion of the price going toward ticketing fees would not be transparent.

Disclosing Face Value on Resale Sites

Some federal and state policymakers have proposed requirements for resellers to disclose a ticket’s face value on secondary ticket websites. Georgia and New York State have enacted similar requirements, with statutes requiring resellers to disclose both the face value of tickets and their list price. Requiring that ticket resellers disclose the ticket’s face value can have several advantages. First, it makes the reseller’s markup transparent. Second, it can help consumers assess the quality of the seat location and compare similar seats across resale listings.
Third, it might reduce the possibility that consumers mistake a reseller’s website for a venue website, as described earlier. This, in turn, could encourage consumers to recognize they are viewing a secondary market exchange and comparison shop for a better price elsewhere. However, a requirement that resellers disclose a ticket’s face value can present challenges because the definition of “face value” may not always be clear, according to three ticket resellers and FTC Bureau of Consumer Protection staff. If the face value does not incorporate fees and taxes charged on the primary market, it would not reflect the full amount paid by the original buyer. Similarly, some tickets are sold through VIP packages that do not itemize the price of the ticket and other components, such as backstage access or parking. In addition, with dynamic pricing, a ticket’s face value can change frequently. Furthermore, season tickets may display a higher face value than the season ticket holder paid because teams usually sell the packages at a discount. A requirement to disclose a ticket’s face value also could create compliance costs for secondary ticket exchanges, and could be difficult to enforce, according to some stakeholders. Three secondary ticket exchanges told us they do not currently collect information on a ticket’s face value and would have difficulty verifying the value provided by the listing broker—in part because of the challenges in defining face value, as described above. The New York State Office of the Attorney General stated in its 2016 report that most resellers cannot comply with the state’s disclosure requirement because most secondary ticket exchanges do not offer the option to show the ticket’s face value alongside its list price, despite having the capability to add such functionality. In addition, an official from Georgia’s Athletic and Entertainment Commission told us that resellers largely disregarded the state’s requirement to disclose face value. 
Disclosing Ticket Availability

Another proposal, advocated by secondary market stakeholders, among others, would require primary ticket sellers to disclose how many tickets are available when an event first goes on sale to the general public. For instance, a venue or ticket seller might be required to provide the venue capacity and number of tickets available for sale after accounting for presales and holds. A 2017 law in Ontario, Canada, requires primary ticket sellers to provide certain information about venue capacity and presales, according to testimony by the Ontario Attorney General. Such a disclosure would provide consumers a clearer picture of ticket availability and help them manage expectations and make informed decisions, according to three consumer advocacy groups and two academics with whom we spoke. In addition, the National Association of Ticket Brokers and a secondary ticket exchange stated that disclosing ticket availability would shed light on what some consider excessive holds and presales by the primary market. They said that brokers often are blamed when events quickly sell out on the primary market, whereas there may have been relatively few tickets available for sale in the first place. The New York State Office of the Attorney General stated that the lack of transparency about the manner in which tickets are distributed creates a level of mistrust among consumers. However, many primary market stakeholders with whom we spoke— including promoters, managers, venue operators, and primary ticket sellers—said such a disclosure would have little-to-no benefit. First, some of them noted that ticket inventory can change as event production details evolve and holds are released, making it difficult to provide an accurate number of tickets available at any one time.
Second, some said this disclosure would be confusing or meaningless for consumers, with one promoter noting that for high-demand events, a consumer’s odds of getting a ticket are low regardless of whether he or she knows the number of available tickets. Another promoter noted that the seat maps used to select seats when purchasing tickets already provide information on ticket availability. Many stakeholders also told us such a disclosure would only help brokers by giving them information useful in buying tickets and setting resale prices. In addition, a venue manager noted that information on ticket sales is considered proprietary and artists and event organizers should not be required to disclose confidential business information.

Event Ticketing Is Not Federally Regulated and Some Stakeholders Cite Market-Based Approaches to Address Concerns about Secondary Market Activity

Federal agencies face constraints in addressing ticketing issues. Some industry players are implementing technological and market-based approaches that seek to address concerns about secondary market activity.

Federal Regulatory Environment

As noted earlier, the event ticketing industry is not federally regulated. In contrast, in the airline industry, the Department of Transportation can issue regulations regarding the disclosure of airline fees. Staff from FTC’s Bureau of Consumer Protection told us that—in addition to the enforcement activity noted earlier—they monitor consumer complaints related to the event ticket industry. However, they said they have resource and other constraints that make it difficult to conduct industry-wide investigations related to ticketing practices. Issues around the level and transparency of fees are not unique to the event ticketing industry. For example, as noted earlier, FTC staff have raised concerns about mandatory “resort fees” charged by many hotels but not immediately disclosed (such as in online price search results).
In addition, according to the National Economic Council, sellers of other goods and services—such as car dealers and telecommunications companies—sometimes offer low prices up front that rise substantially with the addition of mandatory fees revealed later in the purchase process. As such, options for regulating the transparency of fees can have applicability broader than that of event ticketing. As noted earlier, the BOTS Act, which prohibits circumventing security measures or other systems intended to enforce ticket purchasing limits or order rules, went into effect in December 2016. However, a variety of industry, consumer, academic, and government stakeholders have expressed doubt that the BOTS Act would have much of an effect on prohibited bot use. Several of these stakeholders told us that bot users can easily evade detection and that enforcement of the act would be extremely difficult, in part because a lot of bot use occurs—or could shift—outside the United States. As of February 2018, FTC had not taken any enforcement action related to the act, but FTC staff told us they were monitoring the situation. The degree to which legislation combatting bots is effective may depend in part on the extent to which state attorneys general pursue enforcement actions. As of February 2018, we identified two states that had taken enforcement actions related to bot use. In May 2017, the New York State Office of the Attorney General announced settlements totaling $4.11 million with five ticket brokers which, among other offenses, violated New York State law by using bots to purchase and resell tickets. In April 2016, the office announced settlements totaling $2.7 million with six ticket brokers for similar violations. In February 2018, the Washington State Office of the Attorney General announced settlements totaling $60,000 with two ticket companies that used bots in violation of the state’s ticketing law. 
Market-Based Approaches

Industry players, including ticket companies and event organizers, are using or exploring technology and market-based approaches that seek to address concerns about secondary market activity. Examples of these approaches and their potential effects include the following:

Delivery delays. Ticket sellers sometimes use delivery delays, meaning they do not provide the ticket immediately upon purchase. Instead, buyers receive their tickets (in paper or print-at-home form) closer to the day of the event. Delivery delays can inhibit resale activity because they give brokers less time to buy and resell tickets, and allow primary ticket sellers to review whether brokers and bots made bulk purchases, according to some promoters and primary ticket sellers. However, secondary market sellers we interviewed generally argued against delivery delays, with two sellers saying it can be inconvenient and stressful for consumers to receive a ticket just a few days before an event.

Dynamic pricing. The use of dynamic pricing—which adjusts prices over time based on demand—can reduce secondary market activity by pricing tickets closer to their market clearing price. Raising primary market ticket prices, such as through dynamic pricing, does not necessarily benefit consumers but can help ensure that more ticket revenue accrues to the artist or team rather than ticket resellers.

Verified fan program. At least one major ticket company has a program to sell tickets to pre-approved “verified fans,” to help ensure that more consumers and fewer brokers can access tickets on the primary market.

New technology. Two stakeholders noted the potential for distributed ledger technology in ticketing. The technology associates a unique identification code with the ticket and its owner, which can help restrict transfer of the ticket and ensure its authenticity.

Adding concerts.
Artists can seek to make their ticket prices accessible by increasing the supply of seats—for example, one major artist has added concert dates with the express purpose of matching ticket supply to demand to prevent higher resale prices.

Face-value resale exchanges. Resale exchanges used by some artists only allow resale at face value (plus a limited amount to account for primary market fees). This allows consumers to recoup their ticket costs if their plans change, while preventing resale markups.

Market-based approaches also may augment regulatory and enforcement action with regard to problems discussed earlier around transparency. In February 2018, Google’s AdWords service—which offers paid advertising alongside search results—implemented new certification requirements for businesses that resell event tickets. First, resellers using AdWords must clearly disclose on their website or mobile application that they are a secondary market company and not the primary provider of the tickets. They cannot imply they are the primary provider by using words such as “official” or by including the artist or venue name in their website’s URL— practices we noted earlier that were being used by some white-label websites. Second, resellers must prominently disclose when their ticket prices are higher than face value and disclose a price breakdown, including any fees, before the customer provides payment information. Google said in a statement that these measures were intended to protect customers from scams and prevent potential confusion. However, due to the recency of this change, it is too early to determine how it will affect the marketplace. In addition, the advertising industry’s self-regulatory organization has taken steps to address potentially misleading pricing practices in the ticket industry.
The Advertising Self-Regulatory Council sets standards for truth and accuracy for national advertisers, monitors the marketplace, and holds advertisers responsible for their claims. As noted earlier, the organization recently referred a major ticket company to FTC for not following its recommendations to conspicuously disclose its fees. Although the council can play a role in monitoring deceptive advertising related to ticketing, it also faces constraints—for example, it addresses practices case-by-case and its recommendations depend on voluntary compliance by the advertiser. No matter what efforts are made to address concerns about the ticket marketplace, some of the consumer dissatisfaction with event ticketing stems from an intractable issue: demand for tickets to highly popular events exceeds supply. As such, no activity, outside of expanding the supply, is likely to effectively address one key source of consumer dissatisfaction: that tickets are not available to popular sold-out events.

Agency Comments

We provided a draft of this report to DOJ and FTC for review and comment. We received technical comments from FTC, which we incorporated as appropriate. We also provided relevant excerpts of the draft for technical review to selected private parties cited in our report, and included their technical comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to DOJ, FTC, the appropriate congressional committees and members, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to examine (1) what is known about primary and secondary online ticket sales, (2) the consumer protection concerns that exist related to online ticket sales, and (3) potential advantages and disadvantages of selected approaches to address these concerns. The scope of our work generally focused on ticketing for large concert, theater, and sporting events for which there is a resale market. To develop background information on the U.S. ticketing industry, we analyzed business classification codes from the North American Industry Classification System, which assigns a 6-digit code to each industry based on its primary activity that generates the most revenue. The code we selected, “All Other Travel Arrangement and Reservation Services,” includes theatrical and sports ticket agencies, as well as automobile club road and travel services and ticket offices for airline, bus, and cruise ship travel. Because the Census data do not distinguish event ticketing from other services in particular, we determined the data do not provide a reliable count of companies in the event ticketing industry. In addition, we obtained publicly available data from private research firms and reviewed the largest publicly held ticketing companies’ annual public filings with the Securities and Exchange Commission (Form 10-K). We also collected information from firms that collect data related to the ticketing industry, such as IBISWorld and LiveAnalytics. To examine what is known about primary and secondary online ticket sales, we reviewed data related to ticket prices and sales published by Pollstar, a concert industry trade publication, and the Broadway League, a trade organization representing commercial theater. In addition, we obtained and analyzed data on ticket volume and resale prices for a nongeneralizable sample of 22 events.
These events were selected because they (1) occurred in relatively large venues (more than 500 seats) that typically experience ticket resale activity; (2) represented a mix of event types (13 concerts, 3 commercial theater productions, and 6 sporting events); and (3) represented a mix of popularity, including 17 events that would be expected to be in high demand. We defined high-demand events as those that were likely to sell out, which we assessed by reviewing past attendance at other events for the same artist or theatrical event. For sports, we assessed demand by reviewing team performance and rankings. We collected data from October 16 through December 20, 2017. For each event, we analyzed: resale prices and volume, through data obtained from publicly available listings on the websites of two secondary ticket exchanges; primary market prices and availability, through data obtained from the websites of primary market ticket sellers; and event capacity, through data obtained from Billboard or Pollstar (trade publications) for concerts, the Broadway League for theater, and ESPN.com (a media company) for sporting events. To examine consumer protection concerns, we reviewed the websites of 6 primary market ticket sellers, 11 secondary ticket exchanges, and 8 “white-label” ticket websites. We collected data from June 19, 2017, through January 16, 2018. For the primary market ticket seller that represents the majority of market share, we observed the online ticket purchase process for 23 events. Three events were selected using the process described below and the remaining 20 were chosen to reflect 2 events at each of 10 venues, selected because they were among the 200 top-selling arenas or 200 top-selling theaters in the United States in 2017, according to Pollstar. For each of the 5 other primary market ticket sellers and the 11 secondary ticket exchanges, we observed the online ticket purchase process for 1–5 events.
For each primary ticket seller, we selected one event per category (concert, theater, and sports). For consistency and comparability across companies, we also limited events to the same state (which did not extensively limit ticket resale) and time period. We also selected 2 events in another state because they used nontransferable tickets. For the secondary ticket exchanges, we used 3–5 events from our review of primary ticket sellers’ websites. If the event was no longer available, we selected an alternative event at the same venue. For each of the 8 white-label ticket sellers, we reviewed 1–4 events from the events described above. In some cases, the same event was not available so we selected an alternative event at the same venue. For these events—31 events in total—we documented (1) the ticket fees charged, (2) at what point in the purchase process the fees were disclosed, and (3) any restrictions to the ticket. In addition, we assessed the clarity, placement, and font size of the fees, restriction information, and—for white-label websites—disclaimers that the website was a ticket resale website. We worked with a GAO investigator to review the websites that required users to provide an e-mail address or credit card information before viewing fees. Analysts followed a protocol to help ensure consistency of observations and completed a data collection instrument for each website. A second analyst independently reviewed each website to verify the accuracy of information collected by the first analyst. Any discrepancies between the two analysts were identified, discussed, and resolved by referring to the source websites. A GAO investigator acting in an undercover capacity contacted the customer service departments of three large secondary ticket exchanges to inquire about two events for which tickets were nontransferable (not allowed to be resold) and two events for which listed tickets were speculative (not yet in-hand by the seller). 
The nontransferable tickets were identified through press releases and articles about popular touring artists and the speculative tickets were identified by searching for events that had been announced but were not yet for sale on the primary market. The investigator contacted customer service through 16 e-mails to one company and 8 online “live chats” with another company. For the third company, the investigator sent 8 e-mails about nontransferable tickets and did not inquire about speculative tickets because this company labeled such tickets. We also contacted the venues hosting these events to help assess the accuracy of the information provided by the ticket companies’ customer service departments. In addition, we reviewed enforcement activity by federal and state agencies related to ticketing and ticket companies. We also collected information on the number of consumer complaints by requesting the Federal Trade Commission (FTC) conduct a search of its Consumer Sentinel Network database, which includes complaints submitted to FTC, the Consumer Financial Protection Bureau, the Better Business Bureaus, and other sources. The search results covered calendar years 2014–2016 and used the term “ticket” with terms related to events (e.g., “concert,” “sport,” “theater”); sold-out events (e.g., “sold-out”); fees; fraudulent tickets (e.g., “fake”); delayed delivery (e.g., “late”); or nontransferable tickets (e.g., “paperless”). We selected our initial search terms by reviewing terms used in similar complaints on the Better Business Bureau website. We made modifications to our search string based on suggestions from FTC staff who reviewed the results of a preliminary search. To help ensure that results were related to event ticket sellers, we limited the search to complaints against the 6 primary ticket sellers and 11 secondary ticket exchanges in our scope. We assessed the reliability of the complaint data by interviewing agency officials.
In addition, we have assessed the reliability of Consumer Sentinel Network data as part of previous studies related to consumer protection and found the data to be reliable for the purposes of gauging the extent of consumer complaints about event ticketing. However, in general, consumer complaint data have limitations as an indicator of the extent of problems. For example, not all consumers who experience problems may file a complaint, and not all complaints are necessarily legitimate or categorized appropriately. In addition, a consumer could submit a complaint more than once, or to more than one entity, potentially resulting in duplicate complaints. To examine the potential advantages and disadvantages of selected approaches to address consumer protection concerns, we reviewed federal and selected state laws related to event ticket sales. At the federal level, these included the Better Online Ticket Sales Act of 2016 and relevant provisions of the Federal Trade Commission Act. To determine which states had laws related to ticket resale or disclosure, we reviewed compilations of state ticketing laws from the National Association of Ticket Brokers, a secondary ticket seller’s website, and a law firm publication, and we conducted independent research and verification. We reviewed ticketing-related legislation—selected for its relevance to the approaches covered in our review—in Connecticut, New York, and Georgia. We reviewed state government reports and interviewed state officials to get information on the states’ experiences with these laws. We also consulted foreign government reports to obtain information on relevant laws or regulations in Canada and the United Kingdom, which have reported similar consumer protection issues as we reviewed in our report. 
To address all of our objectives, we conducted searches of various databases, such as ProQuest, Academic OneFile, Nexis, Scopus, and the National Bureau of Economic Research, to identify sources such as peer- reviewed academic studies; law review articles; news and trade journal articles; government reports; and hearings and transcripts related to ticketing issues. We examined summary-level information about each piece of literature, and from this review, identified articles that were germane to our report. We generally focused on articles from 2009 and later. We identified additional articles and reports through citations in literature we reviewed and from expert recommendations. For the articles we used to cite empirical findings or to support arguments on advantages and disadvantages of selected resale restrictions or disclosure requirements, we conducted a methodology and soundness review. We eliminated one study on pricing and one study on price caps because we believed the methods were not sufficiently rigorous. In addition, we identified and reviewed relevant congressional testimony on proposed ticketing legislation. We reviewed the Department of Justice’s competitive impact statement and testimonies with regard to the 2010 merger of Ticketmaster and Live Nation. We interviewed staff from the FTC’s Bureau of Consumer Protection and Bureau of Economics, the Department of Justice’s Antitrust Division, and the New York State Office of the Attorney General, and we conducted a group interview, coordinated by the National Association of Attorneys General, with staff from the offices of the attorney general of Pennsylvania and Texas. 
We also interviewed representatives of three consumer organizations: Consumer Action, the National Association of Consumer Advocates, and the National Consumers League; four trade associations: the Broadway League, Future of Music Coalition, National Association of Ticket Brokers, and the Recording Academy; as well as four primary ticket sellers, five secondary ticket exchanges and aggregators, one broker, five venue operators, three event promoters (who also operate venues), five artists’ managers and booking agents, three major sports leagues, and three academics who have studied the ticket marketplace. These organizations and individuals were selected based on their experience and prominence in the marketplace and to provide a range of perspectives. We conducted this performance audit from November 2016 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Our investigative staff conducted all related investigative work in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jason Bromberg (Assistant Director), Lisa Reynolds (Analyst in Charge), and Miranda Berry made key contributions to this report. Also contributing were Enyinnaya David Aja, Maurice Belding, JoAnna Berry, Farrah Graham, John Karikari, Barbara Roesmann, Jena Sinkfield, and Tyler Spunaugle.
Why GAO Did This Study Tickets for concerts, theater, and sporting events can be purchased—typically online—from the original seller (primary market) or a reseller (secondary market). Some state and federal officials and others have raised issues about ticketing fees, the effect of the secondary market on ticket prices, and the transparency and business practices of some industry participants. Event ticketing is not federally regulated. However, federal legislation enacted in 2016 restricts bots (ticket-buying software). Also, the Federal Trade Commission (FTC) has taken two enforcement actions related to deceptive marketing by ticket sellers under its broad FTC Act authority. GAO was asked to review issues around online ticket sales. This report examines (1) what is known about online ticket sales, (2) consumer protection issues related to such sales, and (3) potential advantages and disadvantages of selected approaches to address these issues. GAO focused on concert, theater, and major league sporting events for which there is a resale market. GAO analyzed data on fees, ticket volume, and resale prices from a variety of sources; reviewed the largest ticket sellers' websites and purchase processes; and reviewed federal and state laws and relevant academic literature. GAO also interviewed and reviewed documentation from government agencies; consumer organizations; ticket sellers; venue operators; promoters and managers; sports leagues; and academics (selected for their experience and to provide a range of perspectives). What GAO Found Ticket pricing, resale activity, and fees for events vary. Tickets to popular events sold on the primary market sometimes are priced below the market price, partly because performers want to make tickets affordable and maintain fans' goodwill, according to industry representatives. Tickets are often resold on the secondary market at prices above face value. 
In a nongeneralizable sample of events GAO reviewed, primary and secondary market ticketing companies charged total fees averaging 27 percent and 31 percent, respectively, of the ticket's price. Consumer protection issues include difficulty buying tickets at face value and the fees and marketing practices of some market participants. Professional resellers, or brokers, have a competitive advantage over consumers in buying tickets as soon as they are released. Brokers can use numerous staff and software (“bots”) to rapidly buy many tickets. As a result, many consumers can buy tickets only on the resale market at a substantial markup. Some ticket websites GAO reviewed did not clearly display fees or disclosed them only after users entered payment information. “White-label” resale sites, which often appear as paid results of Internet searches for venues and events, often charged higher fees than other ticket websites—sometimes in excess of 40 percent of the ticket price—and used marketing that might mislead users to think they were buying tickets from the venue. Selected approaches GAO reviewed, such as ticket resale restrictions and disclosure requirements, would have varying effects on consumers and businesses. Nontransferable tickets. At least three states restrict nontransferable tickets—that is, tickets whose terms do not allow resale. Nontransferable tickets allow more consumers to access tickets at a face-value price. However, they also limit consumers' ability to sell tickets they cannot use, can create inconvenience by requiring identification at the venue, and according to economists, prevent efficient allocation of tickets. Price caps. Several states cap the price at which tickets can be resold. But according to some state government studies, the caps generally are not effective because they are difficult to enforce. Disclosure requirements. 
Stakeholders and government research GAO consulted generally supported measures to ensure clearer and earlier disclosure of ticket fees, although views varied on the best approach (for example, to include fees in an “all-in” price or disclose them separately). Some market-based approaches are being used or explored that seek to address concerns about secondary market activity. These approaches include technological tools and ticket-buyer verification to better combat bots. In addition, a major search engine recently required enhanced disclosures from ticket resellers using its advertising platform. The disclosures are intended to protect consumers from scams and prevent potential confusion about who is selling the tickets.
Background IT systems supporting federal agencies and our nation’s critical infrastructures are inherently at risk. These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices comprising the systems and networks. Compounding the risk, federal systems and networks are also often interconnected with other internal and external systems and networks, including the Internet. This increases the number of avenues of attack and expands their attack surface. As systems become more integrated, cyber threats will pose an increasing risk to national security, economic well-being, and public health and safety. Advancements in technology, such as data analytics software for searching and collecting information, have also made it easier for individuals and organizations to correlate data (including PII) and track it across large and numerous databases. For example, social media has been used as a mass communication tool where PII can be gathered in vast amounts. In addition, ubiquitous Internet and cellular connectivity makes it easier to track individuals by allowing easy access to information pinpointing their locations. These advances—combined with the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals—have increased the risk of PII being exposed and compromised. Cybersecurity incidents continue to impact entities across various critical infrastructure sectors. For example, in its 2018 annual data breach investigations report, Verizon reported that 53,308 security incidents and 2,216 data breaches were identified across 65 countries in the 12 months since its prior report. 
Further, the report noted that cybercriminals can often compromise a system in just a matter of minutes—or even seconds—but that it can take an organization significantly longer to discover the breach. Specifically, the report stated that nearly 90 percent of the reported breaches occurred within minutes, while nearly 70 percent went undiscovered for months. These concerns are further highlighted by the number of information security incidents reported by federal executive branch civilian agencies to DHS’s U.S. Computer Emergency Readiness Team (US-CERT). For fiscal year 2017, agencies reported 35,277 such incidents, according to the Office of Management and Budget’s (OMB) 2018 annual report to Congress, which is mandated by the Federal Information Security Modernization Act (FISMA). These incidents include, for example, web-based attacks, phishing, and the loss or theft of computing equipment. Different types of incidents merit different response strategies. However, if an agency cannot identify the threat vector (or avenue of attack), it could be difficult for that agency to define more specific handling procedures to respond to the incident and take actions to minimize similar future attacks. In this regard, incidents with a threat vector categorized as “other” (which includes avenues of attack that are unidentified) made up 31 percent of the various incidents reported to US-CERT. Figure 1 shows the percentage of the different types of incidents reported across each of the nine threat vector categories for fiscal year 2017, as reported by OMB. These incidents and others like them can pose a serious challenge to economic, national, and personal privacy and security. The following examples highlight the impact of such incidents: In March 2018, the Mayor of Atlanta, Georgia, reported that the city was victimized by a ransomware cyberattack.
As a result, city government officials stated that customers were not able to access multiple applications that are used to pay bills or access court related information. In response to the attack, the officials noted that they were working with numerous private and governmental partners, including DHS, to assess what occurred and determine how best to protect the city from future attacks. In March 2018, the Department of Justice reported that it had indicted nine Iranians for conducting a massive cybersecurity theft campaign on behalf of the Islamic Revolutionary Guard Corps. According to the department, the nine Iranians allegedly stole more than 31 terabytes of documents and data from more than 140 American universities, 30 U.S. companies, and five federal government agencies, among other entities. In March 2018, a joint alert from DHS and the Federal Bureau of Investigation (FBI) stated that, since at least March 2016, Russian government actors had targeted the systems of multiple U.S. government entities and critical infrastructure sectors. Specifically, the alert stated that Russian government actors had affected multiple organizations in the energy, nuclear, water, aviation, construction, and critical manufacturing sectors. In July 2017, a breach at Equifax resulted in the loss of PII for an estimated 148 million U.S. consumers. According to Equifax, the hackers accessed people’s names, Social Security numbers (SSN), birth dates, addresses and, in some instances, driver’s license numbers. In April 2017, the Commissioner of the Internal Revenue Service (IRS) testified that the IRS had disabled its data retrieval tool in early March 2017 after becoming concerned about the misuse of taxpayer data. Specifically, the agency suspected that PII obtained outside the agency’s tax system was used to access the agency’s online federal student aid application in an attempt to secure tax information through the data retrieval tool. 
In April 2017, the agency began notifying taxpayers who could have been affected by the breach. In June 2015, OPM reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate, but related, incident had compromised its systems and the files related to background investigations for 21.5 million individuals. In total, OPM estimated 22.1 million individuals had some form of PII stolen, with 3.6 million being a victim of both breaches. Federal Information Security Included on GAO’s High-Risk List Since 1997 Safeguarding federal IT systems and the systems that support critical infrastructures has been a long-standing concern of GAO. Due to increasing cyber-based threats and the persistent nature of information security vulnerabilities, we have designated information security as a government-wide high-risk area since 1997. In 2003, we expanded the information security high-risk area to include the protection of critical cyber infrastructure. At that time, we highlighted the need to manage critical infrastructure protection activities that enhance the security of the cyber and physical public and private infrastructures that are essential to national security, national economic security, and/or national public health and safety. We further expanded the information security high-risk area in 2015 to include protecting the privacy of PII. Since then, advances in technology have enhanced the ability of government and private sector entities to collect and process extensive amounts of PII, which has posed challenges to ensuring the privacy of such information. In addition, high-profile PII breaches at commercial entities, such as Equifax, heightened concerns that personal privacy is not being adequately protected.
Our experience has shown that the key elements needed to make progress toward being removed from the High-Risk List are top-level attention by the administration and agency leaders grounded in the five criteria for removal, as well as any needed congressional action. The five criteria for removal that we identified in November 2000 are as follows:

Leadership Commitment. Demonstrated strong commitment and top leadership support.

Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).

Action Plan. A corrective action plan exists that defines the root cause and solutions, and provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.

Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.

Demonstrated Progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area.

These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. Figure 2 shows the five criteria and illustrative actions taken by agencies to address the criteria. Importantly, the actions listed are not “stand alone” efforts taken in isolation from other actions to address high-risk issues. That is, actions taken under one criterion may be important to meeting other criteria as well. For example, top leadership can demonstrate its commitment by establishing a corrective action plan including long-term priorities and goals to address the high-risk issue and using data to gauge progress—actions which are also vital to the monitoring criterion.
As we reported in the February 2017 high-risk report, the federal government’s efforts to address information security deficiencies had fully met one of the five criteria for removal from the High-Risk List—leadership commitment—and partially met the other four, as shown in figure 3. We plan to update our assessment of this high-risk area against the five criteria in February 2019. Ten Critical Actions Needed to Address Major Cybersecurity Challenges Based on our prior work, we have identified four major cybersecurity challenges: (1) establishing a comprehensive cybersecurity strategy and performing effective oversight, (2) securing federal systems and information, (3) protecting cyber critical infrastructure, and (4) protecting privacy and sensitive data. To address these challenges, we have identified 10 critical actions that the federal government and other entities need to take (see figure 4). The four challenges and the 10 actions needed to address them are summarized following the table. Establishing a Comprehensive Cybersecurity Strategy and Performing Effective Oversight The federal government has been challenged in establishing a comprehensive cybersecurity strategy and in performing effective oversight as called for by federal law and policy. Specifically, we have previously reported that the federal government has faced challenges in establishing a comprehensive strategy to provide a framework for how the United States will engage both domestically and internationally on cybersecurity-related matters. We have also reported on challenges in performing oversight, including monitoring the global supply chain, ensuring a highly skilled cyber workforce, and addressing risks associated with emerging technologies. The federal government can take four key actions to improve the nation’s strategic approach to, and oversight of, cybersecurity. Develop and execute a more comprehensive federal strategy for national cybersecurity and global cyberspace.
In February 2013, we reported that the government had issued a variety of strategy-related documents that addressed priorities for enhancing cybersecurity within the federal government as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector; however, no overarching cybersecurity strategy had been developed that articulated priority actions, assigned responsibilities for performing them, and set timeframes for their completion. Accordingly, we recommended that the White House Cybersecurity Coordinator in the Executive Office of the President develop an overarching federal cybersecurity strategy that included all key elements of the desirable characteristics of a national strategy including, among other things, milestones and performance measures for major activities to address stated priorities; cost and resources needed to accomplish stated priorities; and specific roles and responsibilities of federal organizations related to the strategy’s stated priorities. In response to our recommendation, in October 2015, the Director of OMB and the Federal Chief Information Officer issued a Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government. The plan directed a series of actions to improve capabilities for identifying and detecting vulnerabilities and threats, enhance protections of government assets and information, and further develop robust response and recovery capabilities to ensure readiness and resilience when incidents inevitably occur. The plan also identified key milestones for major activities, resources needed to accomplish milestones, and specific roles and responsibilities of federal organizations related to the strategy’s milestones. Since that time, the executive branch has made progress toward outlining a federal strategy for confronting cyber threats. Table 1 identifies these recent efforts and a description of their related contents.
These efforts provide a good foundation toward establishing a more comprehensive strategy, but more effort is needed to address all of the desirable characteristics of a national strategy that we recommended. The recently issued executive branch strategy documents did not include key elements of desirable characteristics that can enhance the usefulness of a national strategy as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Specifically: Milestones and performance measures to gauge results were generally not included in strategy documents. For example, although the DHS Cybersecurity Strategy stated that its implementation would be assessed on an annual basis, it did not describe the milestones and performance measures for tracking the effectiveness of the activities intended to meet the stated goals (e.g., protecting critical infrastructure and responding effectively to cyber incidents). Without such performance measures, DHS will lack a means to ensure that the goals and objectives discussed in the document are accomplished and that responsible parties are held accountable. According to officials from DHS’s Office of Cybersecurity and Communications, the department is developing a plan for implementing the DHS Cybersecurity Strategy and expects to issue the plan by mid-August 2018. The officials stated that the plan is expected to identify milestones, roles, and responsibilities across DHS to inform the prioritization of future efforts. The strategy documents generally did not include information regarding the resources needed to carry out the goals and objectives. For example, although the DHS Cybersecurity Strategy identified a variety of actions the agency planned to take to perform its cybersecurity mission, it did not articulate the resources needed to carry out these actions and requirements.
Without information on the specific resources needed, federal agencies may not be positioned to allocate such resources and investments and, therefore, may be hindered in their ability to meet national priorities. Most of the strategy documents lacked clearly defined roles and responsibilities for key agencies, such as DHS, DOD, and OMB. These agencies contribute substantially to the nation’s cybersecurity programs. For example, although the National Security Strategy discusses multiple priority actions needed to address the nation’s cybersecurity challenges (e.g., building defensible government networks and deterring and disrupting malicious cyber actors), it does not describe the roles, responsibilities, or the expected coordination of any specific federal agencies, including DHS, DOD, or OMB, or other non-federal entities needed to carry out those actions. Without this information, the federal government may not be able to foster effective coordination, particularly where there is overlap in responsibilities, or hold agencies accountable for carrying out planned activities. Ultimately, a more clearly defined, coordinated, and comprehensive approach to planning and executing an overall strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses. Mitigate global supply chain risks. The global, geographically dispersed nature of the producers and suppliers of IT products is a growing concern. We have previously reported on potential issues associated with the IT supply chain and risks originating from foreign-manufactured equipment. For example, in July 2017, we reported that the Department of State had relied on certain device manufacturers, software developers, and contractor support which had suppliers that were reported to be headquartered in a cyber-threat nation (e.g., China and Russia).
We further pointed out that the reliance on complex, global IT supply chains introduces multiple risks to federal agencies, including insertion of counterfeits, tampering, or installation of malicious software or hardware. Earlier this month, we testified that if such global IT supply chain risks are realized, they could jeopardize the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency’s operations, assets, and employees. These factors highlight the importance and urgency of federal agencies appropriately assessing, managing, and monitoring IT supply chain risk as part of their agencywide information security programs. Address cybersecurity workforce management challenges. The federal government faces challenges in ensuring that the nation’s cybersecurity workforce has the appropriate skills. For example, in June 2018, we reported on federal efforts to implement the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015. We determined that most of the Chief Financial Officers (CFO) Act agencies had not fully implemented all statutory requirements, such as developing procedures for assigning codes to cybersecurity positions. Further, we have previously reported that DHS and DOD had not addressed cybersecurity workforce management requirements set forth in federal laws. In addition, we have reported in the last 2 years that federal agencies (1) had not identified and closed cybersecurity skills gaps, (2) had been challenged with recruiting and retaining qualified staff, and (3) had difficulty navigating the federal hiring process. A recent executive branch report also discussed challenges associated with the cybersecurity workforce. 
Specifically, in response to Executive Order 13800, the Department of Commerce and DHS led an interagency working group exploring how to support the growth and sustainment of future cybersecurity employees in the public and private sectors. In May 2018, the departments issued a report that identified key findings, including: the U.S. cybersecurity workforce needs immediate and sustained improvements; the pool of cybersecurity candidates needs to be expanded through retraining and by increasing the participation of women, minorities, and veterans; a shortage exists of cybersecurity teachers at the primary and secondary levels, faculty in higher education, and training instructors; and comprehensive and reliable data about cybersecurity workforce position needs and education and training programs are lacking. The report also included recommendations and proposed actions to address the findings, including that private and public sectors should (1) align education and training with employers’ cybersecurity workforce needs by applying the National Initiative for Cybersecurity Education Cybersecurity Workforce Framework; (2) develop cybersecurity career model paths; and (3) establish a clearinghouse of information on cybersecurity workforce development education, training, and workforce development programs and initiatives. In addition, in June 2018, the executive branch issued a government reform plan and reorganization recommendations that included, among other things, proposals for solving the federal cybersecurity workforce shortage. In particular, the plan notes that the administration intends to prioritize and accelerate ongoing efforts to reform the way that the federal government recruits, evaluates, selects, pays, and places cyber talent across the enterprise. 
The plan further states that, by the end of the first quarter of fiscal year 2019, all CFO Act agencies, in coordination with DHS and OMB, are to develop a critical list of vacancies across their organizations. Subsequently, OMB and DHS are to analyze these lists and work with OPM to develop a government-wide approach to identifying or recruiting new employees or reskilling existing employees. Regarding cybersecurity training, the plan notes that OMB is to consult with DHS to standardize training for cybersecurity employees, and should work to develop an enterprise-wide training process for government cybersecurity employees. Ensure the security of emerging technologies. As the devices used in daily life become increasingly integrated with technology, the risk to sensitive data and PII also grows. Over the last several years, we have reported on weaknesses in addressing vulnerabilities associated with emerging technologies, including: IoT devices, such as fitness trackers, cameras, and thermostats, that continuously collect and process information are potentially vulnerable to cyber-attacks; IoT devices, such as those acquired and used by DOD employees or that DOD itself acquires (e.g., smartphones), may increase the security risks to the department; vehicles that are potentially susceptible to cyber-attack through technology, such as Bluetooth; the unknown impact of artificial intelligence on cybersecurity; and advances in cryptocurrencies and blockchain technologies. Executive branch agencies have also highlighted the challenges associated with ensuring the security of emerging technologies. Specifically, in May 2018, in response to Executive Order 13800, the Department of Commerce and DHS issued a report on the opportunities and challenges in reducing the botnet threat. The opportunities and challenges are centered on six principal themes, including the global nature of automated, distributed attacks; effective tools; and awareness and education.
The report also provides recommended actions, including that federal agencies should increase their understanding of what software components have been incorporated into acquired products and establish a public campaign to support awareness of IoT security. In our previously discussed reports related to this cybersecurity challenge, we made a total of 50 recommendations to federal agencies to address the weaknesses identified. As of July 2018, 48 recommendations had not been implemented. These outstanding recommendations include 8 priority recommendations, meaning that we believe that they warrant priority attention from heads of key departments and agencies. These priority recommendations include addressing weaknesses associated with, among other things, agency-specific cybersecurity workforce challenges and agency responsibilities for supporting mitigation of vehicle network attacks. Until our recommendations are fully implemented, federal agencies may be limited in their ability to provide effective oversight of critical government-wide initiatives, address challenges with cybersecurity workforce management, and better ensure the security of emerging technologies. In addition to our prior work related to the federal government’s efforts to establish key strategy documents and implement effective oversight, we also have several ongoing reviews related to this challenge. 
These include reviews of: the CFO Act agencies’ efforts to submit complete and reliable baseline assessment reports of their cybersecurity workforces; the extent to which DOD has established training standards for cyber mission force personnel, and efforts the department has made to achieve its goal of a trained cyber mission force; selected agencies’ ability to implement cloud service technologies and notable benefits this might have on agencies; and the federal approach and strategy to securing agency information systems, to include federal intrusion detection and prevention capabilities and the intrusion assessment plan. Securing Federal Systems and Information The federal government has been challenged in securing federal systems and information. Specifically, we have reported that federal agencies have experienced challenges in implementing government-wide cybersecurity initiatives, addressing weaknesses in their information systems and responding to cyber incidents on their systems. This is particularly concerning given that the emergence of increasingly sophisticated threats and continuous reporting of cyber incidents underscores the continuing and urgent need for effective information security. As such, it is important that federal agencies take appropriate steps to better ensure they have effectively implemented programs to protect their information and systems. We have identified three actions that the agencies can take. Improve implementation of government-wide cybersecurity initiatives. Specifically, in January 2016, we reported that DHS had not ensured that the National Cybersecurity Protection System (NCPS) had fully satisfied all intended system objectives related to intrusion detection and prevention, information sharing, and analytics. 
In addition, in February 2017, we reported that the DHS National Cybersecurity and Communications Integration Center's (NCCIC) functions were not being performed in adherence with the principles set forth in federal laws. We noted that, although NCCIC was sharing information about cyber threats as it should, the center did not have metrics to measure whether that information was timely, relevant, and actionable, as prescribed by law. Address weaknesses in federal information security programs. We have previously identified a number of weaknesses in agencies' protection of their information and information systems. For example, over the past 2 years, we have reported that:
- most of the 24 agencies covered by the CFO Act had weaknesses in each of the five major categories of information system controls (i.e., access controls, configuration management controls, segregation of duties, contingency planning, and agency-wide security management);
- three agencies—the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and the Food and Drug Administration—had not effectively implemented aspects of their information security programs, which resulted in weaknesses in those agencies' security controls;
- information security weaknesses in selected high-impact systems at four agencies—the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veterans Affairs—were cited as a key reason that the agencies had not effectively implemented elements of their information security programs;
- DOD's process for monitoring the implementation of cybersecurity guidance had weaknesses and resulted in the closure of certain tasks (such as completing cyber risk assessments) before they were fully implemented; and
- agencies had not fully defined the role of their Chief Information Security Officers, as required by FISMA.
We also recently testified that, although the government had acted to protect federal information systems, additional work was needed to improve agency security programs and cyber capabilities. In particular, we noted that further efforts were needed by agencies to implement our prior recommendations in order to strengthen their information security programs and technical controls over their computer networks and systems. Enhance the federal response to cyber incidents. We have reported that certain agencies have had weaknesses in responding to cyber incidents. For example:
- as of August 2017, OPM had not fully implemented controls to address deficiencies identified as a result of its 2015 cyber incidents;
- DOD had not identified the National Guard's cyber capabilities (e.g., computer network defense teams) or addressed challenges in its exercises;
- as of April 2016, DOD had not identified, clarified, or implemented all components of its support of civil authorities during cyber incidents; and
- as of January 2016, DHS's NCPS had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information.

In the public versions of the reports previously discussed for this challenge area, we made a total of 101 recommendations to federal agencies to address the weaknesses identified. As of July 2018, 61 recommendations had not been implemented. These outstanding recommendations include 14 priority recommendations to address weaknesses associated with, among other things, the information security programs at the National Aeronautics and Space Administration, OPM, and the Securities and Exchange Commission. Until these recommendations are implemented, these federal agencies will be limited in their ability to ensure the effectiveness of their programs for protecting information and systems. In addition to our prior work, we also have several ongoing reviews related to the federal government's efforts to protect its information and systems.
These include reviews of:
- Federal Risk and Authorization Management Program (FedRAMP) implementation, including an assessment of the implementation of the program's authorization process for protecting federal data in cloud environments;
- the Equifax data breach, including an assessment of federal oversight of credit reporting agencies' collection, use, and protection of consumer PII;
- the Federal Communications Commission's Electronic Comment Filing System security, including a review of the agency's detection of and response to a May 2017 incident that reportedly impacted the system;
- DOD's efforts to improve the cybersecurity of its major weapon systems;
- DOD's whistleblower program, including an assessment of the policies, procedures, and controls related to the access and storage of sensitive and classified information needed for the program;
- IRS's efforts to (1) implement security controls and the agency's information security program, (2) authenticate taxpayers, and (3) secure tax information; and
- federal intrusion detection and prevention capabilities.

Protecting Cyber Critical Infrastructure

The federal government has been challenged in working with the private sector to protect critical infrastructure. This infrastructure includes both public and private systems vital to national security and other efforts, such as providing the essential services that underpin American society. As the cybersecurity threat to these systems continues to grow, federal agencies have millions of sensitive records that must be protected. This critical infrastructure threat could have national security implications, and more effort should be made to ensure that the infrastructure is not breached. To help address this issue, NIST developed the cybersecurity framework—a voluntary set of cybersecurity standards and procedures for industry to adopt as a means of taking a risk-based approach to managing cybersecurity.
However, additional action is needed to strengthen the federal role in protecting critical infrastructure. Specifically, we have reported on other critical infrastructure protection issues that need to be addressed. For example:
- Entities within the 16 critical infrastructure sectors reported encountering four challenges to adopting the cybersecurity framework, such as being limited in their ability to commit the necessary resources toward framework adoption and lacking the knowledge and skills needed to effectively implement the framework.
- Major challenges existed to securing the electricity grid against cyber threats. These challenges included monitoring implementation of cybersecurity standards, ensuring that security features are built into smart grid systems, and establishing metrics for cybersecurity.
- DHS and other agencies needed to enhance cybersecurity in the maritime environment. Specifically, DHS did not include cyber risks in its existing risk assessments, nor did it address cyber risks in guidance for port security plans.
- Sector-specific agencies were not adequately measuring their sectors' progress in cybersecurity.
- DOD and the Federal Aviation Administration identified a variety of operations and physical security risks that could adversely affect DOD missions.

We made a total of 19 recommendations to federal agencies to address these and other weaknesses. These include, for example, 9 recommendations to 9 sector-specific agencies to develop methods to determine the level and type of cybersecurity framework adoption across their respective sectors. As of July 2018, none of the 19 recommendations had been implemented. Until these recommendations are implemented, the federal government will continue to be challenged in fulfilling its role in protecting the nation's critical infrastructure.
In addition to our prior work related to the federal government's efforts to protect critical infrastructure, we also have several ongoing reviews focusing on:
- the physical and cybersecurity risks to the pipelines across the country responsible for transmitting oil, natural gas, and other hazardous liquids;
- the cybersecurity risks to the electric grid; and
- the privatization of utilities at DOD installations.

Protecting Privacy and Sensitive Data

The federal government has been challenged in protecting privacy and sensitive data. Advances in technology, including powerful search technology and data analytics software, have made it easy to correlate information about individuals across large and numerous databases, which have become very inexpensive to maintain. In addition, ubiquitous Internet connectivity has facilitated sophisticated tracking of individuals and their activities through mobile devices such as smartphones and fitness trackers. Given that access to data is so pervasive, personal privacy hinges on ensuring that databases of PII maintained by government agencies or on their behalf are protected both from inappropriate access (i.e., data breaches) and from inappropriate use (i.e., use for purposes not originally specified when the information was collected). Likewise, the trend in the private sector of collecting extensive and detailed information about individuals needs appropriate limits. The vast number of individuals potentially affected by data breaches at federal agencies and private sector entities in recent years increases concerns that PII is not being properly protected. Federal agencies should take two types of actions to address this challenge area. In addition, we have previously proposed two matters for congressional consideration aimed at better protecting PII. Improve federal efforts to protect privacy and sensitive data.
We have issued several reports noting that agencies had deficiencies in protecting privacy and sensitive data that needed to be addressed. For example:
- The Department of Health and Human Services' (HHS) Centers for Medicare and Medicaid Services (CMS) and external entities were at risk of compromising Medicare beneficiary data due to a lack of guidance and proper oversight.
- The Department of Education's Office of Federal Student Aid had not properly overseen its school partners' records or information security programs.
- HHS had not fully addressed key security elements in its guidance for protecting the security and privacy of electronic health information.
- CMS had not fully protected the privacy of users' data on state-based marketplaces.
- Poor planning and ineffective monitoring had resulted in the unsuccessful implementation of government initiatives aimed at eliminating the unnecessary collection, use, and display of SSNs.

Appropriately limit the collection and use of personal information and ensure that it is obtained with appropriate knowledge or consent. We have issued a series of reports that highlight a number of the key concerns in this area. For example:
- The emergence of IoT devices can facilitate the collection of information about individuals without their knowledge or consent.
- Federal laws for smartphone tracking applications have not generally been well enforced.
- The FBI has not fully ensured privacy and accuracy related to its use of face recognition technology.

We have previously suggested that Congress consider amending laws, such as the Privacy Act of 1974 and the E-Government Act of 2002, because they may not consistently protect PII. Specifically, we found that while these laws and guidance set minimum requirements for agencies, they may not consistently protect PII in all circumstances of its collection and use throughout the federal government and may not fully adhere to key privacy principles.
However, revisions to the Privacy Act and the E-Government Act have not yet been enacted. Further, we also suggested that Congress consider strengthening the consumer privacy framework and review issues such as the adequacy of consumers' ability to access, correct, and control their personal information, as well as privacy controls related to new technologies such as web tracking and mobile devices. However, these suggested changes have not yet been enacted. We also made a total of 29 recommendations to federal agencies to address the weaknesses identified. As of July 2018, 28 recommendations had not been implemented. These outstanding recommendations include 6 priority recommendations to address weaknesses associated with, among other things, publishing privacy impact assessments and improving the accuracy of the FBI's face recognition services. Until these recommendations are implemented, federal agencies will be challenged in their ability to protect privacy and sensitive data and to ensure that the collection and use of such data are appropriately limited. In addition to our prior work, we have several ongoing reviews related to protecting privacy and sensitive data.
These include reviews of:
- IRS's taxpayer authentication efforts, including the steps the agency is taking to monitor and improve its authentication methods;
- the extent to which the Department of Education's Office of Federal Student Aid's policies and procedures for overseeing non-school partners' protection of federal student aid data align with federal requirements and guidance;
- data security issues related to credit reporting agencies, including a review of the causes and impacts of the August 2017 Equifax data breach;
- the extent to which Equifax assessed, responded to, and recovered from its August 2017 data breach;
- federal agencies' efforts to remove PII from shared cyber threat indicators; and
- how the federal government has overseen Internet privacy, including the roles of the Federal Communications Commission and the Federal Trade Commission, and the strengths and weaknesses of the current oversight authorities.

In summary, since 2010, we have made over 3,000 recommendations to agencies aimed at addressing the four cybersecurity challenges. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because many of these recommendations have not been implemented. Of the roughly 3,000 recommendations made since 2010, nearly 1,000 had not been implemented as of July 2018. We have also designated 35 of these as priority recommendations, and as of July 2018, 31 had not been implemented. The federal government and the nation's critical infrastructure are dependent on IT systems and electronic data, which make them highly vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation's security, prosperity, and well-being. Nevertheless, the security over these systems and data is inconsistent, and urgent actions are needed to address ongoing cybersecurity and privacy challenges.
Specifically, the federal government needs to implement a more comprehensive cybersecurity strategy and improve its oversight, including maintaining a qualified cybersecurity workforce; address security weaknesses in federal systems and information and enhance cyber incident response efforts; bolster the protection of cyber critical infrastructure; and prioritize efforts to protect individuals' privacy and PII. Until our recommendations are addressed and actions are taken to address the four challenges we identified, the federal government, the nation's critical infrastructure, and the personal information of U.S. citizens will be increasingly susceptible to the multitude of cyber-related threats that exist. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

Questions about this testimony can be directed to Nick Marinos, Director, Cybersecurity and Data Protection Issues, at (202) 512-9342 or [email protected]; and Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Jon Ticehurst, Assistant Director; Kush K. Malhotra, Analyst-in-Charge; Chris Businsky; Alan Daigle; Rebecca Eyler; Chaz Hubbard; David Plocher; Bradley Roach; Sukhjoot Singh; Di'Mond Spencer; and Umesh Thakkar.

Related GAO Reports

Information Security: Supply Chain Risks Affecting Federal Agencies. GAO-18-667T. Washington, D.C.: July 12, 2018.
Information Technology: Continued Implementation of High-Risk Recommendations Is Needed to Better Manage Acquisitions, Operations, and Cybersecurity. GAO-18-566T. Washington, D.C.: May 23, 2018.
Electronic Health Information: CMS Oversight of Medicare Beneficiary Data Security Needs Improvement. GAO-18-210. Washington, D.C.: April 5, 2018.
Technology Assessment: Artificial Intelligence: Emerging Opportunities, Challenges, and Implications. GAO-18-142SP. Washington, D.C.: March 28, 2018.
GAO Strategic Plan 2018-2023: Trends Affecting Government and Society. GAO-18-396SP. Washington, D.C.: February 22, 2018.
Critical Infrastructure Protection: Additional Actions Are Essential for Assessing Cybersecurity Framework Adoption. GAO-18-211. Washington, D.C.: February 15, 2018.
Cybersecurity Workforce: Urgent Need for DHS to Take Actions to Identify Its Position and Critical Skill Requirements. GAO-18-175. Washington, D.C.: February 6, 2018.
Homeland Defense: Urgent Need for DOD and FAA to Address Risks and Improve Planning for Technology That Tracks Military Aircraft. GAO-18-177. Washington, D.C.: January 18, 2018.
Federal Student Aid: Better Program Management and Oversight of Postsecondary Schools Needed to Protect Student Information. GAO-18-121. Washington, D.C.: December 15, 2017.
Defense Civil Support: DOD Needs to Address Cyber Incident Training Requirements. GAO-18-47. Washington, D.C.: November 30, 2017.
Federal Information Security: Weaknesses Continue to Indicate Need for Effective Implementation of Policies and Practices. GAO-17-549. Washington, D.C.: September 28, 2017.
Information Security: OPM Has Improved Controls, but Further Efforts Are Needed. GAO-17-614. Washington, D.C.: August 3, 2017.
Defense Cybersecurity: DOD's Monitoring of Progress in Implementing Cyber Strategies Can Be Strengthened. GAO-17-512. Washington, D.C.: August 1, 2017.
State Department Telecommunications: Information on Vendors and Cyber-Threat Nations. GAO-17-688R. Washington, D.C.: July 27, 2017.
Internet of Things: Enhanced Assessments and Guidance Are Needed to Address Security Risks in DOD. GAO-17-668. Washington, D.C.: July 27, 2017.
Information Security: SEC Improved Control of Financial Systems but Needs to Take Additional Actions. GAO-17-469. Washington, D.C.: July 27, 2017.
Information Security: Control Deficiencies Continue to Limit IRS's Effectiveness in Protecting Sensitive Financial and Taxpayer Data. GAO-17-395. Washington, D.C.: July 26, 2017.
Social Security Numbers: OMB Actions Needed to Strengthen Federal Efforts to Limit Identity Theft Risks by Reducing Collection, Use, and Display. GAO-17-553. Washington, D.C.: July 25, 2017.
Information Security: FDIC Needs to Improve Controls over Financial Systems and Information. GAO-17-436. Washington, D.C.: May 31, 2017.
Technology Assessment: Internet of Things: Status and Implications of an Increasingly Connected World. GAO-17-75. Washington, D.C.: May 15, 2017.
Cybersecurity: DHS's National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely. GAO-17-163. Washington, D.C.: February 1, 2017.
High-Risk Series: An Update. GAO-17-317. Washington, D.C.: February 2017.
IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO-17-8. Washington, D.C.: November 30, 2016.
Electronic Health Information: HHS Needs to Strengthen Security and Privacy Guidance and Oversight. GAO-16-771. Washington, D.C.: September 26, 2016.
Defense Civil Support: DOD Needs to Identify National Guard's Cyber Capabilities and Address Challenges in Its Exercises. GAO-16-574. Washington, D.C.: September 6, 2016.
Information Security: FDA Needs to Rectify Control Weaknesses That Place Industry and Public Health Data at Risk. GAO-16-513. Washington, D.C.: August 30, 2016.
Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority. GAO-16-686. Washington, D.C.: August 26, 2016.
Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities. GAO-16-521. Washington, D.C.: August 2, 2016.
Information Security: Agencies Need to Improve Controls over Selected High-Impact Systems. GAO-16-501. Washington, D.C.: May 18, 2016.
Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy. GAO-16-267. Washington, D.C.: May 16, 2016.
Smartphone Data: Information and Issues Regarding Surreptitious Tracking Apps That Can Facilitate Stalking. GAO-16-317. Washington, D.C.: May 9, 2016.
Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-world Attack. GAO-16-350. Washington, D.C.: April 25, 2016.
Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016.
Healthcare.gov: Actions Needed to Enhance Information Security and Privacy Controls. GAO-16-265. Washington, D.C.: March 23, 2016.
Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System. GAO-16-294. Washington, D.C.: January 28, 2016.
Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress. GAO-16-79. Washington, D.C.: November 19, 2015.
Critical Infrastructure Protection: Cybersecurity of the Nation's Electricity Grid Requires Continued Attention. GAO-16-174T. Washington, D.C.: October 21, 2015.
Maritime Critical Infrastructure Protection: DHS Needs to Enhance Efforts to Address Port Cybersecurity. GAO-16-116T. Washington, D.C.: October 8, 2015.
Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013.
Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace. GAO-13-663. Washington, D.C.: September 25, 2013.
Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010.
Privacy: Alternatives Exist for Enhancing Protection of Personally Identifiable Information. GAO-08-536. Washington, D.C.: May 19, 2008.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Federal agencies and the nation's critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on information technology systems to carry out operations. The security of these systems and the data they use is vital to public confidence and national security, prosperity, and well-being. The risks to these systems are increasing as security threats evolve and become more sophisticated. GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include protecting cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. GAO was asked to update its information security high-risk area. To do so, GAO identified the actions the federal government and other entities need to take to address cybersecurity challenges. GAO primarily reviewed prior work issued since the start of fiscal year 2016 related to privacy, critical federal functions, and cybersecurity incidents, among other areas. GAO also reviewed recent cybersecurity policy and strategy documents, as well as information security industry reports of recent cyberattacks and security breaches.

What GAO Found

GAO has identified four major cybersecurity challenges and 10 critical actions that the federal government and other entities need to take to address them. GAO continues to designate information security as a government-wide high-risk area due to increasing cyber-based threats and the persistent nature of security vulnerabilities. GAO has made over 3,000 recommendations to agencies aimed at addressing cybersecurity shortcomings in each of these action areas, including protecting cyber critical infrastructure, managing the cybersecurity workforce, and responding to cybersecurity incidents. Although many recommendations have been addressed, about 1,000 have not yet been implemented.
Until these shortcomings are addressed, federal agencies' information and systems will be increasingly susceptible to the multitude of cyber-related threats that exist.

What GAO Recommends

GAO has made over 3,000 recommendations to agencies since 2010 aimed at addressing cybersecurity shortcomings. As of July 2018, about 1,000 still needed to be implemented.
FCC Has Not Evaluated Lifeline's Performance in Meeting Program Goals but Has Taken Recent Steps toward Evaluation

FCC has not evaluated Lifeline's performance in meeting program goals but, as we found in May 2017, has taken recent steps toward evaluation. According to GAO's Cost Estimating and Assessment Guide, to use public funds effectively the government must meet the demands of today's changing world by employing effective management practices and processes, including the measurement of government program performance. In the past, FCC has called for program evaluations to review the administration of universal service generally, including Lifeline, but has not completed such evaluations. For example, FCC specified that it would review USAC 1 year after USAC was appointed as the permanent administrator to determine whether the universal service programs were being administered effectively. This review, which was to have been completed by 1999, was never done. In 2005, FCC awarded a contract to the National Academy of Public Administration to study the administration of the USF programs generally, examine the tradeoffs of continuing with the current structure, and identify ways to improve the oversight and operation of universal service programs. However, we reported in May 2017 that, according to FCC officials, FCC subsequently terminated the contract and the study was not conducted. In March 2015, we found that FCC had not evaluated Lifeline's effectiveness in achieving its performance goals of ensuring the availability of voice service for low-income Americans while minimizing the burden on those who contribute to the USF. We recommended, and FCC agreed, that FCC conduct a program evaluation to determine the extent to which Lifeline is efficiently and effectively reaching its performance goals. Our May 2017 report raised additional questions about Lifeline's effectiveness in meeting its program goals.
For example, we reported that:
- FCC did not know how many of the 12.3 million households receiving Lifeline as of December 2016 also had non-Lifeline phone service (for which they pay out of pocket) along with their Lifeline benefit. Without knowing whether participants are using Lifeline as a primary or secondary phone service, we concluded, it is difficult for FCC to determine whether it is achieving the program's goal of increasing telephone subscribership among low-income consumers while minimizing the USF contribution burden.
- FCC revamped Lifeline in March 2016 to focus on broadband adoption and generally phase out phone service, in part because FCC recognized that most eligible consumers have phones without Lifeline and to close the "digital divide" in broadband adoption between low-income households and the rest of the country. However, broadband adoption rates have steadily increased for the low-income population absent a Lifeline subsidy for broadband.
- At least two companies operating in a total of at least 21 states had begun offering in-home non-Lifeline wireline broadband service for less than $10 per month to individuals who participate in public-assistance programs, such as SNAP or public housing. These providers' $10-per-month rate for their own low-income broadband service was less expensive than FCC's broadband reasonable-comparability cost benchmark of approximately $55 per month, which Lifeline subscribers would be paying for a similar level of service.

Our May 2017 report also found that FCC has recently taken some steps toward evaluating Lifeline's performance in meeting program goals. Specifically, in the 2016 Lifeline Modernization Order, FCC instructed USAC to hire an outside, independent, third-party evaluator to complete a program evaluation of Lifeline's design, function, and administration.
The order stipulated that the outside evaluator must complete the evaluation and that USAC must submit the findings to FCC by December 2020. FCC expects Lifeline enrollment to increase as the program is expanded to include broadband service, and this expansion could carry with it increased risks of fraud, waste, and abuse, as was the case with past expansions of the program. Completing the program evaluation as planned, and as we recommended in 2015, would help FCC determine whether Lifeline is meeting its stated goals of increasing telephone and broadband subscribership among low-income consumers while minimizing the burden on those who contribute to the USF.

Financial Controls Exist, with Others Planned, for the Lifeline Program, but Weaknesses Remain

In our May 2017 report we found that FCC and USAC have established financial controls for Lifeline, including obtaining and reviewing information about billing, collecting, and disbursing funds. They have also developed plans to establish other controls, such as establishing a national eligibility verifier (National Verifier) to determine the eligibility of applicants seeking Lifeline service. However, as discussed in our May 2017 report, we found that weaknesses remain, including the lack of requirements to effectively control program expenditures above approved levels, concerns about the transparency of fees on customers' telephone bills, and a lack of FCC guidance that could result in Lifeline and other providers paying inconsistent USF contributions.
To address these concerns, we recommended the Chairman of FCC (1) require Commissioners to review and approve, as appropriate, spending above the budget in a timely manner; (2) require a review of customer bills as part of the contribution audit to include an assessment of whether the charges, including USF fees, meet FCC Truth-in-billing rules with regard to labeling, so customer bills are transparent, and appropriately labeled and described, to help consumers detect and prevent unauthorized changes; and (3) respond to USAC requests for guidance and address pending requests concerning USF contribution requirements to ensure the contribution factor is based on complete information and that USF pass-through charges are equitable. FCC generally agreed with those recommendations. In addition, we found that USAC’s banking practices for the USF result in oversight and accountability risks that FCC has plans to mitigate. Specifically, FCC maintains USF funds—whose net assets as of September 2016 exceeded $9 billion—outside of the U.S. Treasury pursuant to Office of Management and Budget (OMB) advice provided in April 2000. OMB had concluded that the USF does not constitute public money subject to the Miscellaneous Receipts Statute, 31 U.S.C. § 3302, a statute that requires that money received for the use of the United States be deposited in the Treasury unless otherwise authorized by law. As such, USF balances are held in a private bank account. However, subsequent to this OMB advice, in February 2005 we reported that FCC should reconsider this determination in light of the status of universal service monies as federal funds. As discussed in our May report, according to correspondence we received from the FCC Chairman’s Senior Legal Counsel, as of March 2017, FCC had decided to move the funds to the Treasury. FCC identified potential benefits of moving the funds to the Treasury. 
For example, FCC explained that having the funds in the Treasury would provide USAC with better tools for fiscal management of the funds, including access to real-time data and more accurate and transparent data. According to FCC, until the USF is moved into the Treasury, there are also some oversight risks associated with holding the fund in a private account. For example, the contract governing the account does not provide FCC with authority to direct bank activities with respect to the funds in the event USAC ceases to be administrator of the USF. After we raised this matter with FCC officials during the course of our review, beginning in November 2016, FCC sought to amend the contract between USAC and the bank to enable the bank to act on FCC instructions independently of USAC in the event USAC ceases to be the administrator. However, as of May 2017, the amended contract had not yet been signed. While FCC has put in place a preliminary plan to move the USF funds to the Treasury, as well as plans to amend the existing contract with the bank as an interim measure, several years have passed since this issue was brought to FCC’s attention without corrective actions being implemented. Further, under FCC’s preliminary plan, it would not be until next year, at the earliest, that the funds would be moved to the Treasury. In May 2017, while reviewing a draft of this report, a senior FCC official informed us that FCC experienced some challenges associated with moving the funds to the Treasury, such as coordinating across the various entities involved, which raised some questions as to when and perhaps whether the funds would be moved. Until FCC finalizes and implements its plan and moves the USF funds, the risks that FCC identified will persist and the benefits of having the funds in the Treasury will not be realized.
As a result, in our May 2017 report, we recommended that the Chairman of FCC take action to ensure that the preliminary plans to transfer the USF funds from the private bank to the Treasury are finalized and implemented as expeditiously as possible. FCC agreed with this recommendation.

FCC and USAC Have Implemented Some Controls to Improve Subscriber Eligibility Verification, but Weaknesses Remain

FCC and USAC have implemented controls to improve subscriber eligibility verification, such as implementing the NLAD database in 2014, which helps carriers identify and resolve duplicate claims for Lifeline-supported services. However, as discussed in our May 2017 report, our analysis of data from 2014, as well as our undercover attempts to obtain Lifeline service, revealed significant weaknesses in subscriber eligibility verification. Lifeline providers are generally responsible for verifying the eligibility of potential subscribers, but we found that their ability to do so is hindered by a lack of access to, or awareness of, state eligibility databases that can be used to confirm eligibility prior to enrollment. For example, not all states have databases that Lifeline providers can use to confirm eligibility and some providers with whom we spoke were unaware of databases that were potentially available to them. These challenges might be overcome if FCC establishes a National Verifier, as it plans to do nationwide by the end of 2019, to remove responsibility for verifying eligibility from the providers.
Additionally, since USAC was not maintaining and providing information to providers about these databases, we recommended they maintain and disseminate an updated list of state eligibility databases available to Lifeline providers that includes the qualifying programs those databases access to confirm eligibility, to help ensure Lifeline providers are aware of state eligibility databases and USAC audits of Lifeline providers can verify that available state databases are being utilized to verify subscriber eligibility. FCC agreed with the recommendation. For our May 2017 report, to identify Lifeline subscribers who were potentially ineligible to participate in the program, we tested the eligibility of subscribers who claimed participation in Medicaid, SNAP, and Supplemental Security Income (SSI) using NLAD data as of November 2014. We focused our analysis on these three programs because FCC reported in 2012 that these were the three qualifying programs through which most subscribers qualify for Lifeline. We compared approximately 3.4 million subscribers who, according to information entered in NLAD, were eligible for Lifeline due to enrollment in one of these three programs to eligibility data for these programs. On the basis of our analysis of NLAD and public-assistance data, we could not confirm that a substantial portion of selected Lifeline beneficiaries were enrolled in the Medicaid, SNAP, and SSI programs, even though, according to the data, they qualified for Lifeline by stating on their applications that they participated in one of these programs. In total, we were unable to confirm whether 1,234,929 subscribers out of the 3,474,672 who we reviewed, or about 36 percent, participated in the qualifying benefit programs they stated on their Lifeline enrollment applications or were recorded as such by Lifeline providers. 
If providers claimed and received reimbursement for each of the 1.2 million subscribers, then the subsidy amount associated with these individuals equals $11.4 million per month, or $137 million annually, at the current subsidy rate of $9.25 per subscriber. Because Lifeline disbursements are based on providers’ reimbursement claims, not the number of subscribers a provider has in NLAD, our analysis of NLAD data could not confirm actual disbursements associated with these individuals. Given that our review was limited to those enrolled in SNAP or Medicaid in selected case-study states, and SSI in states that participated in NLAD at the time of our analysis, our data results are likely understated compared to the entire population of Lifeline subscribers. These results indicate that potential improper payments have occurred and have gone undetected. We plan to refer potentially ineligible subscribers identified through our analysis for appropriate action as warranted. Our undercover testing, as discussed in our May 2017 report, also found that Lifeline may be vulnerable to ineligible subscribers obtaining service and the testing found examples of Lifeline providers being nonresponsive, or providing inaccurate information. To conduct our 21 tests, we contacted 19 separate providers to apply for Lifeline service. We applied using documentation fictitiously stating that we were enrolled in an eligible public-assistance program or met the Lifeline income requirements. We were approved to receive Lifeline services by 12 of the 19 Lifeline providers using fictitious eligibility documentation. We also experienced instances during our undercover tests where our calls to providers were disconnected, and where Lifeline provider representatives transmitted erroneous information, or were unable to provide assistance on questions about the status of our application. 
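The match-rate and subsidy-exposure figures above follow from simple arithmetic. A minimal sketch (the subscriber counts and the $9.25 monthly rate are taken from the report; as noted, actual disbursements could differ because reimbursement is based on providers’ claims, not NLAD counts):

```python
# Reproduce the report's eligibility-match and potential subsidy-exposure figures.
reviewed = 3_474_672     # subscribers checked against Medicaid, SNAP, and SSI data
unconfirmed = 1_234_929  # subscribers whose qualifying-program participation could not be confirmed
subsidy_rate = 9.25      # Lifeline subsidy per subscriber per month, in dollars

share_unconfirmed = unconfirmed / reviewed     # "about 36 percent"
monthly_exposure = unconfirmed * subsidy_rate  # "$11.4 million per month"
annual_exposure = monthly_exposure * 12        # "$137 million annually"

print(f"{share_unconfirmed:.0%}")                 # 36%
print(f"${monthly_exposure / 1e6:.1f}M / month")  # $11.4M / month
print(f"${annual_exposure / 1e6:.0f}M / year")    # $137M / year
```

These are upper-bound exposure figures: they assume a reimbursement claim was made for every unconfirmed subscriber.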
For example, one Lifeline provider told us that our application was not accepted by the company because our signature had eraser marks; however, our application had been submitted via an electronic form on the provider’s website and was not physically signed. While our tests are illustrative and not representative of all Lifeline providers or applications submitted, these results suggest that Lifeline providers do not always properly verify eligibility and that applicants may potentially encounter similar difficulties when applying for Lifeline benefits. As described above, these challenges might be overcome if FCC establishes a National Verifier, as it plans to do nationwide by the end of 2019, to remove responsibility for verifying eligibility from the providers.

FCC and USAC Have Taken Some Steps to Improve Oversight of Lifeline Providers, but Remaining Gaps Could Allow Noncompliance with Program Rules

FCC and USAC have implemented some mechanisms to enhance oversight of Lifeline providers, as discussed in our May 2017 report, but we found that remaining gaps could allow noncompliance with program rules. For example, in July 2014, FCC took additional measures to combat fraud, waste, and abuse by creating a strike force to investigate violations of USF program rules and laws. According to FCC, the creation of the strike force is part of the agency’s commitment to stopping fraud, waste, and abuse and policing the integrity of USF programs and funds. Similarly, in June 2015, FCC adopted a rule requiring Lifeline providers to retain eligibility documentation used to qualify consumers for Lifeline support to improve the auditability and enforcement of FCC rules. However, we found FCC and USAC have limited oversight of Lifeline provider operations and the internal controls used to manage those operations.
The current structure of the program relied throughout 2015 and 2016 on over 2,000 Eligible Telecommunication Carriers (ETC) to provide Lifeline service to eligible beneficiaries. These companies are relied on to not only provide telephone service, but also to create Lifeline applications, train employees and subcontractors, and make eligibility determinations for millions of applicants. USAC’s reliance on Lifeline providers to determine eligibility and subsequently submit accurate and factual invoices is a significant risk for allowing potentially improper payments to occur, and under current reporting guidelines these occurrences would likely go undetected and unreported. Federal internal control standards state that management retains responsibility for the performance and processes assigned to service organizations performing operational functions. Consistent with internal control standards, FCC and USAC would need to understand the extent to which a sample of these internal controls are designed and implemented effectively to ensure these controls are sufficient to address program risks and achieve the program’s objectives. We identified key Lifeline functions for which FCC and USAC had limited visibility. For example, we found instances of Lifeline providers utilizing domestic or foreign-operated call centers for Lifeline enrollment. When we asked FCC officials about Lifeline providers that outsource program functions to call centers, including those overseas, they told us that such information is not tracked by FCC or USAC. With no visibility over these call centers, FCC and USAC do not have a way to verify whether such call centers comply with Lifeline rules. FCC and USAC have limited knowledge about potentially adverse incentives that providers might offer employees to enroll subscribers. 
For example, some Lifeline providers pay commissions to third-party agents to enroll subscribers, creating a financial incentive to enroll as many subscribers as possible. When companies responsible for distributing Lifeline phones and service offer employees monetary incentives to enroll subscribers, the possibility of fictitious or ineligible individuals being enrolled in Lifeline increases. Highlighting the extent of the potential risk for companies, in April 2016 FCC announced approximately $51 million in proposed fines against one Lifeline provider, due to, among other things, its sales agents purposely enrolling tens of thousands of ineligible and duplicate subscribers in Lifeline using shared or improper eligibility documentation. To test internal controls over employees associated with Lifeline for our May 2017 report, we sought employment with a company that enrolls individuals in Lifeline. We were hired by a company and were allowed to enroll individuals in Lifeline without ever meeting any company representatives, conducting an employment interview, or completing a background check. After we were hired, we completed two fictitious Lifeline applications as an employee of the company, successfully enrolled both of these fictitious subscribers in Lifeline using fabricated eligibility documentation, and received compensation for these enrollments. The results of these tests are illustrative and cannot be generalized to any other Lifeline provider. We plan to refer this company for appropriate action as warranted. As stated above, these challenges might be overcome if FCC establishes a National Verifier, as it plans to do nationwide by the end of 2019, to remove responsibility for verifying eligibility from the providers. In addition, in May 2017, we made two recommendations to help address control weaknesses and related program-integrity risks.
Specifically, we recommended that FCC establish time frames to evaluate compliance plans and develop instructions with criteria for how FCC reviewers should evaluate these plans to meet Lifeline’s program goals. We also recommended that FCC develop an enforcement strategy that details what violations lead to penalties and apply this strategy as consistently as possible to all Lifeline providers to ensure consistent enforcement of program violations. FCC generally agreed with these recommendations. In conclusion, Lifeline’s large and diffuse administrative structure creates a complex internal control environment susceptible to significant risk of fraud, waste, and abuse. FCC’s and USAC’s limited oversight of important aspects of program operations further complicates the control environment—heightening program risk. We are encouraged by FCC’s recent steps to address weaknesses we identified, such as the 2016 order establishing a National Verifier, which, if implemented as planned, could further help to address weaknesses in the eligibility-determination process. We also plan to monitor the implementation status of the recommendations we made in May 2017. Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

For further information regarding this testimony, please contact Seto J. Bagdoyan at (202) 512-6722 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Dave Bruno (Assistant Director), Scott Clayton (Analyst-in-Charge), and Daniel Silva. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Created in the mid-1980s, FCC's Lifeline program provides discounts to eligible low-income households for home or wireless telephone and, as of December 2016, broadband service. Lifeline reimburses telephone companies that offer discounts through the USF, which in turn is generally supported by consumers by means of a fee charged on their telephone bills. This testimony is based on GAO's May 2017 report and discusses steps FCC has taken to measure Lifeline's performance in meeting goals; steps FCC and USAC have taken to enhance controls over finances, subscribers, and providers; and any weaknesses that might remain. For the May 2017 report, GAO analyzed documents and interviewed officials from FCC and USAC. GAO also analyzed subscriber data from 2014 and performed undercover tests to identify potential improper payment vulnerabilities. The results of this analysis and testing are illustrative, not generalizable.

What GAO Found

In its May 2017 report GAO found the Federal Communications Commission (FCC) has not evaluated the Lifeline program's (Lifeline) performance in meeting its goals of increasing telephone and broadband subscribership among low-income households by providing financial support, but it has recently taken steps to begin to do so. FCC does not know how many of the 12.3 million households receiving Lifeline as of December 2016 also have non-Lifeline phone service, or whether participants are using Lifeline as a secondary phone service. FCC revamped Lifeline in March 2016 to focus on broadband adoption; however, broadband adoption rates have steadily increased for the low-income population absent a Lifeline subsidy for broadband. Without an evaluation, which GAO recommended in March 2015, FCC is limited in its ability to demonstrate whether Lifeline is efficiently and effectively meeting its program goals.
In a March 2016 Order, FCC announced plans for an independent third party to evaluate Lifeline design, function, and administration by December 2020. FCC and the Universal Service Administrative Company (USAC)—the not-for-profit organization that administers the Lifeline program—have taken some steps to enhance controls over finances and subscriber enrollment. For example, FCC and USAC established some financial and management controls regarding billing, collection, and disbursement of funds for Lifeline. To enhance the program's ability to detect and prevent ineligible subscribers from enrolling, FCC oversaw completion in 2014 of an enrollment database and, in June 2015, FCC adopted a rule requiring Lifeline providers to retain eligibility documentation used to qualify consumers for Lifeline support to improve the auditability and enforcement of FCC rules. Nevertheless, in its May 2017 report, GAO found weaknesses in several areas. For example, Lifeline's structure relies on over 2,000 Eligible Telecommunication Carriers that are Lifeline providers to implement key program functions, such as verifying subscriber eligibility. This complex internal control environment is susceptible to risk of fraud, waste, and abuse as companies may have financial incentives to enroll as many customers as possible. On the basis of its matching of subscriber to benefit data, GAO was unable to confirm whether about 1.2 million individuals of the 3.5 million it reviewed, or 36 percent, participated in a qualifying benefit program, such as Medicaid, as stated on their Lifeline enrollment application. FCC's 2016 Order calls for the creation of a third-party national eligibility verifier by the end of 2019 to determine subscriber eligibility. Further, FCC maintains the Universal Service Fund (USF)—with net assets of $9 billion, as of September 2016—outside the Department of the Treasury in a private bank account. 
In 2005, GAO recommended that FCC reconsider this arrangement given that the USF consists of federal funds. In addition to addressing any risks associated with having the funds outside the Treasury, FCC identified potential benefits of moving the funds. For example, by having the funds in the Treasury, USAC would have better tools for fiscal management of the funds. In March 2017, FCC developed a preliminary plan to move the USF to the Treasury. Until FCC finalizes and implements its plan and actually moves the USF funds, the risks that FCC identified will persist and the benefits of having the funds in the Treasury will not be realized.

What GAO Recommends

In its May 2017 report, GAO made seven recommendations, including that FCC ensure plans to transfer the USF from the private bank to the Treasury are finalized and implemented expeditiously. FCC generally agreed with all the recommendations.
Background

Overview of IRS Administration of LIHTC Program

IRS administration of the LIHTC program involves overseeing compliance on the part of allocating agencies and taxpayers and developing and publishing regulations and guidance. IRS is responsible for reviewing LIHTC information on three IRS forms that are the basis of LIHTC program reporting and then determining whether program requirements have been met. Taxpayer noncompliance with LIHTC requirements may result in IRS denying claims for the credit in the current year or recapturing—taking back—credits claimed in prior years. Published guidance may include revenue rulings and procedures, notices, and announcements. Other guidance for the program includes an Audit Technique Guide for Completing Form 8823 that includes specific instructions for allocating agencies, including when site visits and file reviews are to be performed, and guidelines for determining noncompliance in areas such as health and safety standards, rent ceilings, income limits, and tenant qualifications.

Role of Allocating Agencies

State and local allocating agencies are responsible for day-to-day administration of the LIHTC program based on Section 42 of the Internal Revenue Code and Treasury regulations. More specifically, allocating agencies are responsible for the following:

Awarding tax credits. Each state receives an annual allocation of LIHTCs, determined by statutory formula. Allocating agencies then competitively award the tax credits to owners of qualified rental housing projects that reserve all or a portion of their units for low-income tenants, consistent with the agencies’ QAPs. Developers typically attempt to obtain funding for their projects by attracting third-party investors willing to contribute equity to the projects; the project investors then can claim the tax credits.

Monitoring costs.
Section 42 states that allocating agencies must consider the reasonableness of costs and their uses for proposed LIHTC projects, allows for agency discretion in making this determination, and also states that credits allocated to a project may not exceed the amount necessary to assure its feasibility and its viability as a low-income housing project. However, Section 42 does not provide a definition or offer guidance on determining how to calculate these amounts.

Monitoring compliance. After credits are awarded, Treasury regulations state that allocating agencies must conduct regular site visits to physically inspect units and review tenant files for eligibility information. The agencies also have reporting and notification requirements. For example, allocating agencies must notify IRS of any noncompliance found during inspections and ensure that owners of LIHTC properties annually certify they met certain requirements for the preceding 12-month period.

Role of Investors and Syndicators

Developers of awarded projects typically attempt to obtain funding for their projects by attracting third parties willing to invest in the project in exchange for the ability to claim tax credits. The developer sells an ownership interest in the project to one or more investors, or in many instances, to a fund managed by a syndicator who acts as an intermediary between the developer and investors. Investors and syndicators play several roles in the LIHTC market. For example, syndicators help initially connect investors and developers and oversee acquisition of projects. Once a project is acquired, syndicators perform ongoing monitoring and asset management to help ensure the project complies with LIHTC requirements and is financially sound. Syndicators attempt to identify potential problems and intercede if necessary, such as replacing under- or nonperforming general partners, and may use their own reserves to help resolve problems.
In exchange for these services, syndicators typically are compensated through an initial acquisition fee—usually a percentage of the gross equity raised—and an annual asset management fee. Syndicators that we surveyed for our 2017 report were nonprofit or for-profit entities, generally had multistate operations, and averaged more than 20 years of experience with the LIHTC program. The 32 syndicators we surveyed collectively had raised more than $100 billion in LIHTC equity since 1986, helping to fund more than 20,000 properties and about 1.4 million units placed-in-service through 2014. Projects for which these syndicators raised equity in 2005–2014 represented an estimated 75 percent of all LIHTC properties placed-in-service in that period.

Selected Allocating Agencies Implemented Differing Practices for Key LIHTC Requirements

As we reported in 2016, allocating agencies implemented requirements for QAPs in varying ways and had processes in place to meet requirements for credit awards. Allocating agencies also had procedures to assess costs, but determined award amounts for projects differently, used various cost limits and benchmarks to determine reasonableness of costs, and used widely varying criteria for basis boosts. Agencies also had processes in place to monitor compliance. However, some of these practices raised concerns.

Agencies Implemented Requirements for Allocation Plans and Award Credits in Varying Ways

In our 2016 report, we generally found that allocating agencies implemented requirements for QAPs in varying ways and had processes in place to meet requirements for awarding the tax credit. Based on our 2016 review of 58 QAPs and our nine site visits, we found the QAPs did not always contain, address, or mention preferences and selection criteria required in Section 42. Rather, some allocating agencies incorporated the information into other LIHTC program documents, or implemented the requirements in practice.
While Section 42 specifies some selection criteria (such as project location or tenant populations with special housing needs), it also more broadly states that a QAP set forth selection criteria “appropriate to local conditions.” As a result, allocating agencies have the flexibility to create their own methods and rating systems for evaluating applicants. We found that nearly all the allocating agencies that we reviewed used points or a threshold system for evaluating applicants. They used criteria such as qualifications of the development team, cost effectiveness, or leveraging of funds from other federal or state programs. According to Section 42, allocating agencies must notify the chief executive officer (or the equivalent) of the local jurisdiction in which the project is to be located. However, some agencies imposed an additional requirement of letters of support from local officials. Specifically, as of 2013, we found that of the 58 agencies in our review, 12 agencies noted that their review or approval of applications was contingent on letters of support, and another 10 agencies awarded points for letters of local support. HUD officials have cited fair housing concerns in relation to any preferences or requirements for local approval or support because of the discriminatory influence these factors could have on where affordable housing is built. In December 2016, IRS issued a revenue ruling that clarified that Section 42 neither requires nor encourages allocating agencies to reject all proposals that do not obtain the approval of the locality where the project developer proposes to place the project. Allocating agencies we visited for our 2016 report had processes in place to meet other Section 42 requirements, including awarding credit to nonprofits and long-term affordability of projects. Allocating agencies must allocate at least 10 percent of the state housing credit ceiling to projects involving qualified nonprofit organizations.
All nine allocating agencies we visited had a set-aside of at least 10 percent of credits to be awarded to projects involving nonprofits. Section 42 also requires allocating agencies to execute an extended low-income housing commitment of at least 30 years before a building can receive credits. For example, one allocating agency we visited required developers to sign agreements for longer extended-use periods, while some agencies awarded points to applications whose developers elect longer periods.

Agencies We Reviewed Had Procedures to Assess Costs and Used Widely Varying Criteria for Basis Boosts

Allocating agencies we reviewed for our 2016 report had procedures to assess costs, but determined award amounts for projects differently and used various cost limits and benchmarks to determine reasonableness of costs. All nine allocating agencies we visited required applicants to submit detailed cost and funding estimates, an explanation of sources and uses, and expected revenues as part of their applications. These costs were then evaluated to determine a project’s eligible basis (total allowable costs associated with depreciable costs in the project), which in turn determined the qualified basis and ultimately the amount of tax credits to be awarded.

Reasonableness of costs. We found that allocating agencies had different ways for determining the reasonableness of project costs. Based on our analysis of 58 QAPs and our nine site visits, agencies had established various limits against which to evaluate the reasonableness of submitted costs, such as applying limits on development costs, total credit awards, developer fees, and builder’s fees. Section 42 does not provide a definition of reasonableness of costs, giving allocating agencies discretion on how best to determine what costs are appropriate for their respective localities.

Discretionary basis boosts. Allocating agencies commonly “boosted” the basis for projects, but used widely varying criteria for doing so.
Section 42 notes that an increase or “boost” of up to 130 percent in the eligible basis can be awarded by an allocating agency to a housing development in a qualified census tract or difficult development area. According to our QAP analysis, 44 of 58 plans we reviewed included criteria for awarding discretionary basis boosts, with 16 plans explicitly specifying the use of basis boosts for projects as needed for financial or economic feasibility. The discretionary boosts were applied to different types of projects and on different scales (for example, statewide or citywide). For example, we found one development that received a boost to the eligible basis for having received certain green building certifications, although the applicant did not demonstrate financial need or request the boost. The allocating agency told us that all projects with specified green building certifications received the boost automatically, as laid out in its QAP. At the time of our review, agency officials said that the agency had changed its practices to prevent automatic basis boosts from being applied and required additional checks for financial need. In another QAP we reviewed, one agency described an automatic 130 percent statewide boost for all LIHTC developments. According to the officials, the automatic statewide boost remained in effect because officials made the determination that nearly all projects would need it for financial feasibility. Section 42 requires that allocating agencies determine that “discretionary basis boosts” were necessary for buildings to be financially feasible before granting them to developers. Section 42 does not require allocating agencies to document their analysis for financial feasibility (with or without the basis boost). 
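To make the basis boost mechanics above concrete, here is a minimal sketch of the credit computation: eligible basis (times any boost) and the applicable fraction yield the qualified basis, which times the credit rate gives the annual credit claimed over the 10-year credit period. The dollar amounts, the 100 percent applicable fraction, and the flat 9 percent rate are illustrative assumptions, not figures from the report:

```python
# Hypothetical LIHTC credit computation showing the effect of a 130% basis boost.
# All dollar amounts are invented for illustration.
eligible_basis = 10_000_000  # total allowable depreciable development costs
applicable_fraction = 1.0    # assumed share of the project reserved for low-income units
credit_rate = 0.09           # approximate "9%" rate for new construction
boost = 1.30                 # up-to-130% boost for qualified census tracts / difficult development areas

def annual_credit(basis_multiplier: float) -> float:
    """Annual credit = qualified basis (boosted eligible basis x applicable fraction) x credit rate."""
    qualified_basis = eligible_basis * basis_multiplier * applicable_fraction
    return qualified_basis * credit_rate

# Credits are claimed each year over a 10-year credit period.
print(round(annual_credit(1.0) * 10))    # → 9000000 total without the boost
print(round(annual_credit(boost) * 10))  # → 11700000 total with the full boost
```

The 30 percent increase in credits per boosted project is what drives the oversubsidy concern discussed above: the same state credit ceiling funds fewer projects when boosts are granted without a demonstrated need.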
However, legislative history for the Housing and Economic Recovery Act of 2008 included expectations that allocating agencies would set standards in their QAPs for which projects would be allocated additional credits, communicate the reasons for designating such criteria, and publicly express the basis for allocating additional credits to a project. In addition, NCSHA (a nonprofit advocating for state allocating agencies) recommends that allocating agencies set standards in their QAPs to determine eligibility for discretionary basis boosts and make the determinations publicly available.

Agencies We Visited Had Processes for Monitoring Compliance

In our 2016 report we found that the allocating agencies we visited had processes for and conducted compliance monitoring of projects consistent with Section 42 and Treasury regulations. Treasury regulations require allocating agencies to conduct on-site physical inspections for at least 20 percent of the project’s low-income units and file reviews for the tenants in these units at least once every 3 years. In addition, allocating agencies must annually review owner certifications that affirm that properties continue to meet LIHTC program requirements. Allocating agencies we visited followed regulatory requirements on when to conduct physical inspections and tenant file reviews. Allocating agencies we visited generally used electronic databases to track the frequency of inspections, file reviews, and certifications, although most of these agencies documented these reviews on paper. All the allocating agencies we visited had inspection and review processes in place to monitor projects following the 15-year compliance period, as required under Section 42. Allocating agencies must execute an extended low-income housing commitment to remain affordable for a minimum of 30 years before a tax credit project can receive credits.
After the compliance period is over, the obligation for allocating agencies to report to IRS on compliance issues ends and investors are no longer at risk for tax credit recapture.

IRS Oversight of LIHTC Has Been Minimal

Our prior reports found IRS conducted few reviews of allocating agencies and had not reviewed how agencies determined basis boosts. Data on noncompliance were not reliable and IRS used little of the reported program information. IRS had not directly participated in an interagency initiative to augment HUD's databases with LIHTC property inspection data. Both our 2015 and 2016 reports concluded that opportunities existed to enhance oversight of the LIHTC program, specifically by leveraging the knowledge and experience of HUD.

IRS Conducted Few Reviews of Allocating Agencies and Had Not Reviewed How Agencies Determined Basis Boosts

Few reviews of allocating agencies. In our 2015 report, we found that IRS had conducted seven audits (reviews) of allocating agencies from 1986 (inception of the program) through May 2015. In the audits, IRS found issues related to QAPs, including missing preferences and selection criteria. However, as we reported in both 2015 and 2016, IRS officials stated that they did not regard regular review of QAPs as part of their responsibilities as outlined in Section 42 and therefore did not regularly review the plans. IRS officials said that allocating agencies have primary responsibility to ensure that the plans meet Section 42 preferences and selection criteria. IRS officials noted that review of a QAP to determine if the plan incorporated the elements specified in Section 42 could occur if IRS were to audit an allocating agency.

No review of agencies' discretionary basis boosts. In our 2016 report, we found IRS had not reviewed the criteria allocating agencies used to award discretionary basis boosts.
The use of basis boosts has implications for LIHTC housing production because of the risk of oversubsidizing projects, which would reduce the amount of the remaining allocable subsidies and yield fewer LIHTC projects overall within a state. IRS also had not provided guidance to agencies on how to determine the need for the additional basis to make projects financially feasible. IRS officials told us that Section 42 gives allocating agencies the discretion to determine if projects receive a basis boost and does not require documentation of financial feasibility. Additionally, IRS officials explained that because the overall amount of subsidies allocated to a state is limited, the inherent structure of the program discourages states from oversubsidizing projects. However, during our 2016 review, we observed a range of practices for awarding discretionary basis boosts, including a blanket basis boost that could result in fewer projects being subsidized and provide more credits than necessary for financial feasibility. We concluded that because IRS did not regularly review QAPs, many of which list criteria for discretionary basis boosts, IRS was unable to determine the extent to which agency policies could result in oversubsidizing of projects. Some Program Data Were Not Reliable and IRS Used Little of Reported Program Information Unreliable data. We reported in 2015 that IRS had not comprehensively captured information reported for the program in its Low-Income Housing Credit database and the existing data were not complete and reliable. IRS guidance requires the collection of data on the LIHTC program in an IRS database, which records information submitted by allocating agencies and taxpayers on three forms. The forms include Credit allocation and certification (Form 8609). The two-part form is completed by the allocating agency and the taxpayer. Agencies report the allocated amount of tax credits available over a 10-year period for each building in a project. 
The taxpayer reports the date on which the building was placed-in-service (suitable for occupancy).

Noncompliance or building disposition (Form 8823). Allocating agencies must complete and submit this form to IRS if an on-site physical inspection of a LIHTC project finds any noncompliance. The form records any findings (and corrections of previous findings) based on the inspection of units and review of the low-income tenant certifications.

Annual report (Form 8610). Allocating agencies use this form to report annually on the credits they have allocated; IRS staff review the reports to ensure allocations do not exceed a statutorily prescribed ceiling for that year.

Based on our analysis of the information in the database, we found in 2015 that the data on credit allocation and certification information were not sufficiently reliable to determine if basic requirements for the LIHTC program were being achieved. For example, we could not determine how often LIHTC projects were placed-in-service within required time frames. We concluded that without improvements to the data quality of credit allocation and certification information, it was difficult to determine if credit allocation and placed-in-service requirements had been met by allocating agencies and taxpayers, respectively. Thus, we recommended that IRS address the weaknesses identified in data entry and programming controls to ensure reliable data are collected on credit allocations. At the time of our 2015 report, IRS acknowledged the need for improvements in its controls and procedures (including data entry and quality reviews). IRS officials agreed that these problems should be corrected and data quality reviews should be conducted on an ongoing basis. As of March 2017, in response to our recommendation, IRS officials said that they had explored possibilities to improve the database, which not only houses credit allocation information, but also data from noncompliance and building disposition forms.
Specifically, IRS is working to move the database to a new and updated server, which will address weaknesses identified in data entry and programming controls. IRS expects to complete the data migration step by early fall of 2017. Until IRS implements its plan to improve the data, this recommendation will remain open. Limited noncompliance data, analysis, and guidance on reporting. We found in our 2015 and 2016 reports that IRS had done little with the information it collects on noncompliance. IRS had captured little information from the Form 8823 submissions in its database and had not tracked the resolution of noncompliance issues or analyzed trends in noncompliance. As of April 2016, the database included information from about 4,200 of the nearly 214,000 Form 8823s IRS received since 2009 (less than 2 percent of forms received). For our 2015 report, officials told us the decision was made during the 2008–2009 timeframe to input information only from forms that indicated a change in building disposition, such as a foreclosure. IRS focused on forms indicating this change for reasons including the serious nature of the occurrence for the program and impacts on taxpayers’ ability to receive credit. Officials also stated it was not cost effective to input all the form information and trend analysis on all types of noncompliance was not useful for purposes of ensuring compliance with the tax code. In addition, as we reported in both 2015 and 2016, IRS had assessed little of the noncompliance information collected on the Form 8823 or routinely used it to determine trends in noncompliance. Because little information was captured in the Low-Income Housing Credit database, IRS was unable to provide us with program-wide information on the most common types of noncompliance. Furthermore, IRS had no method to determine if issues reported as uncorrected had been resolved or if properties had recurring noncompliance issues. 
In our 2016 report, we also found inconsistent reporting on the noncompliance forms, the reasons for which included conflicting IRS guidance, different interpretations of the guidance by allocating agencies, and lack of IRS feedback about agency submissions. IRS developed guidelines for allocating agencies to use when completing the Form 8823, the “fundamental purpose” of which was identified as providing standardized operational definitions for the noncompliance categories listed on the form. The IRS guide adds that it is important that noncompliance be consistently identified, categorized, and reported and notes that the benefits of consistency included enhanced program administration by IRS. Allocating agencies we visited had various practices for submitting Form 8823 to IRS, including different timing of submissions, reporting on all violations (whether minor or corrected during inspections) or not, and amounts of additional detail provided. Partly because of these different practices, the number of forms each of the nine agencies told us they sent to IRS in 2013 varied from 1 to more than 1,700. We concluded that without IRS clarification of when to send in the Form 8823, allocating agencies will continue to submit inconsistent noncompliance data to IRS, which will make it difficult for IRS to efficiently distinguish between minor violations and severe noncompliance, such as properties with health and safety issues. We recommended that IRS should clarify what to submit and when—in collaboration with the allocating agencies and Treasury—to help IRS improve the quality of the noncompliance information it receives and help ensure that any new guidance is consistent with Treasury regulations. In August 2016, IRS stated it would review the Form 8823 Audit Technique Guide to determine whether additional guidance and clarification were needed for allocating agencies to report noncompliance information on the form. 
If published legal guidance is required, IRS stated that it will submit a proposal for such guidance for prioritization. IRS indicated an expected implementation date by November 2017. In addition, in March 2017, officials stated that IRS Counsel attended an industry conference with allocating agencies at which issues related to the Form 8823 were discussed. Lack of participation in data initiative. Moreover, in our 2016 report we found IRS had not taken advantage of the important progress HUD made through the Rental Policy Working Group (working group)—which was established to better align the operation of federal rental policies across the administration—to augment its databases with LIHTC property inspection data. This data collection effort created opportunities for HUD to share inspection data with IRS that could improve the effectiveness of reviews for LIHTC noncompliance. However, the IRS Small Business/Self-Employed Division managing the LIHTC program had not been involved in the working group. We concluded that such involvement would allow IRS to leverage existing resources, augment its information on noncompliance, and better understand the prevalence of noncompliance. We recommended that staff from the division participate in the physical inspection initiative of the working group and also recommended that the IRS Commissioner evaluate how IRS could use HUD’s real estate database, including how the information might be used to reassess reporting categories on Form 8823 and reassess which categories of noncompliance information to review for audit potential. As of March 2017, IRS had implemented our recommendation to include the appropriate staff at the working group meetings. However, IRS officials stated that since HUD’s database with property inspection data was not complete as of March 2017 and contained data from 30 states, it was unclear how the database could be used. 
IRS officials said they would continue exploring the HUD database if the data for all LIHTC properties were included and it was possible to isolate the LIHTC property data from other rental properties in the HUD database.

Leveraging Experience of HUD May Augment IRS's Capacity to Oversee Program

Both our 2015 and 2016 reports found that opportunities existed to enhance oversight of the LIHTC program, specifically by leveraging the knowledge and experience of HUD. We found in 2015 that while LIHTC is the largest federal program for increasing the supply of affordable rental housing, LIHTC is a peripheral program in IRS in terms of resources and mission. Oversight responsibilities for the program include monitoring allocating agencies and taxpayer compliance. However, as we have discussed previously, IRS oversight has been minimal and IRS has captured and used little program information. As we previously stated, such information could help program managers and congressional decision makers assess the program's effectiveness. HUD, which has a housing mission, collects and analyzes information on low-income rental housing, including LIHTC-funded projects. As we reported in 2015, HUD's role in the LIHTC program is generally limited to the collection of information on tenant characteristics (mandated by the Housing and Economic Recovery Act of 2008). However, it has voluntarily collected project-level information on the program since 1996 because of the importance of LIHTC as a source of funding for affordable housing. HUD also has sponsored studies of the LIHTC program that use these data. HUD's LIHTC databases, the largest federal source of information on the LIHTC program, aggregate project-level data that allocating agencies voluntarily submit and information on tenant characteristics that HUD must collect. Since 2014, HUD also has published annual reports analyzing data it must collect on tenants residing in LIHTC properties.
As part of this report, HUD compares property information in its tenant database to the information in its property database to help assess the completeness of both databases. In our 2015 report, we also discussed HUD’s experience in working with allocating agencies. While multiple federal agencies administer housing- related programs, HUD is the lead federal agency for providing affordable rental housing. Much like LIHTC, HUD’s rental housing programs rely on state and local agencies to implement programs. HUD is responsible for overseeing these agencies, including reviewing state and local consolidated plans for the HOME Investment Partnership and Community Development Block Grant programs—large grant programs that also are used to fund LIHTC projects. HUD also has experience in directly overseeing allocating agencies in their roles as contract administrators for project-based Section 8 rental assistance. HUD has processes, procedures, and staff in place for program evaluation and oversight of state and local agencies that could be built upon and strengthened. In our 2015 report, we concluded that significant resource constraints affected IRS’s ability to oversee taxpayer compliance and precluded wide-ranging improvement to such functions, but that IRS still had an opportunity to enhance oversight of LIHTC. We also concluded that leveraging the experience and expertise of another agency with a housing mission, such as HUD, might help offset some of IRS’s limitations in relation to program oversight. HUD’s existing processes and procedures for overseeing allocating agencies could constitute a framework on which further changes and improvements in LIHTC could be effected. However, enhancing HUD’s role could involve additional staff and other resources. An estimate of potential costs and funding options for financing enhanced federal oversight of the LIHTC program would be integral to determining an appropriate funding mechanism. 
We asked that Congress consider designating HUD as a joint administrator of the program responsible for oversight. As part of the deliberation, we suggested that Congress direct HUD to estimate the costs to monitor and perform the additional oversight responsibilities, including a discussion of funding options. Treasury agreed that it would be useful for HUD to receive ongoing responsibility for, and resources to perform, research and analysis on the effectiveness of LIHTCs in increasing the availability of affordable rental housing. Treasury noted that such research and analysis are not part of IRS’s responsibilities or consistent with its expertise in interpreting and enforcing tax laws. However, Treasury stated that responsibility for interpreting and enforcing the code should remain entirely with IRS. Our report noted that if program administration were changed, IRS could retain certain key responsibilities consistent with its tax administration mission. In our 2016 report, we concluded that IRS oversight of allocating agencies continued to be minimal, particularly in reviewing QAPs and allocating agencies’ practices for awarding discretionary basis boosts. As a result, we reiterated the recommendation from our 2015 report that Congress should consider designating HUD as a joint administrator of the program responsible for oversight due to its experience and expertise as an agency with a housing mission. In response to our 2016 report, HUD stated it remains supportive of mechanisms to use its significant expertise and experience administering housing programs for enhanced effectiveness of LIHTC. HUD also stated that enhanced interagency coordination could better ensure compliance with fair housing requirements and improve alignment of LIHTC with national housing priorities. As of July 2017, Congress had not enacted legislation to give HUD an oversight role for LIHTC. 
Chairman Hatch, Ranking Member Wyden, and Members of the Committee, this concludes my prepared statement. I would be happy to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments For further information about this testimony, please contact me at 202-512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Nadine Garrick Raidbard, Assistant Director; Anar N. Jessani, Analyst in Charge; William R. Chatlos; Farrah Graham; Daniel Newman; John McGrail; Barbara Roesmann; and MaryLynn Sergent. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The LIHTC program, established under the Tax Reform Act of 1986, is the largest source of federal assistance for developing affordable rental housing and will represent an estimated $8.5 billion in forgone revenue in 2017. LIHTC encourages private-equity investment in low-income rental housing through tax credits. The program is administered by IRS and allocating agencies, which are typically state or local housing finance agencies established to meet affordable housing needs of their jurisdictions. Responsibilities of allocating agencies (in Section 42 of the Internal Revenue Code and regulations of the Department of the Treasury) encompass awarding credits, assessing the reasonableness of project costs, and monitoring projects. In this testimony, GAO discusses (1) how allocating agencies implement federal requirements for awarding LIHTCs, assess reasonableness of property costs, and monitor properties' ongoing compliance; and (2) IRS oversight of the LIHTC program. This statement is based primarily on three reports GAO issued in July 2015 (GAO-15-330), May 2016 (GAO-16-360), and February 2017 (GAO-17-285R). GAO also updated the status of recommendations made in these reports by reviewing new or revised IRS policies, procedures, and reports and interviewing IRS officials.

What GAO Found

In its May 2016 report on the Low-Income Housing Tax Credit (LIHTC) program of the Internal Revenue Service (IRS), GAO found that state and local housing finance agencies (allocating agencies) implemented requirements for allocating credits, reviewing costs, and monitoring projects in varying ways. Moreover, some allocating agencies' day-to-day practices to administer LIHTCs also raised concerns.
For example, qualified allocation plans (developed by 58 allocating agencies) that GAO analyzed did not always mention all selection criteria and preferences that Section 42 of the Internal Revenue Code requires; and allocating agencies could increase (boost) the eligible basis used to determine allocation amounts for certain buildings if needed for financial feasibility. However, they were not required to document the justification for the increases. The criteria used to award boosts varied, with some allocating agencies allowing boosts for specific types of projects and one allowing boosts for all projects in its state.

In its 2015 and 2016 reports, GAO found IRS oversight of the LIHTC program was minimal. Additionally, IRS collected little data on or performed limited analysis of compliance in the program. Specifically, GAO found that

- Since 1986, IRS conducted seven audits of the 58 allocating agencies we reviewed. Reasons for the minimal oversight may include LIHTC being viewed as a peripheral program in IRS in terms of its mission and priorities for resources and staffing.
- IRS had not reviewed the criteria allocating agencies used to award discretionary basis "boosts," which raised concerns about oversubsidizing projects (and reducing the number of projects funded).
- IRS guidance to allocating agencies on reporting noncompliance was conflicting. As a result, allocating agencies' reporting of property noncompliance was inconsistent.
- IRS had not participated in and leveraged the work of the physical inspection initiative of the Rental Policy Working Group—established to better align the operations of federal rental assistance programs—to augment its databases with physical inspection data on LIHTC properties that the Department of Housing and Urban Development (HUD) maintains.

In its prior reports, GAO made a total of four recommendations to IRS. As of July 2017, IRS had implemented one recommendation to include relevant IRS staff in the working group.
IRS has not implemented the remaining three recommendations, including improving the data quality of its LIHTC database, clarifying guidance to agencies on reporting noncompliance, and evaluating how the information HUD collects could be used for identifying noncompliance issues. In addition, because of the limited oversight of LIHTC, in its 2015 report GAO asked that Congress consider designating certain oversight responsibilities to HUD because the agency has experience working with allocating agencies and has processes in place to oversee the agencies. As of July 2017, Congress had not enacted legislation to give HUD an oversight role for LIHTC.
Background

Since September 2012, CMS has subjected selected items and services to prior authorization and pre-claim reviews—a process similar to prior authorization where review takes place after services have begun—through four fixed-length demonstrations and a permanent program. The prior authorization demonstrations are for certain power mobility devices, repetitive scheduled non-emergency ambulance services, non-emergency hyperbaric oxygen therapy, and home health services; the permanent program is for certain durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) items. By using prior authorization, CMS generally seeks to reduce expenditures, unnecessary utilization, and improper payments, although specific objectives for the programs vary based on the statutory authority CMS used to initiate each.

Medicare Prior Authorization Programs

Power mobility devices demonstration: In September 2012, CMS implemented prior authorization for certain scooters and power wheelchairs, items the agency has identified with historically high levels of fraud and improper payments, for Medicare beneficiaries in seven states: California, Florida, Illinois, Michigan, New York, North Carolina, and Texas. The demonstration, established under Section 402(a) of the Social Security Amendments of 1967, is intended to develop or demonstrate improved methods for the investigation and prosecution of fraud in providing care or services under Medicare. In October 2014, CMS expanded the demonstration to 12 additional states: Arizona, Georgia, Indiana, Kentucky, Louisiana, Maryland, Missouri, New Jersey, Ohio, Pennsylvania, Tennessee, and Washington. CMS also extended the program, which was originally scheduled to end in 2015, until August 2018. CMS officials reported that since the prior authorization programs' implementation, the agency made more than 100 referrals to its contractors that investigate fraud.
However, due to the length of time fraud investigations typically take, results from these referrals are not yet available.

Repetitive scheduled non-emergency ambulance services demonstration: In December 2014, CMS implemented prior authorization for repetitive scheduled non-emergency ambulance services in selected states. CMS later extended the program, which was originally scheduled to end in 2017, through November 2018.

Non-emergency hyperbaric oxygen therapy demonstration: In March 2015, CMS implemented prior authorization for non-emergency hyperbaric oxygen therapy in three states the agency has identified with high utilization and improper payment rates, based on the therapy facility's location: Illinois, Michigan, and New Jersey. Medicare covers hyperbaric oxygen therapy for certain conditions, such as diabetic wounds of the lower extremities, after there have been 30 days of no measurable signs of healing during standard wound care treatment. According to CMS, previous experience indicates that hyperbaric oxygen therapy has a high potential for improper payments and raises concerns about beneficiaries receiving medically unnecessary care. The demonstration, established under Section 1115A of the Social Security Act, is intended to reduce expenditures while preserving or enhancing quality of care. The demonstration ended in February 2018.

Home health services demonstration: In August 2016, CMS implemented prior authorization for home health services in Illinois. The demonstration, established under Section 402(a) of the Social Security Amendments of 1967, is intended to develop or demonstrate improved methods for the investigation and prosecution of fraud in providing care or services under Medicare. The demonstration was originally scheduled to incorporate other states the agency has identified with high rates of fraud and abuse: Florida, Massachusetts, Michigan, and Texas. However, as of April 2017, CMS paused the demonstration while it considered making improvements. As of February 2018, the demonstration has not resumed.
Permanent DMEPOS program: In December 2015, CMS established a permanent prior authorization program for certain DMEPOS items under Section 1834(a)(15) of the Social Security Act. This program aims to reduce unnecessary utilization for certain DMEPOS items. To select the items that would be subject to prior authorization, CMS compiled a Master List of items that 1) appear on the DMEPOS Fee Schedule list, 2) have an average purchase fee of $1,000 or greater (adjusted annually for inflation) or an average rental fee schedule of $100 or greater (adjusted annually for inflation), and 3) meet one of these two criteria: the item was identified in a GAO or HHS Office of Inspector General report that is national in scope and published in 2007 or later as having a high rate of fraud or unnecessary utilization, or the item is listed in the 2011 or later published Comprehensive Error Rate Testing program’s annual report. CMS may choose specific items from this Master List to include on the required prior authorization list, and there is no set end date for requiring prior authorization for those items. CMS may suspend prior authorization for those items at any time. (See app. I for the items on the Master List.) In March 2017, CMS began requiring prior authorization for two types of group 3 power wheelchairs from the Master List for beneficiaries with a permanent address in selected states (Illinois, Missouri, New York, and West Virginia) and expanded the program nationwide in July 2017. As of February 2018, CMS has not identified any other items from the Master List for prior authorization. See figure 1 for each prior authorization program’s implementation and end dates. Medicare Prior Authorization Process MACs that administer the prior authorization programs review prior authorization requests for items and services, along with supporting documentation, to determine whether all applicable Medicare coverage and payment rules have been met. 
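The three Master List criteria described above (fee-schedule listing, the purchase or rental fee threshold, and a flag from a GAO/OIG report or the Comprehensive Error Rate Testing report) can be expressed as a simple filter. This is an illustrative sketch only; the field names and sample values are hypothetical, not CMS's actual data schema.

```python
def master_list_eligible(item):
    """Return True if an item meets the three Master List criteria
    described in the text (field names are hypothetical)."""
    on_schedule = item["on_dmepos_fee_schedule"]
    # Dollar thresholds are adjusted annually for inflation; base values shown.
    price_test = (item["avg_purchase_fee"] >= 1000
                  or item["avg_rental_fee"] >= 100)
    flagged = item["flagged_by_gao_oig"] or item["in_cert_report"]
    return on_schedule and price_test and flagged

wheelchair = {"on_dmepos_fee_schedule": True, "avg_purchase_fee": 4200,
              "avg_rental_fee": 0, "flagged_by_gao_oig": True,
              "in_cert_report": False}
cheap_supply = {"on_dmepos_fee_schedule": True, "avg_purchase_fee": 50,
                "avg_rental_fee": 0, "flagged_by_gao_oig": True,
                "in_cert_report": False}

print(master_list_eligible(wheelchair))   # True
print(master_list_eligible(cheap_supply)) # False: fails the fee threshold
```

Note that meeting the criteria only places an item on the Master List; CMS separately chooses which listed items to subject to prior authorization.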
CMS expects requests for prior authorization to include all documentation necessary to show that coverage requirements have been met, for example, face-to-face examination documentation or the detailed product description. The referring physician—or the physician who conducts the face-to-face examination of the beneficiary and orders the item or service—provides this documentation to a provider or supplier who subsequently furnishes the item or service. According to multiple MACs' officials, the provider or supplier who furnishes the item or service typically submits the prior authorization request. CMS has specified that MACs review initial prior authorization requests and make a determination within 10 business days. MACs make one of the following decisions:

Provisionally affirm (approve) – Documentation submitted meets Medicare's coverage and payment rules. A prior authorization provisional affirmation is a preliminary finding that a future claim submitted to Medicare for the item or service meets Medicare's coverage and payment requirements and will likely be paid.

Non-affirm (deny) – Documentation submitted does not meet Medicare rules or the item or service is not medically necessary. However, a non-affirmed request may be revised and resubmitted for review an unlimited number of times prior to the submission of the claim for payment. CMS has specified that MACs make a determination on a resubmission within 20 business days.

For the demonstrations, claims that are submitted without a prior authorization provisional affirmation are subject to prepayment review, which is medical review before the claim is paid. In addition, for the home health services and power mobility devices demonstrations, claims submitted without a prior authorization provisional affirmation that are determined payable during the medical review will be subject to a 25 percent reduction in payment.
For the permanent program, claims that are submitted without a prior authorization provisional affirmation are denied. (See fig. 2 for the prior authorization process.) As of March 31, 2017, MACs had processed over 337,000 prior authorization requests—about 264,000 initial requests and about 73,000 resubmissions, as shown in table 1. MACs’ provisional affirmation rates for both initial and resubmitted prior authorization requests rose in each demonstration between their implementation and March 2017, often by 10 percentage points or more. For example, the provisional affirmation rate for initial submissions for repetitive scheduled non-emergency ambulance services rose from 28 percent in the first 6 months after implementation (December 2014 through May 2015) to 66 percent in the most recent 6 months for which data are available (October 2016 through March 2017). Some MAC officials attributed this rise in part to provider and supplier education, which improved documentation being submitted. Medicare Expenditures Decreased After Prior Authorization Began in Four Demonstrations Expenditures Decreased After Prior Authorization Began and Estimated Savings May be as High as About $1.1 to $1.9 Billion, with Most Occurring Soon After Implementation According to our analysis, expenditures decreased for items and services subject to prior authorization in four demonstrations. For example, expenditure decreases in initial demonstration states from implementation through March 2017 ranged from 17 percent to 74 percent. Figure 3 shows the average monthly expenditures per state from 6 months prior to the start of each demonstration through March 2017 for each of three groups of states: states that were part of the initial demonstration, states that were part of the demonstration expansion, and non-demonstration states. (See app. II for monthly expenditures for items and services covered under each demonstration from their implementation through March 2017.) 
Our analysis also shows potential savings for items and services subject to prior authorization, based on the difference between actual expenditures and estimates of what expenditures would have been in the absence of the demonstrations. For each demonstration, we estimated what expenditures would have been had the demonstration not been implemented by assuming that expenditures would have remained at their average for the 6 months prior to the demonstration starting in each state. We then compared actual expenditures to these estimated expenditures and found that potential savings could be as high as about $1.1 to $1.9 billion. Estimated potential savings in states that were part of the demonstrations since either their initial implementation or expansion may be as high as $1.1 billion. For items and services subject to prior authorization in these states, estimated expenditures in the absence of the demonstrations would have been over $2.1 billion, while actual expenditures were about $1.0 billion. Estimated potential savings may be as high as about $1.9 billion if, for the power mobility device demonstration, we estimate savings in both demonstration states and non-demonstration states since implementation. With this method, estimated savings since the power mobility device demonstration’s implementation change from over $600 million in demonstration states since each state’s implementation to about $1.4 billion in all states since the demonstration began in September 2012, a nearly $800 million increase. This increase is due to including non-demonstration states in the analysis and changing the assumptions for expanded demonstration states in the analysis.
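The counterfactual arithmetic described above—hold spending at the pre-demonstration 6-month average, project it over the post-implementation months, and subtract actual spending—can be sketched in a few lines. The function name and all dollar figures below are hypothetical illustrations, not GAO's data or tooling:

```python
# Illustrative sketch of the savings-estimation method described above.
# The function name and all dollar figures are hypothetical, not GAO's data.

def estimated_savings(pre_monthly, actual_monthly):
    """Counterfactual savings: assume spending would have stayed at the
    average of the 6 months before the demonstration began, then subtract
    actual post-implementation spending."""
    baseline = sum(pre_monthly[-6:]) / 6             # pre-demonstration monthly average
    counterfactual = baseline * len(actual_monthly)  # projected spending, no demonstration
    return counterfactual - sum(actual_monthly)

# Hypothetical state: roughly $40M/month beforehand; spending falls afterward.
pre = [41, 39, 40, 42, 38, 40]                           # 6 months prior ($M)
post = [30, 24, 20, 18, 17, 16, 15, 15, 14, 14, 13, 13]  # 12 months after ($M)
print(estimated_savings(pre, post))  # 480.0 counterfactual - 209 actual -> 271.0
```

As the report notes, an estimate of this kind attributes the entire expenditure difference to the demonstration, so concurrent program integrity efforts would tend to inflate it.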
CMS officials have reported that certain power mobility device expenditures have declined significantly in both demonstration states and non-demonstration states in part because they think that larger nationwide suppliers improved their compliance with CMS policies in all states based on their experiences with prior authorization. CMS did not make a similar statement for the other demonstrations, and in December 2017, CMS officials said that the agency has not analyzed expenditures in non-demonstration states for the other demonstrations. See table 2 for estimated potential savings for prior authorization demonstrations from implementation through March 2017. According to our analysis, more than half of the reduction in monthly expenditures took place within the first 6 months of each demonstration. We calculated the average monthly expenditures for the 6 months prior to the start of each demonstration, the monthly expenditures in the 6th month after implementation, and the monthly expenditures in March 2017 for initial demonstration states in the power mobility device, repetitive scheduled non-emergency ambulance services, and non-emergency hyperbaric oxygen therapy demonstrations. We compared these expenditures and found that 58, 99, and 91 percent of the reduction in monthly expenditures during this time took place during the first 6 months of each demonstration, respectively.

Other CMS Efforts May Have Contributed to Expenditure Reductions

CMS had other program integrity efforts underway before implementing prior authorization, and these efforts may have also contributed to the reduction in expenditures for items and services subject to prior authorization in these demonstrations. CMS officials said that it is likely that prior authorization played a large role in the expenditure reduction for those select items and services.
However, CMS officials also reported that it is difficult to separate the effects of prior authorization from other program integrity efforts, and the agency has not developed a methodology for determining the independent effect of prior authorization on expenditures. We found that some of these other program integrity efforts have addressed provider screening and enrollment and certain durable medical equipment, and these may have contributed to the reductions in Medicare expenditures. Provider screening and enrollment: CMS has taken steps to keep potentially fraudulent providers and suppliers from billing Medicare. For example, in September 2011, CMS began revalidating providers’ and suppliers’ enrollment in Medicare to ensure that they continue to be eligible to bill Medicare. Revalidation involves confirming that the provider or supplier continues to meet Medicare program requirements, including ensuring that a provider or supplier does not employ or contract with individuals who have been excluded from participation in federal health care programs. We previously reported that screening all providers and suppliers—not just the ones subject to prior authorization—resulted in over 23,000 new applications being denied or rejected and over 703,000 existing enrollment records being deactivated or revoked from March 2011 through December 2015. We also reported that CMS estimated the revised process avoided $2.4 billion in total Medicare payments to ineligible providers and suppliers from March 2011 to May 2015, some of which may have been payments for items and services subject to prior authorization. In July 2013, CMS implemented moratoria on enrollment of new providers for home health services and for repetitive, scheduled non-emergency ambulance transport in select counties.
As of January 2018, CMS had extended the home health services moratoria statewide to Florida, Illinois, Michigan, and Texas and the repetitive, scheduled non-emergency ambulance transport moratoria statewide to Pennsylvania and New Jersey. During a moratorium, no new applications to enroll as a billing provider of the affected service types are reviewed or approved. In October 2017, CMS officials said that home health and non-emergency ambulance services’ expenditures may have been affected by provider enrollment moratoria. Certain durable medical equipment pricing, payments, and education and outreach: CMS has taken steps to change how certain durable medical equipment is paid for and to provide ongoing durable medical equipment education and outreach. For example, in January 2011, CMS implemented a DMEPOS competitive bidding program required by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003. Under the program, only competitively selected contract suppliers can furnish certain durable medical equipment items at competitively determined prices to Medicare beneficiaries in designated areas. CMS began the program in 9 of the largest metropolitan areas, and in July 2013 expanded to an additional 100 large metropolitan areas. In January 2016, CMS implemented competitive bidding program-based adjusted prices for non-designated areas for durable medical equipment items that were previously, or are currently, included in the competitive bidding program. According to CMS, the program—which generally results in lower competitively bid prices—is reducing expenditures for approximately half of the beneficiaries receiving power mobility devices nationwide. We have previously reported that prices decreased for power mobility devices in the competitive bidding program; some of these devices are also subject to prior authorization. In January 2011, CMS eliminated the lump sum purchase option for standard power wheelchairs.
This change reduced expenditures for power wheelchairs used on a short-term basis because payments for short-term rentals are lower than for the purchase of these items. Durable medical equipment MACs and CMS provide continuous DMEPOS education and outreach. According to CMS, the education and outreach may have contributed to reducing expenditures for power mobility devices by helping providers and suppliers to understand how to bill correctly and to submit fewer claims that do not meet Medicare coverage and payment requirements.

Providers and Suppliers Reported that Prior Authorization Is an Effective Tool, but They Face Difficulty Obtaining Documentation, and Concerns Exist for One Program

Many Providers and Suppliers Reported Prior Authorization Benefits, and CMS Has Addressed Some of Their Initial Concerns

Many of the officials we interviewed representing provider, supplier, and beneficiary groups, as well as CMS and MACs, reported benefits to prior authorization. Officials from some of these groups said that prior authorization is an effective tool to reduce unnecessary utilization and improper payments. Some officials who reported benefits said that prior authorization helps educate providers and suppliers about allowable items and services under Medicare and improves providers’ and suppliers’ documentation. Some officials also said that providers and suppliers appreciate the assurance of knowing that Medicare is likely to pay for these items and services. Officials from three provider and supplier groups said that by getting provisional prior authorization, their claims will likely not be denied, and they can thus avoid the appeals process, for which there are significant delays. In addition, officials from two provider and supplier groups believe that prior authorization may deter fraudulent suppliers from participating in Medicare.
Because of these benefits, these provider and supplier group officials recommended that CMS expand its use of prior authorization. In addition, CMS has improved the prior authorization programs by responding to some of the providers’ and suppliers’ initial concerns. For example, for the power mobility device demonstration, CMS and MAC officials that process DMEPOS claims reported that providers and suppliers were initially confused about whether beneficiaries with representative payees—persons or organizations authorized to accept payment on a beneficiary’s behalf—were exempt from the prior authorization program. To address this issue, CMS revised and clarified its guidance related to representative payees. In addition, for the non-emergency hyperbaric oxygen therapy demonstration, officials from CMS and a MAC administering the demonstration said that providers and suppliers raised concerns that a Medicare-covered condition (compromised skin grafts) included in the demonstration required immediate care and therefore should not be subject to prior authorization. In response, CMS removed the condition from the list of conditions subject to prior authorization.

Providers and Suppliers Report Difficulty Obtaining Documentation for Prior Authorization Requests, and CMS Is Taking Steps to Address This Challenge

Some provider and supplier group officials we interviewed reported that obtaining the documentation necessary to submit a prior authorization request can be difficult. For example, some of these officials told us that providers and suppliers may spend 3 to 7 weeks obtaining necessary documentation from referring physicians and other relevant parties before submitting a prior authorization request.
While CMS’s documentation requirements did not change under prior authorization, officials from a provider and supplier group we spoke with said that prior authorization exacerbates existing documentation challenges because they must obtain all required documentation before providing items and services to beneficiaries. As we noted in a previous report, two durable medical equipment MACs said that referring physicians may lack financial incentives to submit proper documentation since they are unaffected if a durable medical equipment or home health claim is denied due to insufficient documentation, while the provider or supplier submitting the claim loses the payment. Furthermore, according to some provider and supplier group representatives, CMS’s documentation requirements can be difficult to meet. Representatives from one provider and supplier group said that there is a high standard of proof for the information needed to support medical necessity requirements. For example, documentation in the medical record is required to show whether the referring physician considered other options. Also, representatives from another provider and supplier group said that, unlike private insurers, CMS has more requirements that providers and suppliers consider administrative. For instance, MACs deny prior authorization requests for missing physician signatures. In addition, representatives from a provider and supplier group said it may be necessary to collect documentation from multiple providers that treated the beneficiary in order to meet CMS’s medical necessity requirements. However, officials from one private insurer said that their medical necessity requirements for certain items and services may necessitate receiving documentation from several providers as well, although this does not occur often. CMS officials acknowledged that the agency’s requirements may be more difficult to meet than those of private health insurers.
However, this scrutiny may be beneficial because, unlike private insurers, Medicare must pay for health care delivered by any eligible physician willing to accept Medicare payment and follow Medicare requirements. We found that CMS and the MACs have taken some steps to assist providers and suppliers in obtaining documentation from referring physicians. For example, CMS has created e-clinical templates for home health services and power mobility devices that can be incorporated into progress notes to help ensure physicians meet medical necessity requirements. CMS and the MACs have also created documentation checklists, prior authorization coversheets, and other tools to assist providers and suppliers in verifying that they have obtained the documentation necessary to meet CMS’s documentation requirements. Agency officials have stated that they are working on additional changes to reduce provider and supplier burden, for example, developing e-clinical templates for additional items and services. Furthermore, representatives from each of the MACs said that they call providers and suppliers that receive certain prior authorization non-affirmations to ensure suppliers and providers understand what information is required to obtain a provisional affirmation. Some MAC representatives said that having a phone conversation with suppliers allows them to resolve non-affirmations more expediently and reduces the number of resubmissions. Representatives from one MAC estimated that when they call providers and suppliers, they are able to resolve 50 to 80 percent of the issues that led to the non-affirmations. Several MAC representatives also said calling helps providers and suppliers gain a better understanding of CMS’s documentation requirements, which will increase their likelihood of having future requests provisionally affirmed.
In addition, CMS officials said that the agency encourages MACs to call referring physicians directly, when necessary, to remedy curable errors or obtain additional documentation needed to affirm a request because non-affirmation may be resolved faster without providers and suppliers serving as intermediaries.

Providers and Suppliers Report Concerns about Whether the Permanent DMEPOS Program Includes Essential Accessories

Providers and suppliers reported concerns about whether accessories deemed essential to group 3 power wheelchairs are subject to prior authorization and can be provisionally affirmed under the permanent DMEPOS program. According to CMS, the permanent DMEPOS program requires prior authorization for power wheelchair bases, but not for their accessories. CMS officials said this is because accessories do not meet the criteria for inclusion on the Master List. However, according to CMS, the MACs must review these accessories when they make prior authorization determinations because their decision to provisionally affirm a wheelchair base is based in part on their view of the medical necessity of the accessories. Therefore, if an essential accessory does not meet medical necessity requirements, a MAC will deny a prior authorization request for a power wheelchair base. In other words, in practice these accessories are subject to prior authorization, even though they are not technically included in the permanent DMEPOS program and therefore cannot be provisionally affirmed. As a result, providers and suppliers lack assurance about whether Medicare is likely to pay for these accessories. In December 2017, CMS officials stated that there have been preliminary discussions regarding the feasibility and effect of subjecting accessories essential to the group 3 power wheelchairs in the permanent DMEPOS program to prior authorization. However, CMS officials did not provide a timeframe for reaching a decision about whether they would do so.
Federal internal control standards state that agencies should design control activities that enable an agency to achieve its objectives and should respond to any risks related to achieving those objectives. By not including essential accessories in prior authorization so they can be provisionally affirmed as appropriate, CMS may hinder its ability to achieve one of the stated benefits of the prior authorization program—to allow providers and suppliers to know prior to providing the items whether Medicare will likely pay for them.

CMS Monitors Prior Authorization But Has Not Made Plans for Prior Authorization in the Future

CMS Monitors Prior Authorization and Has Contracted for Evaluations of the Demonstrations

We found that CMS monitoring includes reviewing MAC reports of the results of prior authorization requests, examining MAC timeliness and accuracy, and contracting for independent evaluations of the prior authorization demonstrations. CMS officials told us that they review weekly, monthly, and annual MAC reports that include information such as numbers of requests received, completed, approved, denied, and resubmitted. According to CMS officials, they monitor MAC timeliness through these reports and separately have a contractor review MAC accuracy in processing requests. According to these officials, they have not identified any issues with MAC timeliness, as the MACs currently meet the standards for processing initial requests within 10 business days and resubmissions within 20 business days. In addition, CMS officials said that a sample of MACs’ prior authorization decisions is reviewed each month for accuracy for each of the prior authorization demonstrations, and the reviews have not identified any issues with these decisions.
CMS officials said that they meet with provider and supplier groups regularly to solicit feedback, to identify issues that need to be addressed, and to determine whether there are any problems, such as reduced beneficiary access to care. According to CMS officials, they have not identified any negative impact on beneficiary access to care as a result of implementing prior authorization. CMS has contracted for independent evaluations of the power mobility device, repetitive scheduled non-emergency ambulance services, and non-emergency hyperbaric oxygen demonstrations. In December 2017, CMS officials told us that evaluations will be completed and results available after the demonstrations end. In December 2017, officials told us that they also plan to contract for an evaluation of the permanent program after more time has passed.

Although Most Prior Authorization Is Scheduled to End in 2018, CMS Does Not Have Plans to Continue Efforts

Most prior authorization programs are scheduled to end in 2018, with all the demonstrations concluding and only the limited permanent program remaining. The non-emergency hyperbaric oxygen demonstration ended in February 2018, the power mobility device demonstration ends in August 2018, and the repetitive scheduled non-emergency ambulance services demonstration ends in November 2018. The home health services demonstration has been on pause since April 2017 with no plans to resume as of February 2018, although CMS stated that it is considering improvements to the demonstration. The permanent program, which currently consists of two group 3 power wheelchairs, is the only prior authorization program that will remain. According to CMS officials, these wheelchairs are very low volume, and the HHS Office of the Inspector General reported that these wheelchairs represent just a small percentage of all durable medical equipment claims.
CMS has not made plans for continuing expiring or paused prior authorization programs or expanding prior authorization. However, officials told us that they would like to see prior authorization for additional items. For example, CMS officials said that they have considered prior authorization for items such as hospital beds and oxygen concentrators, because these have high utilization or improper payment rates. In addition, in December 2017, CMS officials said that the agency is evaluating whether it has met the requirements for nationwide expansion of the repetitive scheduled non-emergency ambulance services demonstration established by the Medicare Access and CHIP Reauthorization Act of 2015. However, CMS officials also said that they have not yet determined the next steps for the use of prior authorization. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving objectives. By not taking steps, based on results from the evaluations, to continue prior authorization, CMS risks missed opportunities for achieving its stated goals of reducing costs and realizing program savings by reducing unnecessary utilization and improper payments.

Conclusions

Since September 2012, CMS has begun using prior authorization to ensure that Medicare coverage and payment rules have been met before the agency pays for selected items and services. During this time, expenditures for items and services subject to prior authorization have been reduced. We estimate potential savings may be as high as about $1.1 to $1.9 billion, although other CMS program integrity efforts may have contributed to these reductions. Many stakeholders, including providers, suppliers, and MAC officials, support prior authorization, citing benefits such as reduced unnecessary utilization.
However, providers and suppliers report concerns about whether accessories deemed essential to group 3 power wheelchairs are subject to prior authorization and can be provisionally affirmed. By not including essential accessories in prior authorization, CMS may hinder its ability to achieve one of the stated benefits of the prior authorization program—to allow providers and suppliers to know prior to providing the items whether Medicare will likely pay for them. All four prior authorization demonstrations are either paused or will end in 2018, and CMS does not have plans to extend these programs or expand the use of prior authorization to additional items and services with high rates of unnecessary utilization or improper payments. By not taking steps, based on results from the evaluations, to continue prior authorization, CMS risks missed opportunities for achieving its stated goals of reducing costs and realizing program savings by reducing unnecessary utilization and improper payments.

Recommendations for Executive Action

We are making the following two recommendations to CMS. The Administrator of CMS should subject accessories essential to the group 3 power wheelchairs in the permanent DMEPOS program to prior authorization. (Recommendation 1) The Administrator of CMS should take steps, based on results from evaluations, to continue prior authorization. These steps could include: resuming the paused home health services demonstration; extending current demonstrations; or identifying new opportunities for expanding prior authorization to additional items and services with high unnecessary utilization and high improper payment rates. (Recommendation 2)

Agency Comments

We provided a draft of this report to HHS for comment, and its comments are reprinted in appendix III. HHS also provided technical comments, which we incorporated as appropriate.
HHS neither agreed nor disagreed with the recommendations but said it would continue to evaluate prior authorization programs and take our findings and recommendations into consideration in developing plans or determining appropriate next steps. In addition, in response to our recommendation to take steps to continue prior authorization, HHS noted that the President’s fiscal year 2019 budget for HHS included a legislative proposal to extend its statutory authority to permanently require prior authorization for specified Medicare fee-for-service items and services to all Medicare fee-for-service items and services. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact A. Nicole Clowers at (202) 512-7114 or [email protected] or Kathleen M. King at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV.

Appendix I: List of Items That May Be Selected for Prior Authorization

In December 2015, the Centers for Medicare & Medicaid Services (CMS) established a permanent prior authorization program for certain durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS).
To select the items subject to prior authorization, CMS compiled a Master List of items that 1) appear on the DMEPOS Fee Schedule list, 2) have an average purchase fee of $1,000 or greater (adjusted annually for inflation) or an average rental fee schedule of $100 or greater (adjusted annually for inflation), and 3) meet one of these two criteria: the item was identified in a GAO or Department of Health and Human Services Office of Inspector General report that is national in scope and published in 2007 or later as having a high rate of fraud or unnecessary utilization, or the item is listed in the 2011 or later published Comprehensive Error Rate Testing program’s annual report. The information presented in this appendix was reprinted from information in a December 2015 final rule. We did not edit it in any way, such as to spell out abbreviations. (See table 3 for the Master List.)

Appendix II: Expenditure Data for Items and Services Subject to Prior Authorization

Tables 4 through 7 present monthly expenditures for items and services subject to prior authorization in initial demonstration states, expansion demonstration states, and non-demonstration states from 6 months prior to each demonstration’s implementation through March 2017, the most recent month for which reliable data are available.

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Martin T. Gahart (Assistant Director), Lori Achman (Assistant Director), Peter Mangano (Analyst-in-Charge), Sylvia Diaz Jones, and Mandy Pusey made key contributions to this report. Also contributing were Sam Amrhein, Muriel Brown, Eric Wedum, and Jennifer Whitworth.
Why GAO Did This Study

CMS required prior authorization as a demonstration in 2012 for certain power mobility devices, such as power wheelchairs, in seven states. Under the prior authorization process, MACs review prior authorization requests and make determinations to approve or deny them based on Medicare coverage and payment rules. Approved requests will be paid as long as all other Medicare payment requirements are met. GAO was asked to examine CMS's prior authorization programs. GAO examined 1) the changes in expenditures and the potential savings for items and services subject to prior authorization demonstrations, 2) reported benefits and challenges of prior authorization, and 3) CMS's monitoring of the programs and plans for future prior authorization. To do this, GAO examined prior authorization program data, CMS documentation, and federal internal control standards. GAO also interviewed CMS and MAC officials, as well as selected provider, supplier, and beneficiary groups.

What GAO Found

Prior authorization is a payment approach used by private insurers that generally requires health care providers and suppliers to first demonstrate compliance with coverage and payment rules before certain items or services are provided to patients, rather than after the items or services have been provided. This approach may be used to reduce expenditures, unnecessary utilization, and improper payments. The Centers for Medicare & Medicaid Services (CMS) has begun using prior authorization in Medicare through a series of fixed-length demonstrations designed to measure their effectiveness, and one permanent program. According to GAO's analyses, expenditures decreased for items and services subject to a demonstration. GAO's analyses of actual expenditures and estimated expenditures in the absence of the demonstrations found that estimated savings from all demonstrations through March 2017 could be as high as about $1.1 to $1.9 billion.
While CMS officials said that prior authorization likely played a large role in reducing expenditures, it is difficult to separate the effects of prior authorization from other program integrity efforts. For example, CMS implemented a durable medical equipment competitive bidding program in January 2011, and according to the agency, it resulted in lower expenditures. Many provider, supplier, and beneficiary group officials GAO spoke with reported benefits of prior authorization, such as reducing unnecessary utilization. However, provider and supplier group officials reported that providers and suppliers experienced some challenges. These include difficulty obtaining the necessary documentation from referring physicians to submit a prior authorization request, although CMS has created templates and other tools to address this concern. In addition, providers and suppliers reported concerns about whether accessories deemed essential to the power wheelchairs under the permanent durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) program are subject to prior authorization. In practice, Medicare Administrative Contractors (MACs) that administer prior authorization programs review these accessories when making prior authorization determinations, even though they are not technically included in the program and therefore cannot be provisionally affirmed. As a result, providers and suppliers lack assurance about whether Medicare is likely to pay for these accessories. This is contrary to a CMS-stated benefit of prior authorization—to provide assurance about whether Medicare is likely to pay for an item or service—and to federal internal control standards, which call for agencies to design control activities that enable an agency to achieve its objectives. CMS monitors prior authorization through various MAC reports.
CMS also reviews MAC accuracy and timeliness in processing prior authorization requests and has contracted for independent evaluations of the demonstrations. Currently, prior authorization demonstrations are scheduled to end in 2018. Despite its interest in using prior authorization for additional items, CMS has not made plans to continue its efforts. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving objectives. CMS risks missed opportunities for achieving its stated goals of reducing costs and realizing program savings by reducing unnecessary utilization and improper payments.

What GAO Recommends

GAO recommends that CMS (1) subject accessories essential to the power wheelchairs in the permanent DMEPOS program to prior authorization and (2) take steps, based on results from evaluations, to continue prior authorization. The Department of Health and Human Services neither agreed nor disagreed with GAO's recommendations but said it would continue to evaluate prior authorization programs and take GAO's findings and recommendations into consideration in developing plans or determining appropriate next steps.
Background Since January 2017, the Navy has suffered four significant mishaps at sea that have resulted in serious damage to Navy ships and the loss of 17 sailors (see figure 1). Three of the four at-sea mishaps that have occurred—two collisions and one grounding—have involved ships homeported overseas in Yokosuka, Japan. Appendix II provides a summary of major mishaps for Navy ships at sea in fiscal years 2009 through 2017. The Navy currently has 277 ships, a 17 percent reduction from the 333 ships it had in 1998. Over the past two decades, as the number of Navy ships has decreased, the number of ships deployed overseas has remained roughly constant at about 100 ships; consequently, each ship is being deployed more to maintain the same level of presence. We reported in September 2016 that the Navy, along with the other military services, had been reporting persistently low readiness levels. The Navy attributes these, in part, to the increased deployment lengths needed to meet the continuing high demand for its aircraft carriers, cruisers, destroyers, and amphibious ships. For example, the deployment lengths for carrier strike groups had increased from an average of 6.4 months from 2008 through 2011 to a less sustainable 9 months for three carrier strike groups that were deployed in 2015. In 2016, the Navy extended the deployments of the Harry S Truman and Theodore Roosevelt Carrier Strike Groups to 8 and 8.5 months, respectively. In addition, the Navy has had to shorten, eliminate, or defer training and maintenance periods to support these high deployment rates. These decisions have resulted in declining ship conditions across the fleet and have increased the amount of time required for the shipyards to complete maintenance on these ships. Lengthened maintenance periods, in turn, compress the time that ships are available for training and operations. 
Ships Homeported Overseas Provide Increased Forward Presence but Train Less, Defer More Maintenance, Degrade Faster, and Cost More to Operate As we previously reported, to help meet the operational demands using its existing inventory of ships, the Navy has assigned more of its surface combatants and amphibious ships to overseas homeports. Since 2006, the Navy has doubled the percentage of the fleet assigned to overseas homeports. In 2006, 20 ships were homeported overseas (7 percent of the fleet); today, 40 ships are homeported overseas (14 percent of the fleet) in Japan, Spain, Bahrain, and Italy; and an additional destroyer will be homeported in Yokosuka, Japan in 2018 (see figure 2). According to the Navy, homeporting ships overseas is an efficient method for providing forward presence and rapid crisis response. Our prior work confirms that having ships homeported overseas provides additional presence, but it comes at a cost. For example, we found in May 2015 that homeporting ships overseas results in higher operations and support costs than homeporting ships in the United States. In addition, the operational schedules the Navy uses for overseas-homeported ships limit dedicated training and maintenance periods, resulting in difficulty keeping crews fully trained and ships maintained. In fact, the primary reason that Navy ships homeported overseas provide more deployed time than ships homeported in the United States is that the Navy reduces their training and maintenance periods in order to maximize their operational availability. Ships homeported overseas do not operate within the traditional fleet response plan cycles that apply to U.S.-based ships. Since the ships are in permanent deployment status during their time homeported overseas, they do not have designated ramp-up and ramp-down maintenance and training periods built into their operational schedules (see figure 3). 
Navy officials told us that because the Navy expects these ships to be operationally available for the maximum amount of time, their intermediate and depot-level maintenance are executed through more frequent, shorter maintenance periods or deferred until after they return to a U.S. homeport—generally after 7 to 10 years overseas. In May 2015, we also found that high operational tempo for ships homeported overseas limits the time for crew training when compared with training time for ships homeported in the United States. Navy officials told us that U.S.-based crews are completely qualified and certified prior to deploying from their U.S. homeports, with few exceptions. In contrast, the high operational tempo of ships homeported overseas had resulted in what Navy personnel called a “train on the margins” approach, a shorthand way to say there was no dedicated training time set aside for the ships so crews trained while underway or in the limited time between underway periods. We found that, at the time of our 2015 review, there were no dedicated training periods built into the operational schedules of the cruisers, destroyers, and amphibious ships homeported in Yokosuka and Sasebo, Japan. As a result, these crews did not have all of their needed training and certifications. We recommended that the Navy develop and implement a sustainable operational schedule for all ships homeported overseas. DOD concurred with this recommendation and reported in 2015 that it had developed revised operational schedules for all ships homeported overseas. However, when we contacted DOD to obtain updated information in August 2017, U.S. Pacific Fleet officials stated that the revised operational schedules for the cruisers and destroyers homeported in Japan were still under review and had not been employed. 
As of June 2017, 37 percent of the warfare certifications for cruiser and destroyer crews homeported in Japan had expired, and over two-thirds of the expired certifications—including mobility-seamanship and air warfare—had been expired for 5 months or more. This represents more than a fivefold increase in the percentage of expired warfare certifications for these ships since our May 2015 report. The Navy’s Surface Force Readiness Manual states that the high operational tempo and frequent tasking of ships homeported overseas requires that these ships always be prepared to execute complex operations and notes that this demand for continuous readiness also means that ships homeported overseas should maintain maximum training, material condition, and manning readiness. With respect to the material condition of the ships, we found in May 2015 that casualty reports—incidents of degraded or out-of-service equipment—nearly doubled over the 2009 through 2014 time frame, and the condition of overseas-homeported ships decreased even faster than that of U.S.-based ships (see figure 4). The Navy uses casualty reports to provide information on the material condition of ships in order to determine current readiness. For example, casualty report data provide information on equipment or systems that are degraded or out of service, the lack of which will affect a ship’s ability to support required mission areas. In 2015, Navy officials acknowledged an increasing number of casualty reports on Navy ships and a worsening trend in material ship condition. They stated that equipment casualties require unscheduled maintenance and have a negative effect on fleet operations, because there is an associated capability or capacity loss. 
In our May 2015 report, we recommended that the Navy develop a comprehensive assessment of the long-term costs and risks to its fleet associated with the Navy’s increasing reliance on overseas homeporting to meet presence requirements; make any necessary adjustments to its overseas presence based on this assessment; and reassess these risks when making future overseas homeporting decisions. DOD concurred with this recommendation, but, as of August 2017, it has not conducted an assessment, even though it has continued to increase the number of ships homeported overseas. Size and Composition of Ship Crews May Contribute to Sailor Overwork and Create Readiness and Safety Risks In the early 2000s, the Navy made several changes to its process for determining the size and composition of ship crews that may contribute to sailor overwork and create readiness and safety risks. These changes were intended to drive down crew sizes in order to save on personnel costs. However, as we reported in May 2017, these changes were not substantiated with analysis and may be creating readiness and safety risks. With fewer sailors operating and maintaining surface ships, the material condition of the ships declined, and we found that this decline ultimately contributed to an increase in operating and support costs that outweighed any savings on personnel (see figure 5). The Navy eventually reassessed and reversed some of the changes it had made during this period—known as “optimal manning”—but it continued to use a workweek standard that does not reflect the actual time sailors spend working and does not account for in-port workload—both of which may be leading to sailors being overworked. Additionally, we found that heavy workload does not end after ships return to port. Crews typically operate with fewer sailors while in port, so those crew members remaining must cover the workload of multiple sailors, causing additional strain and potential overwork. 
In 2014, the Navy conducted a study of the standard workweek and identified significant issues that could negatively affect a crew’s capabilities to accomplish tasks and maintain the material readiness of ships, as well as crew safety issues that might result if crews slept less to accommodate workload that was not accounted for. The Navy study found that sailors were on duty 108 hours a week, exceeding their weekly on-duty allocation of 81 hours. This on-duty time included 90 hours of productive work—20 hours per week more than the 70 hours that are allotted in the standard workweek. This, in turn, reduced the time available for rest and resulted in sailors spending less time sleeping than was allotted, a situation that the study noted could encourage a poor safety culture. Moving forward, the Navy will likely face manning challenges, especially given its current difficulty in filling authorized positions, as it seeks to increase the size of its fleet by as much as 30 percent over its current size. Navy officials stated that even with manpower requirements that accurately capture all workload, the Navy will be challenged to fund these positions and fill them with adequately trained sailors at current personnel levels. Figure 6 shows the Navy’s projected end strength and fleet size. In our May 2017 report, we found that the Navy’s guidance does not require that the factors it uses to calculate manpower requirements be reassessed periodically or when conditions change, to ensure that these factors remain valid and that crews are appropriately sized. We made several recommendations to address this issue, including that the Navy should (1) reassess the standard workweek, (2) require examination of in-port workload, (3) develop criteria to reassess the factors used in its manpower requirements process, and (4) update its ship manpower requirements. 
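The workweek arithmetic from the 2014 Navy study described above can be laid out as a minimal sketch; the constants are the hours cited in the study, and the variable names are illustrative, not from the study itself:

```python
# Weekly hours cited from the 2014 Navy study of the standard workweek.
ALLOTTED_ON_DUTY = 81      # on-duty hours allocated by the standard workweek
ACTUAL_ON_DUTY = 108       # on-duty hours the study observed
ALLOTTED_PRODUCTIVE = 70   # productive-work hours allotted
ACTUAL_PRODUCTIVE = 90     # productive-work hours observed

# The gap comes out of time allotted for rest and sleep.
on_duty_overage = ACTUAL_ON_DUTY - ALLOTTED_ON_DUTY            # 27 hours/week
productive_overage = ACTUAL_PRODUCTIVE - ALLOTTED_PRODUCTIVE   # 20 hours/week

print(f"On-duty time exceeded allocation by {on_duty_overage} hours per week")
print(f"Productive work exceeded allotment by {productive_overage} hours per week")
```

The 27-hour weekly overage is what the study links to reduced sleep and a potentially poor safety culture.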
DOD concurred with our recommendations, stating that it is committed to ensuring that the Navy’s manpower requirements are current and analytically based and will meet the needs of the existing and future surface fleet. As of August 2017, DOD had not yet taken any actions to implement these recommendations. We believe that, until the Navy makes the needed changes, its ships may not have the right number and skill mix of sailors to maintain readiness and prevent overworking its sailors. The Navy’s Inability to Complete Ship Maintenance on Time Hampers Its Efforts to Rebuild Readiness To address its persistently low readiness levels, the Navy began implementing a revised operational schedule in November 2014, which it referred to as the optimized fleet response plan. This plan seeks to maximize the employability of the existing fleet while preserving adequate time for maintenance and training, providing continuity in ship leadership and carrier strike group assignments, and restoring operational and personnel tempos to acceptable levels. The Navy’s implementation of the optimized fleet response plan—and readiness recovery more broadly—is premised on adherence to deployment, training, and maintenance schedules. However, in May 2016, we found that the Navy was having difficulty in implementing its new schedule as intended. Both the public and private shipyards were having difficulty completing maintenance on time, owing primarily to the poor condition of the ships after more than a decade of heavy use, deferred maintenance, and the Navy’s inability to accurately predict how much maintenance they would need. We reported that in 2011 through 2014 only 28 percent of scheduled maintenance for surface combatants was completed on time and just 11 percent was completed on time for aircraft carriers. 
We updated these data as of August 2017 to include maintenance availabilities completed through the end of fiscal year 2016 and found continued difficulty completing maintenance on time for key portions of the Navy fleet (see figure 7): Aircraft Carriers (CVNs): In fiscal years 2011 through 2016, maintenance overruns on 18 of 21 (86 percent) aircraft carriers resulted in a total of 1,103 lost operational days—days that ships were not available for operations—the equivalent of losing the use of 0.5 aircraft carriers each year. Surface Combatants (DDGs and CGs): In fiscal years 2011 through 2016, maintenance overruns on 107 of 169 (63 percent) surface combatants resulted in a total of 6,603 lost operational days—the equivalent of losing the use of 3.0 surface combatants each year. Submarines (SSNs, SSBNs, and SSGNs): In fiscal years 2011 through 2016, maintenance overruns on 39 of 47 (83 percent) submarines resulted in a total of 6,220 lost operational days—the equivalent of losing the use of 2.8 submarines each year. Navy officials are aware of the challenges faced by both the public and private shipyards and have taken steps to address the risks these pose to maintenance schedules, including hiring additional shipyard workers and improving their maintenance planning processes. However, Navy officials have told us that it will take time for these changes to bring about a positive effect. For example, as of May 2016, data on the public shipyards’ workforce showed that 32 percent of all employees had fewer than 5 years of experience. According to Navy officials, this workforce inexperience negatively affects the productivity of the shipyards, and it will take several years for them to attain full productivity. Just last week, we issued another report, prepared in response to direction from this committee, examining the ability of the Navy’s public shipyards to support the Navy’s readiness needs. 
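The "equivalent ships lost each year" figures above follow from simple arithmetic: total lost operational days divided by the six-year window. A minimal sketch, assuming 365 operational days per ship-year (an assumption that reproduces the rounded figures in the text):

```python
# Convert total lost operational days (FY2011-FY2016) into the average
# number of ships effectively unavailable each year.
YEARS = 6            # fiscal years 2011 through 2016
DAYS_PER_YEAR = 365  # assumed operational days per ship-year

def equivalent_ships_per_year(lost_days: int) -> float:
    """Average number of ships effectively lost to maintenance overruns per year."""
    return lost_days / (YEARS * DAYS_PER_YEAR)

# Lost operational days reported for each portion of the fleet.
for ship_class, lost_days in [("Aircraft carriers", 1103),
                              ("Surface combatants", 6603),
                              ("Submarines", 6220)]:
    print(f"{ship_class}: {equivalent_ships_per_year(lost_days):.1f} ships per year")
# Yields 0.5, 3.0, and 2.8, matching the figures cited above.
```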
We found that capacity limitations as well as the poor condition of the shipyards’ facilities and equipment contributed to the maintenance delays we discussed earlier and were hindering the shipyards’ ability to support the Navy. Specifically, we found that the shipyards will be unable to support 73—or about one-third—of 218 maintenance periods planned over the next 23 years. In addition, this estimate did not factor in planned increases to the fleet. We made three recommendations aimed at improving the Navy’s management of capital investment in the shipyards, and the Navy agreed with them. However, we noted that at current average funding levels it would take at least 19 years and a Navy-estimated $4.86 billion to clear the backlog of restoration and modernization projects at the shipyards. Furthermore, this estimate does not include the $9 billion that the Navy estimates it will need for capacity and capability upgrades over the next 12 years to support maintenance operations for the current fleet. Navy Readiness Rebuilding is Part of a Broader DOD Effort In September 2016, we found that although DOD has stated that readiness rebuilding is a priority, implementation and oversight of department-wide readiness rebuilding efforts did not fully include key elements of sound planning, and the lack of these elements puts the overall rebuilding efforts at risk. The Navy states that its overall goal for readiness recovery is to reach a predictable and sustainable level of global presence and surge capacity from year to year. The Navy identified carrier strike groups and amphibious ready groups as key force elements in its plan for readiness recovery and had set 2020 for reaching a predictable and sustainable level of global presence and surge capacity by implementing the optimized fleet response plan. 
However, we found in 2016 that the Navy faced significant challenges, such as delays in completing maintenance and emerging demands, in achieving its readiness recovery goals for carrier strike groups and amphibious ready groups, and projections show that the Navy will not meet its time frames for achieving readiness recovery. As a result, we recommended that DOD and the services establish comprehensive readiness goals, strategies for implementing them, and associated metrics that can be used to evaluate whether readiness recovery efforts are achieving intended outcomes. DOD generally concurred with our recommendations and, in November 2016, issued limited guidance to the military services on rebuilding readiness; it has also started to design a framework to guide the military services in achieving readiness recovery but has not yet implemented our recommendations. The Navy has since extended its time frame for readiness recovery to at least 2021, but it still has not developed specific benchmarks or interim goals for tracking and reporting on readiness recovery. Navy officials cited several challenges to rebuilding readiness, chief among them the continued high demand for its forces, the unpredictability of funding, and the current difficulty with beginning and completing ship maintenance on time. In January 2017, the President directed the Secretary of Defense to conduct a readiness review and identify actions that can be implemented in fiscal year 2017 to improve readiness. DOD and Navy officials told us that, as part of this readiness review, the Navy prioritized immediate readiness gaps and shortfalls. These officials added that this review would guide the Navy’s investment decisions in future budget cycles, with the intention to rebuild readiness and prepare the force for future conflicts. However, high demand for naval presence will continue to put pressure on a fleet that is already stretched thin across the globe. 
Looking to the future, the Navy has plans to grow its fleet by as much as 30 percent, but it has not yet shown the ability to adequately man, maintain, and operate the current fleet. These readiness problems need to be addressed and will require the Navy to implement our recommendations—particularly in the areas of assessing the risks associated with overseas basing, reassessing sailor workload and the factors used to size ship crews, managing investments in its shipyards, and applying sound planning and sustained management attention to its readiness rebuilding efforts. In addition, continued congressional oversight will be needed to ensure that the Navy demonstrates progress in addressing its maintenance, training, and other challenges. Chairman McCain, Ranking Member Reed, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. GAO Contact and Staff Acknowledgements If you or your staff have questions about this testimony, please contact John Pendleton, Director, Defense Capabilities and Management at (202) 512-3489 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Suzanne Wren, Assistant Director; Steven Banovac, Chris Cronin, Kerri Eisenbach, Joanne Landesman, Amie Lesser, Felicia Lopez, Tobin McMurdie, Shari Nikoo, Cody Raysinger, Michael Silver, Grant Sutton, and Chris Watson. Appendix I: Implementation Status of Prior GAO Recommendations Cited in this Testimony Over the past three years, we have issued several reports related to Navy readiness that are cited in this statement. Table 1 summarizes the status of recommendations made in these reports, which contained a total of 14 recommendations. The Department of Defense generally concurred with all of these recommendations but has implemented only one of them to date. 
For each of the reports, the specific recommendations and their implementation status are summarized in tables 2 through 5. Appendix II: Summary of Major Mishaps for Navy Ships at Sea for Fiscal Years 2009 Through 2017, as of August 2017 The Navy defines a class A mishap as one that results in $2 million or more in damages to government or other property, or a mishap that resulted in a fatality or permanent total disability. We analyzed data compiled by the Naval Safety Center for fiscal years 2009 through 2017 to provide a summary of major Navy mishaps at sea (see table 6). Appendix III: Related GAO Products Report numbers with a C or RC suffix are Classified. Classified reports are available to personnel with the proper clearances and need to know, upon request. Naval Shipyards: Actions Needed to Improve Poor Conditions that Affect Operation. GAO-17-548. Washington, D.C.: September 12, 2017. Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-798T. Washington, D.C.: September 7, 2017. Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017. Military Readiness: Coastal Riverine Force Challenges. GAO-17-462C. Washington, D.C.: June 13, 2017. (SECRET) Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017. Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 7, 2016. Navy and Marine Corps: Services Face Challenges to Rebuilding Readiness. GAO-16-481RC. Washington, D.C.: May 25, 2016. (SECRET//NOFORN) Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016. 
Navy Force Structure: Sustainable Plan and Comprehensive Assessment Needed to Mitigate Long-Term Risks to Ships Assigned to Overseas Homeports. GAO-15-329. Washington, D.C.: May 29, 2015. Military Readiness: Navy Needs to Assess Risks to Its Strategy to Improve Ship Readiness. GAO-12-887. Washington, D.C.: September 21, 2012. Force Structure: Improved Cost Information and Analysis Needed to Guide Overseas Military Posture Decisions. GAO-12-711. Washington, D.C.: June 6, 2012. Military Readiness: Navy Needs to Reassess Its Metrics and Assumptions for Ship Crewing Requirements and Training. GAO-10-592. Washington, D.C.: June 9, 2010. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Since January 2017, the Navy has suffered four significant mishaps at sea that resulted in serious damage to its ships and the loss of 17 sailors. Three of these incidents involved ships homeported in Japan. In response to these incidents, the Chief of Naval Operations ordered an operational pause for all fleets worldwide, and the Vice Chief of Naval Operations directed a comprehensive review of surface fleet operations, stating that these tragic incidents are not limited occurrences but part of a disturbing trend in mishaps involving U.S. ships. This statement provides information on the effects of homeporting ships overseas, reducing crew size on ships, and not completing maintenance on time on the readiness of the Navy and summarizes GAO recommendations to address the Navy's maintenance, training, and other challenges. In preparing this statement, GAO relied on work it has published since 2015 related to the readiness of ships homeported overseas, sailor training and workload issues, maintenance challenges, and other issues. GAO updated this information, as appropriate, based on Navy data. What GAO Found GAO's prior work shows that the Navy has increased deployment lengths, shortened training periods, and reduced or deferred maintenance to meet high operational demands, which has resulted in declining ship conditions and a worsening trend in overall readiness. The Navy has stated that high demand for presence has put pressure on a fleet that is stretched thin across the globe. Some of the concerns that GAO has highlighted include: Degraded readiness of ships homeported overseas: Since 2006, the Navy has doubled the number of ships based overseas. Overseas basing provides additional forward presence and rapid crisis response, but GAO found in May 2015 that there were no dedicated training periods built into the operational schedules of the cruisers and destroyers based in Japan. 
As a result, the crews of these ships did not have all of their needed training and certifications. Based on updated data, GAO found that, as of June 2017, 37 percent of the warfare certifications for cruiser and destroyer crews based in Japan—including certifications for seamanship—had expired. This represents more than a fivefold increase in the percentage of expired warfare certifications for these ships since GAO's May 2015 report. The Navy has made plans to revise operational schedules to provide dedicated training time for overseas-based ships, but this schedule has not yet been implemented. Crew size reductions contribute to sailor overwork and safety risks: GAO found in May 2017 that reductions to crew sizes the Navy made in the early 2000s were not analytically supported and may now be creating safety risks. The Navy has reversed some of those changes but continues to use a workweek standard that does not reflect the actual time sailors spend working and does not account for in-port workload—both of which have contributed to some sailors working over 100 hours a week. Inability to complete maintenance on time: Navy recovery from persistently low readiness levels is premised on adherence to maintenance schedules. However, in May 2016, GAO found that the Navy was having difficulty completing maintenance on time. Based on updated data, GAO found that, in fiscal years 2011 through 2016, maintenance overruns on 107 of 169 surface ships (63 percent) resulted in 6,603 lost operational days (i.e., the ships were not available for training and operations). Looking to the future, the Navy wants to grow its fleet by as much as 30 percent but continues to face challenges with manning, training, and maintaining its existing fleet. 
These readiness problems need to be addressed and will require the Navy to implement GAO's recommendations—particularly in the areas of assessing the risks associated with overseas basing, reassessing sailor workload and the factors used to size ship crews, managing investments to modernize and improve the efficiency of the naval shipyards, and applying sound planning and sustained management attention to its readiness rebuilding efforts. In addition, continued congressional oversight will be needed to ensure that the Navy demonstrates progress in addressing its maintenance, training, and other challenges. What GAO Recommends GAO made 14 recommendations in prior work cited in this statement. The Department of Defense generally concurred with all of them but has implemented only 1. Continued attention is needed to ensure that these recommendations are addressed, such as the Navy assessing the risks associated with overseas basing and reassessing sailor workload and factors used in its manpower requirements process.
CBP Seizes a Variety of Inbound Items That May Pose Threats to U.S. Safety and Security In our report, we found that, according to data from CBP’s Seized Asset and Case Tracking System (SEACATS), during fiscal years 2012 through 2016 CBP conducted about 308,000 seizures of inbound international items that may pose a threat to U.S. security, health and safety, business, and ecology. Of those, CBP seized about 70 percent from mail and 30 percent from express cargo. Seized items are categorized in SEACATS as either drugs or merchandise. Among the approximately 308,000 seizures, illegal or inadmissible drugs accounted for about 47 percent of total seizures and merchandise accounted for about 53 percent. According to testimony by a U.S. Immigration and Customs Enforcement official, a recent increase in deaths related to the synthetic opioid fentanyl has resulted in an increased focus on identifying methods by which traffickers bring fentanyl into the United States. In fiscal years 2012 through 2015, CBP’s seizure data reflect zero seizures of fentanyl, but according to CBP, fentanyl seizures would have been captured under other categories in SEACATS. According to CBP, a specific category code for fentanyl was added to SEACATS in fiscal year 2016. SEACATS reflects 53 seizures of fentanyl in fiscal year 2016 via both mail and express cargo. USPS’s and CBP’s Pilot Programs Lack Performance Targets As mail and express cargo arrive in the United States, both USPS and express consignment operators provide items to CBP for inspection. Express consignment operators accept items for delivery to the United States at points of sale in foreign countries and provide EAD to CBP prior to the items’ scheduled arrival in the United States. CBP then analyzes the EAD and provides lists of targeted items to express consignment operators. 
However, unlike express consignment operators, USPS is not currently required to provide CBP with EAD for inbound international mail and does not have control over mail prior to its arrival in the United States. Thus, USPS relies on foreign postal operators to collect and provide EAD voluntarily or by mutual agreement. According to USPS data, USPS received EAD for about one-third of all inbound international mail (excluding letters, flats, and military/diplomatic mail) for the period from April 2016 through March 2017. For the month of March 2017 (the most recent data available at the time of our review), USPS data indicate that EAD was available for roughly half of all inbound international mail (excluding letters, flats, and military/diplomatic mail). In 2014 and 2015, USPS and CBP initiated two pilot programs at the New York International Service Center (ISC) to target certain mail for inspection using some of the EAD obtained under data-sharing agreements with foreign postal operators. At the time of our review, CBP did not use EAD to target mail for inspection outside of these pilots. According to USPS documents, the goal of these pilots is to test the effectiveness of placing holds on mail that has been targeted by CBP based on EAD. Under the pilots, CBP uses EAD to target a small number of pieces of mail each day. According to USPS officials, when USPS employees scan either individual targeted pieces or larger sacks containing this targeted mail, they are alerted that CBP has targeted the item and set the item or sack aside for inspection. Since the pilots began, USPS has made efforts to locate and provide CBP with the targeted mail and CBP has collected performance data on the percentage of targeted mail USPS has provided for inspection: about 82 percent for one pilot, and about 58 percent for the other. 
In our report we note that, according to USPS and CBP, USPS has been unable to provide some targeted mail for inspection because locating targeted mail once it arrives at an ISC has been a challenge. Specifically, USPS ISCs may receive thousands of large sacks of mail per day that are scanned as they are accepted. Each sack may contain hundreds of pieces of mail that are not individually scanned upon arrival. As a result, locating a targeted item requires manually sorting through the entire sack, and USPS employees may overlook the item while sorting through the larger sack to locate targeted mail. According to USPS officials, at the time of our review they were testing an automated method to identify targeted mail within these larger sacks. Standards for internal control in the federal government state that defining program goals in specific and measurable terms allows for the assessment of performance toward achieving objectives. However, while USPS and CBP have collected some performance information for these pilots (including the percentage of targeted mail provided for inspection), this information is not linked to a specific performance target agreed upon by USPS and CBP—such as a specific percentage of targeted mail provided to CBP for inspection. Further, the agencies have not conducted an analysis to determine if the pilot programs are achieving desired outcomes. In our report, we concluded that, because CBP and USPS lack clear performance goals for these pilots, they risk spending additional time and resources expanding them prior to fully assessing the pilots’ success or failure. As such, we recommended that CBP, in conjunction with USPS, (1) establish measurable performance goals for pilot programs and (2) assess the performance of the pilots in achieving these goals. The Department of Homeland Security concurred with this recommendation and plans to implement it by February 28, 2018.
USPS and CBP Have Not Evaluated Relative Costs and Benefits of Increased Use of Electronic Advance Data

In our report we found that the costs and benefits of using EAD to target mail for inspection are unclear. For example, according to USPS and CBP officials, increasing the use of EAD to target mail for inspection may have benefits, such as reducing the volume of inspected mail and increasing the percentage of inspections that result in identification of a threatening or illegal item. This potential outcome could decrease time and resources needed for the screening process—potentially decreasing costs—and may increase the security of inbound mail. However, the costs of collecting and implementing the use of EAD are not yet known, and neither USPS nor CBP currently collects the data necessary to know whether using EAD might increase the security of inbound mail or decrease the time and costs associated with screening. Specifically, regarding the costs of collecting EAD, USPS has not calculated the current costs of collecting EAD from countries with which it has data-sharing agreements, but officials stated that USPS does not incur significant additional costs for each new designated postal operator or type of mail for which it begins collecting EAD. While some of the costs of obtaining EAD may be borne by designated postal operators in other countries, rather than directly by USPS, costs to USPS to use EAD to target mail for inspection may include: equipment and personnel required to identify targeted mail (such as equipment required to sort through hundreds of pieces of mail to identify a single piece of mail), and software upgrades required to exchange data with foreign postal operators and with CBP. In our report we found that an analysis of the costs associated with planned efforts is particularly critical given USPS’s financial challenges. As we recently found, USPS reported a net loss of $5.6 billion in fiscal year 2016—its 10th consecutive year of net losses.
In light of this situation, any expenditure of financial resources to make any additional infrastructure and information technology upgrades necessary to implement the use of EAD for targeting merits careful consideration. Beyond costs, in our report we also determined that USPS and CBP have not performed an analysis of the benefits of using EAD to target mail for inspection, including the effectiveness of targeted inspection based on EAD relative to other methods of selecting mail for inspection. Thus, the extent to which targeting based on EAD might result in an increased ability to identify threats or other benefits over current methods is unknown. For example, CBP has collected data on the percentage of inspections resulting in a seizure for mail inspected as a result of targeting in the pilot programs at the New York ISC. However, CBP does not collect comparable data for seizures resulting from inspections conducted based on current methods of choosing mail for inspection. Moreover, USPS and CBP experience challenges related to inspecting mail that may limit their ability to effectively use EAD to target mail for screening and, thus, to experience EAD’s possible benefits. For example, USPS depends on foreign postal operators to make EAD available. According to USPS and State Department officials, however, those operators may not share the same security priorities as USPS and CBP and may not make EAD available. If the amount of available EAD remains limited for inbound mail, this may reduce the effectiveness of CBP’s targeting efforts or could constrain CBP’s ability to reduce the volume of mail it inspects. Our prior work has found that in designing preventive measures—such as the screening of inbound mail to identify potential threats—it is helpful to conduct a thorough assessment of vulnerabilities as well as cost-benefit analyses of alternative strategies.
In the absence of information on the relative costs of various methods of selecting mail for inspection as well as their effectiveness at identifying potential threats in inbound mail, USPS and CBP are unable to fully understand whether obtaining additional EAD for targeting purposes will provide security or resource benefits. In our report, we therefore concluded that, particularly in light of the challenges that collecting and using these data present, it is important that CBP and USPS carefully consider actions to enhance inbound international mail security to avoid wasting time and money on potentially ineffective and costly endeavors. As such, we recommended that CBP, in conjunction with USPS, evaluate the relative costs and benefits of collecting EAD for targeting mail for inspection in comparison to other methods. The Department of Homeland Security concurred with this recommendation and plans to implement it by February 28, 2018. In conclusion, existing pilots could be used as an opportunity for CBP and USPS to: (1) articulate performance goals for the pilots, (2) collect data and assess the pilots on their success in enabling USPS to provide targeted mail to CBP for inspection, and (3) assess the costs and benefits of various methods of choosing mail for inspection. We are encouraged that USPS and the Department of Homeland Security agreed with our findings and recommendations. Effective implementation of our recommendations should help CBP and USPS ensure that efforts to collect and use EAD to target mail for inspection achieve the desired security and resource benefits. Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. 
GAO Contact and Staff Acknowledgements

If you or your staff have any questions about this testimony, please contact Lori Rectanus, Director, Physical Infrastructure Issues at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Derrick Collins and Katie Hamer. Other staff who made contributions to the report cited in this testimony are identified in the source product.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes information contained in GAO's August 2017 report, entitled International Mail Security: Costs and Benefits of Using Electronic Data to Screen Mail Need to Be Assessed (GAO-17-606).

What GAO Found

U.S. Customs and Border Protection (CBP) is the primary federal agency tasked with targeting and inspecting inbound international items and seizing illegal goods, including illegal or inadmissible drugs and merchandise. As mail and express cargo arrive in the United States, both the U.S. Postal Service (USPS) and express consignment operators (such as FedEx and DHL) provide items to CBP for inspection. However, unlike express consignment operators, USPS is not currently required to provide CBP with electronic advance data (EAD), such as the shipper's and recipient's name and address, for inbound international mail and does not have control over mail prior to its arrival in the United States. Thus, USPS relies on foreign postal operators to collect and provide EAD voluntarily or by mutual agreement. In 2014 and 2015, USPS and CBP initiated two pilot programs at the New York International Service Center (ISC) to target certain mail for inspection using some of the EAD obtained under data-sharing agreements with foreign postal operators. Under the pilots, CBP uses EAD to target a small number of pieces of mail each day. According to USPS officials, when USPS employees scan either individual targeted pieces or larger sacks containing this targeted mail, they are alerted that CBP has targeted the item and set the item or sack aside for inspection. According to USPS and CBP, USPS has been unable to provide some targeted mail for inspection because locating targeted mail once it arrives at an ISC has been a challenge. Since the pilots began, USPS has provided CBP with about 82 percent of targeted mail for one pilot, and about 58 percent of targeted mail for the other.
However, while USPS and CBP have collected some performance information for these pilots (including the percentage of targeted mail provided for inspection), this information is not linked to a specific performance target agreed upon by USPS and CBP—such as a specific percentage of targeted mail provided to CBP for inspection. Further, the agencies have not conducted an analysis to determine if the pilot programs are achieving desired outcomes. Because CBP and USPS lack clear performance goals for these pilots, they risk spending additional time and resources expanding them prior to fully assessing the pilots' success or failure. In our report we found that the costs and benefits of using EAD to target mail for inspection are unclear. For example, according to USPS and CBP officials, increasing the use of EAD to target mail for inspection may have benefits, such as reducing time and resources needed for the screening process—potentially decreasing costs—and may increase the security of inbound mail. However, the costs of collecting and implementing the use of EAD are not yet known, and neither USPS nor CBP currently collects the data necessary to know whether using EAD might increase the security of inbound mail or decrease the time and costs associated with screening. For example, CBP has collected data on the percentage of inspections resulting in a seizure for mail inspected as a result of targeting in the pilot programs at the New York ISC. However, CBP does not collect comparable data for seizures resulting from inspections conducted based on current methods of choosing mail for inspection. In light of the challenges that collecting and using these data present, it is important that CBP and USPS carefully consider actions to enhance inbound international mail security to avoid wasting time and money on potentially ineffective and costly endeavors.
What GAO Recommends

In our report, we recommended that CBP, in coordination with USPS: (1) establish measurable performance goals to assess pilot programs and (2) evaluate the costs and benefits of using EAD to target mail for inspection compared with other targeting methods. CBP and USPS agreed with these recommendations and CBP plans to implement them by February 28, 2018.
Misalignment between NNSA’s Modernization Budget Estimates and Plans Raises Affordability Concerns

In April 2017, we issued our latest report on NNSA’s 25-year plans to modernize the nation’s nuclear weapons stockpile and its supporting infrastructure. In that report, we identified two areas of misalignment between NNSA’s modernization plans and the estimated budgetary resources needed to carry out those plans, which could make it difficult for NNSA to afford its planned portfolio of modernization programs. First, we found that NNSA’s estimates of funding needed for its modernization plans sometimes exceeded the budgetary projections included in the President’s planned near- and long-term modernization budgets. In the near term (fiscal years 2018 through 2021), we found that NNSA may have to defer certain modernization work beyond that time period to execute its program within the planned budget, which could increase modernization costs and schedule risks. This is a pattern we have previously identified as a “bow wave”—an increase in future years’ estimated budget needs that occurs when agencies are undertaking more programs than their resources can support. In the long term (fiscal years 2022 through 2026), we found that NNSA’s modernization program budget estimates sometimes exceeded the projected budgetary resources planned for inclusion in the President’s budget, raising additional questions about whether NNSA will be able to afford the scope of its modernization program. Second, the costs of some major modernization programs—such as for nuclear weapon refurbishments—may also increase and further strain future modernization budgets. We are currently reviewing NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan.
Misalignment between Estimates and Plans May Result in Increased Cost and Schedule Risks and Raises Affordability Concerns

As we reported in April 2017, NNSA estimates of funding needed for its modernization plans sometimes exceeded the budgetary projections included in the President’s planned near- and long-term modernization budgets.

Near-term Misalignment between Modernization Plans and Estimated Budgetary Resources

We found that NNSA may have to defer certain modernization work planned for fiscal years 2018 through 2021 beyond its current 5-year planning period, called the Future-Years Nuclear Security Program (FYNSP). As we reported in April 2017, this is caused by a misalignment between NNSA’s budget estimates for certain nuclear modernization programs and the President’s budgets for that period. We concluded that this deferral could exacerbate a significant bow wave of modernization funding needs that NNSA projects for the out-years beyond the FYNSP and could potentially increase modernization costs and schedule risks. As we have previously reported, such bow waves occur when agencies defer costs of their programs to the future, beyond their programming periods, and they often occur when agencies are undertaking more programs than their resources can support. As NNSA’s fiscal year 2017 budget materials show, its modernization budget estimates for fiscal years 2022 through 2026—the first 5 years beyond the FYNSP—may require significant funding increases. For example, in fiscal year 2022, NNSA’s estimates of its modernization budget needs are projected to rise about 7 percent compared with the budget estimates for fiscal year 2021, the last year of the FYNSP, as shown in figure 1. The analysis in our April 2017 report showed that NNSA has shifted this modernization bow wave to the period beyond the FYNSP time frame in each of the past four versions of the annual Stockpile Stewardship and Management Plan.
For example, in the Fiscal Year 2014 Stockpile Stewardship and Management Plan, NNSA’s budget estimates for its modernization programs increased from a total of about $9.3 billion in fiscal year 2018, the last year of the FYNSP, to about $10.5 billion in fiscal year 2019, the first year after the FYNSP—an increase of about 13 percent. Similar patterns showing a jump in funding needs immediately after the last year of the FYNSP are repeated in the funding profiles contained in the fiscal year 2015, 2016, and 2017 plans. As we have previously reported, deferring more work to future years can increase cost and schedule risks and can put programs in the position of potentially facing a backlog of deferred work that grows beyond what can be accommodated in future years.

Long-term Misalignment between Modernization Plans and Estimated Budgetary Resources

The Fiscal Year 2017 Stockpile Stewardship and Management Plan shows that NNSA’s overall modernization budget estimates for fiscal years 2022 through 2026—the out-years beyond the FYNSP—may exceed the projected funding levels in the President’s budgets for that period, raising further questions about the affordability of NNSA’s nuclear modernization plans. According to NNSA’s data, the agency’s estimated budget needed to support modernization totals about $58.4 billion for fiscal years 2022 through 2026, and the out-year funding projections contained in the President’s fiscal year 2017 budget for the same period total about $55.5 billion. The President’s out-year funding projections, therefore, are approximately $2.9 billion, or about 5.2 percent, less than NNSA estimates it will need over the same period. Despite this potential shortfall, NNSA’s Fiscal Year 2017 Stockpile Stewardship and Management Plan concludes that the modernization program is generally affordable in the years beyond the FYNSP for two reasons.
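For readers who wish to reproduce the shortfall arithmetic, a minimal illustrative sketch (not part of the GAO analysis; it uses only the dollar figures cited in this testimony):

```python
# Recompute the FY2022-2026 modernization shortfall from the figures
# cited in the testimony (amounts in billions of dollars).
nnsa_estimate = 58.4       # NNSA's estimated budget need, FY2022-2026
president_budget = 55.5    # President's out-year funding projection, same period

shortfall = nnsa_estimate - president_budget
pct_short = shortfall / president_budget * 100  # relative to projected funding

print(f"Shortfall: ${shortfall:.1f} billion ({pct_short:.1f} percent)")
# Shortfall: $2.9 billion (5.2 percent)
```

This matches the approximately $2.9 billion, or about 5.2 percent, gap reported above.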
First, the President’s out-year funding projections are sufficient to support NNSA’s low-range cost estimates for its modernization programs for fiscal years 2022 through 2026. Based on NNSA data, the low-range cost estimates for fiscal years 2022 through 2026 total approximately $54.4 billion and the President’s out-year funding projections total about $55.5 billion. Figure 2 illustrates data from the 2017 plan showing NNSA’s budget estimates in nominal dollars, including high- and low-range cost estimates for its modernization program, along with the out-year funding projections from the President’s fiscal year 2017 budget, for fiscal years 2022 to 2026. Second, NNSA concludes that its modernization programs are generally affordable beyond the FYNSP because the agency’s estimated modernization budget needs will begin to decrease in fiscal year 2027. In our April 2017 report, we noted that NNSA’s conclusion—that its modernization program is affordable because the President’s out-year funding projections fall within NNSA’s modernization cost ranges—is overly optimistic. This is because the conclusion is predicated on optimistic assumptions regarding the cost of the modernization program beyond the FYNSP, particularly for fiscal years 2022 through 2026. For the program to be affordable, NNSA’s modernization programs would need to be collectively executed at the low end of their estimated cost ranges. The plan does not discuss any options NNSA would pursue to support or modify its modernization program if costs exceeded its low-range cost estimates. In addition, the Fiscal Year 2017 Stockpile Stewardship and Management Plan states that the nominal cost of NNSA’s modernization program is expected to decrease by approximately $1 billion in fiscal year 2027. In that year, according to the 2017 plan, it is anticipated that NNSA’s estimated budgets for its modernization program will begin to fall in line with projections of future presidential budgets.
However, as we noted in our April 2017 report, the decrease that NNSA anticipates in its modernization funding needs beginning in fiscal year 2027 may not be achievable if the projected mismatch between NNSA’s estimates of its modernization budget needs and the projections of the President’s modernization budget for fiscal years 2022 through 2026 is not resolved. This mismatch creates concerns that NNSA will not be able to afford planned modernization costs during fiscal years 2022 through 2026 and will be forced to defer them to fiscal year 2027 and beyond, continuing the bow wave patterns discussed above.

Potential Rising Costs of Some Modernization Programs May Further Strain NNSA’s Modernization Budgets

Our April 2017 report identified misalignment between NNSA’s estimate of its budget needs and NNSA’s internal cost range estimates for several of its major modernization programs. Further, we found that the costs of some major life extension programs (LEP) may increase in the future, which may further strain NNSA’s planned modernization budgets. With respect to the alignment of NNSA’s estimate of its budget needs and NNSA’s internal cost range estimates, in April 2017 we found that NNSA’s budget estimates were generally consistent with NNSA’s high- and low-range cost estimates. However, for some years, NNSA’s low-range cost estimates exceeded the budget estimates for some of the programs, suggesting the potential for a funding shortfall for those programs in those years. Specifically, we found that the low-range cost estimates for the W88 Alteration 370 program and all LEPs discussed in our April 2017 report exceeded their budget estimates for some fiscal years within the 10-year time period from fiscal year 2017 to 2026. As we reported in 2013 and 2016, this misalignment indicates that NNSA’s estimated budgets may not be sufficient to fully execute program plans and that NNSA may need to increase funding for these programs in the future.
Additionally, in April 2017 we found that the costs of two ongoing nuclear weapon LEPs and the W88 Alteration 370 program may increase in the future, based on NNSA information that was produced after the release of the fiscal year 2017 budget materials. These potential cost increases could further challenge the extent to which NNSA’s budget estimates support the scope of modernization efforts. The LEPs facing potential cost increases include:

- B61-12 LEP. An independent cost estimate for the program completed in October 2016 exceeded the program’s self-conducted cost estimate from June 2016 by $2.6 billion.
- W80-4 LEP. Officials from NNSA’s Office of Cost Policy and Analysis told us that this program may be underfunded by at least $1 billion to meet the program’s existing schedule.
- W88 Alteration 370. According to officials from NNSA’s Office of Cost Policy and Analysis, this program’s expanded scope of work may result in about $1 billion in additional costs.

To help NNSA put forth more credible modernization plans, we recommended in our April 2017 report that the NNSA Administrator include an assessment of the affordability of NNSA’s portfolio of modernization programs in future versions of the Stockpile Stewardship and Management Plan, such as by presenting options (e.g., potentially deferring the start of or canceling specific modernization programs) that NNSA could consider taking to bring its estimates of modernization funding needs into alignment with potential future budgets. In commenting on our report, NNSA neither agreed nor disagreed with our recommendation.

DOE Annually Spends Billions on Cleanup, but the Cost of Its Environmental Liabilities Continues to Increase

DOE also faces challenges with addressing its environmental liabilities and its cleanup mission. In February 2017, we added the federal government’s environmental liabilities to our High-Risk List.
Specifically, we found that the federal government’s environmental liability has been growing for the past 20 years—and is likely to continue to increase—and that DOE is responsible for over 80 percent ($372 billion) of the nearly $450 billion reported environmental liability. Notably, this estimate does not reflect all of the future cleanup responsibilities that DOE may face. In addition, DOE has not consistently taken a risk-informed approach to decision-making for environmental cleanup, and DOE may therefore be missing opportunities to reduce costs while also reducing environmental risks more quickly. Our recent work in this area has also identified opportunities where DOE may be able to save tens of billions of dollars. As we have previously reported, DOE’s total reported environmental liability has generally increased over time. Since 1989, EM has spent over $164 billion to retrieve, treat, and dispose of nuclear and hazardous waste and, as of 2017, it had completed cleanup at 91 of 107 sites across the country (the 91 sites were generally viewed by DOE as the smallest and least contaminated sites to address). Despite billions spent on environmental cleanup, DOE’s environmental liability has roughly doubled from $176 billion in fiscal year 1997 to the fiscal year 2016 estimate of $372 billion. Between 2011 and 2016, EM spent $35 billion, primarily to treat and dispose of nuclear and hazardous waste and construct capital asset projects to treat the waste (see fig. 3 for EM’s annual spending and growing environmental liability). According to documents related to DOE’s fiscal year 2016 financial statements, half of DOE’s environmental liability resides at two cleanup sites: the Hanford Site in Washington State and the Savannah River Site in South Carolina. 
In its fiscal year 2016 financial statement, DOE attributed recent environmental liability increases to (1) inflation adjustments for the current year; (2) improved and updated estimates for the same scope of work, including changes resulting from deferral or acceleration of work; (3) revisions in technical approach or scope for cleanup activities; and (4) regulatory and legal changes. Notably, in recent annual financial reports, DOE has cited other significant causes for increases in its liability. Other causes have included the lack of a disposal path for high-level radioactive waste—because of the termination of the Yucca Mountain repository program—and delays and scope changes for major construction projects at the Hanford and Savannah River sites.

We also reported in February 2017 that DOE’s estimated liability does not include billions in expected costs. According to federal accounting standards, environmental liability estimates should include costs that are probable and reasonably estimable, meaning that costs that cannot yet be reasonably estimated should not be included in total environmental liability. Examples of costs that DOE cannot yet estimate include the following:

- DOE has not yet developed a cleanup plan or cost estimate for the Nevada National Security Site and, as a result, the cost of future cleanup of this site was not included in DOE’s fiscal year 2015 reported environmental liability. The nearly 1,400-square-mile site has been used for hundreds of nuclear weapons tests since 1951. These activities have resulted in more than 45 million cubic feet of radioactive waste at the site. According to DOE’s financial statement, since DOE is not yet required to establish a plan to clean up the site, the costs for this work are excluded from DOE’s annually reported environmental liability.
- DOE’s reported environmental liability includes an estimate for the cost of a permanent nuclear waste repository, but these estimates are highly uncertain and likely to increase. In March 2015, in response to the termination of the Yucca Mountain repository program, DOE proposed separate repositories for defense high-level and commercial waste. In January 2017, we reported that the cost estimate for DOE’s new approach excluded the costs and time frames for site selection and site characterization. As a result, the full cost of these activities is likely billions of dollars more than what is reflected in DOE’s environmental liability.

In our annual report on Fragmentation, Overlap, and Duplication in the federal government that we issued in May 2017, we reported that DOE may be able to save billions of dollars by reassessing the rationale for its March 2015 proposal. In June 2017, a bill that could result in renewed efforts to open the Yucca Mountain repository was introduced in the House of Representatives. In addition, according to the DOE Inspector General, DOE may have insufficient controls in place to accurately account for its environmental liabilities. In November 2016, the DOE Inspector General reported a significant deficiency in internal controls related to the reconciliation of environmental liabilities.

Moreover, DOE does not consistently take a risk-informed decision-making approach to its environmental cleanup mission to more efficiently use resources. As our reports and those by other organizations issued over the last 2 decades have found, DOE’s environmental cleanup decisions have not been risk-based, and there have been inconsistencies in the regulatory approaches followed at different sites. We and others have pointed out that DOE needs to take a nation-wide, risk-based approach to cleaning up these sites, which could reduce costs while also reducing environmental risks more quickly.
In 2006, the National Research Council reported that the nation’s approach to cleaning up nuclear waste—primarily carried out by DOE—was complex, inconsistent, and not systematically risk-based. For example, the National Research Council noted that the current regulatory structure for low-activity waste is based primarily on the waste’s origins rather than on its actual radiological risks. The National Research Council concluded that by working with regulators, public authorities, and local citizens to implement risk-informed practices, waste cleanup efforts can be done more cost-effectively. The report also suggested that statutory changes were likely needed.

In 2015, a review organized by the Consortium for Risk Evaluation with Stakeholder Participation reported that DOE was not optimally using available resources to reduce risk. According to the report, factors such as inconsistent regulatory approaches and certain requirements in federal facility agreements caused disproportionate resources to be directed at lower-priority risks. The report called for a more systematic effort to assess and rank risks within and among sites, including through headquarters guidance to sites, and to allocate federal taxpayer monies to remedy the highest priority risks through the most efficient means.

In May 2017, we reported on DOE’s efforts to treat a significant portion of the waste in underground tanks at the Hanford Site. We found that DOE chose different approaches to treat the less radioactive portion of its tank waste—which DOE refers to as “low-activity waste” (LAW)—at the Hanford and Savannah River Sites. At the Savannah River Site, DOE has grouted about 4 million gallons of LAW since 2007. DOE plans to treat a portion of the Hanford Site’s LAW with vitrification, but it has not yet treated any of Hanford’s LAW and faces significant unresolved technical challenges in doing so.
In addition, we found that the best available information indicates that DOE’s estimated costs to grout LAW at the Savannah River Site are substantially lower than its estimated costs to vitrify LAW at Hanford, and DOE may be able to save tens of billions of dollars by reconsidering its waste treatment approach for a portion of the LAW at Hanford. Moreover, according to experts that attended a meeting we convened with the National Academies of Sciences, Engineering, and Medicine, both vitrification and grout could effectively treat Hanford’s LAW. Experts at our meeting also stated that developing updated information on the effectiveness of treating a portion of Hanford’s waste, called supplemental LAW, with other methods, such as grout, may enable DOE to consider waste treatment approaches that would accelerate DOE’s tank waste treatment mission, thereby potentially reducing certain risks and lifecycle treatment costs. We recommended that DOE (1) develop updated information on the performance of treating supplemental LAW with alternate methods, such as grout, before it selects an approach for treating supplemental LAW; and (2) have an independent entity develop updated information on the lifecycle costs of treating Hanford’s supplemental LAW with alternate methods. DOE agreed with both recommendations. Since 1994, we have made at least 28 recommendations related to addressing the federal government’s environmental liability to DOE and others and 4 suggestions to Congress to consider changes to the laws governing cleanup activities. Of these, 13 recommendations remain unimplemented. If implemented, these steps would improve the completeness and reliability of the estimated costs of future federal cleanup responsibilities and lead to more risk-based management of the cleanup work. We believe these recommendations are as relevant, if not more so, today. 
DOE Has Taken Steps to Improve Management of Contracts, Projects, and Programs, but Challenges Remain The Secretary of Energy has taken several important steps that demonstrate DOE’s commitment to improving management of contracts and projects. However, our recent work indicates that, even with these efforts, NNSA and EM continue to face long-standing challenges in several areas. DOE Has Made Progress in Managing Contracts and Projects As we noted in our 2017 high-risk report, DOE has made progress in its contract and project management. DOE continued to meet the criterion for demonstrating a strong commitment and top leadership support for improving project management. The Secretary of Energy issued two memorandums, in December 2014 and June 2015, that lay out a series of changes to policies and procedures to improve project management. These changes were included in DOE’s revised project management order, DOE Order 413.3B, issued in May 2016. As noted in the memorandums, some of these changes are in response to recommendations we made in prior years, such as requiring that projects develop cost estimates and analyses of alternatives according to our best practices. DOE also made significant efforts to improve its performance in monitoring and independently validating the effectiveness and sustainability of corrective measures and now partially meets our monitoring criterion for removing agencies and program areas from our High-Risk List. For example, the Secretary improved the department’s senior-level monitoring capability. The Secretary strengthened the Energy Systems Acquisition Advisory Board by changing it from an ad hoc body to an institutionalized board responsible for reviewing all capital asset projects with a total project cost of $100 million or more. The Secretary also created the Project Management Risk Committee, which includes senior DOE officials and is chaired by a new departmental position—the Chief Risk Officer. 
The committee is chartered to assess the risks of projects across DOE and advise DOE senior leaders on cost, schedule, and technical issues for projects. Challenges Persist in Several Areas DOE’s recent efforts do not address several areas where it continues to have challenges, including (1) acquisition planning for its major contracts, (2) the quality of enterprise-wide cost information available to DOE managers and key stakeholders, (3) program and project management, and (4) major legacy projects. Acquisition Planning for Major Contracts As we have previously reported, during the acquisition-planning phase for contracts, DOE makes critical decisions that have significant implications for the cost and overall success of an acquisition. The size and duration of DOE’s management and operating (M&O) contracts—22 M&O contracts with an average potential duration of 17 years, representing almost three-quarters of DOE’s spending in fiscal year 2015—underscore the importance of planning for every M&O acquisition. In August 2016, we examined DOE’s use of M&O contracts. According to DOE officials we interviewed at that time, one of the primary reasons DOE uses M&O contracts is that they are easier to manage with fewer DOE personnel, since they are less frequently competed and have broadly written scopes of work, among other attributes. We found that DOE did not consider acquisition alternatives beyond continuing its long-standing M&O contract approach for 16 of its 22 M&O contracts. We concluded that without considering broader alternatives in the acquisition planning phase, DOE cannot ensure that it is selecting the most effective scope and form of contract, raising risks for both contract cost and performance. We recommended in our August 2016 report that DOE establish a process to analyze and apply its experience with contracting alternatives. 
DOE generally concurred with our recommendation, and, in November 2016, issued updated guidance requiring acquisition planning documents to contain a thorough discussion of alternatives beyond simply extending or competing M&O contracts. Quality of Enterprise-Wide Cost Information We have previously reported that the effectiveness of DOE’s monitoring of its contracts, projects, and programs depends upon the availability of reliable enterprise-wide cost information on which to base oversight activities. For example, reliable enterprise-wide cost information is needed to identify the cost of activities, ensure the validity of cost estimates, and provide information to Congress to make budgetary decisions. However, we have found that meaningful cost analyses across programs, contractors, and sites are not usually possible because NNSA’s contractors use different methods of accounting for and tracking costs. NNSA developed a plan to improve and integrate its cost reporting structures; however, we found in January 2017 that this plan did not provide a useful road map for guiding NNSA’s effort. For example, we found that NNSA did not define strategies and identify resources needed to achieve its goals, which is a leading practice for strategic planning. NNSA’s plan contained few details on the elements it must include, such as its feasibility assessment, estimated costs, expected results, and an implementation timeline. We concluded that, until a plan is in place that incorporates leading strategic planning practices, NNSA cannot be assured that its efforts will result in a cost collection tool that produces reliable enterprise-wide cost information that satisfies the information needs of Congress and program managers. We recommended that NNSA develop a plan for producing cost information that fully incorporates leading planning practices. NNSA agreed with our recommendation. 
In addition, as we have previously noted, quality data are needed for DOE to manage its risk of fraud. The Fraud Reduction and Data Analytics Act of 2015 establishes requirements aimed at improving federal agencies’ controls and procedures for assessing and mitigating fraud risks through the use of data analytics. In a March 2017 report, however, we found that because DOE does not require its contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to DOE, it is not well positioned to employ data analytics as a fraud detection tool. We found that the data were not suitable either because they were not for a complete universe of transactions that was reconcilable with amounts billed to DOE or because they were not sufficiently detailed to determine the nature of costs charged to DOE. We concluded that, without requiring contractors to maintain such data, DOE will not be well positioned to meet the requirements of the Fraud Reduction and Data Analytics Act of 2015 and manage its risk of fraud and other improper payments. We recommended that DOE require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. DOE did not concur with our recommendation. According to DOE, the recommendation would establish agency-specific requirements for DOE contractors that are more prescriptive than current federal requirements, and its M&O contractors, not DOE, are responsible for performing data analytics and determining what data are needed to do so. DOE’s response to our recommendation is concerning because it demonstrates that DOE does not fully appreciate its responsibility for overseeing contractor costs. We believe that the use of data-analytic techniques by DOE employees could help mitigate some of the challenges that limit the effectiveness of DOE’s approach for overseeing M&O contractor costs. 
However, effectively applying data analytics depends on the availability of complete and sufficiently detailed contractor data. Therefore, by implementing our recommendation, DOE could take the steps necessary to require that contractors maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. Program and Project Management Although, as mentioned previously, DOE has taken some steps to improve program and project management, our recent work has shown that DOE continues to face several challenges in these areas. Specifically, on program management: In November 2017, we found that NNSA had established program management requirements, such as developing cost and schedule estimates for its uranium, plutonium, tritium, and lithium programs, and had established managers’ roles and responsibilities for these programs. However, officials told us that the programs had not fully met these requirements primarily because of staff shortages. We recommended that NNSA determine the critical staff skills it will need for these programs and use that information to address staffing shortages. NNSA agreed with our recommendation. In a September 2017 report on NNSA’s uranium program, we found that NNSA had not developed a complete scope of work, a life-cycle cost estimate, or an integrated master schedule for the overall uranium program—all of which are considered leading practices—and it had no time frame for doing so. We reported that NNSA plans to do so for the specific Uranium Processing Facility project, as required by DOE’s project management order. However, NNSA had not developed a complete scope of work for key program requirements, including important and potentially costly repairs and upgrades to existing buildings in which NNSA intends to house some uranium processing capabilities. 
We concluded that because NNSA had not developed a complete scope of work for the overall uranium program, it did not have the basis to develop a life-cycle cost estimate or an integrated master schedule for the entire uranium program, which runs counter to best practices identified in GAO’s cost estimating and scheduling guides. We recommended that NNSA set a time frame for completing the scope of work, life-cycle cost estimate, and integrated master schedule for the overall uranium program. NNSA generally agreed with this recommendation and has ongoing efforts to complete these actions. In September 2017, we found that DOE’s program to re-establish the production of a plutonium isotope used to provide electrical power for National Aeronautics and Space Administration missions had made progress but that it faced a number of technical and organizational challenges to meeting production goals. Specifically, we found that NNSA had not developed an implementation plan that identifies milestones and interim steps that can be used to demonstrate progress in meeting production goals. Our prior work has shown that plans that include milestones and interim steps help an agency to set priorities, use resources efficiently, and monitor progress in achieving agency goals. In our September 2017 report, we made three recommendations, including that DOE develop such a plan for its plutonium isotope production approach and that DOE assess the long-term effects of known production challenges and communicate these effects to the National Aeronautics and Space Administration. DOE concurred with our recommendations. Our prior work also demonstrates that DOE continues to face project management challenges in terms of having reliable performance data or conducting reliable analyses of alternatives. Specifically, in a January 2018 report, we found management challenges associated with NNSA’s life extension programs (LEP). 
For example, we found that NNSA had begun implementing requirements for using earned value management (EVM)—a tool used across industry and government for conducting cost and schedule performance analysis—in three LEPs, but it had not adopted a key best practice that could help the agency better manage risk for LEPs. Specifically, we found that NNSA does not require an independent team to validate the EVM systems used by NNSA’s contractors for LEPs against the national EVM standard. We concluded that without requiring validation of EVM systems, NNSA may not have assurance that its LEPs are obtaining reliable EVM data for managing their programs and reporting their status. We recommended that NNSA require an independent team to validate contractor EVM systems used for LEPs. NNSA agreed with our recommendation but stated that it already relies on a DOE project management office to independently validate contractor EVM systems. However, as we reported, DOE has not independently validated contractor EVM systems at six of the seven contractor sites that are responsible for conducting LEP activities. In May 2015, we reported that DOE initiated a new project, the Low Activity Waste Pretreatment System project, to accelerate waste treatment at Hanford. We found that this project was selected on the basis of similar past proposals without consideration of other potentially viable alternatives, contrary to requirements in DOE’s project management order. We also reported that DOE’s cost and schedule estimates for completion of the project were not conducted according to best practices and were therefore not reliable. We recommended that DOE re-evaluate alternatives and that it revise the cost and schedule estimates in line with best practices. DOE generally agreed with our recommendations but not with some of the conclusions. 
In September 2017, amid concerns about project cost growth and schedule delays, DOE directed the contractor to conduct a new analysis of alternatives to identify options that will allow the project to be completed within current cost and schedule estimates. The department has suspended work on the project pending a decision on its design. We will continue to monitor EM’s management and oversight of its operations activities and DOE’s risk-informed cleanup decisions to address environmental liabilities, as part of our ongoing work for this subcommittee. Major Legacy Projects As previously mentioned, in response to a 2015 memorandum on project management policies from the Secretary of Energy, DOE instituted project management reforms that—if fully implemented—will help ensure that future projects are not affected by the challenges that have persisted for DOE’s major legacy projects. Although DOE has taken action on certain major projects, we found that it has not consistently applied these reforms, and in particular, DOE has not applied such reforms to its largest legacy cleanup project at its Hanford Site in Washington state. As we found in a May 2015 report, DOE continues to allow construction of certain Waste Treatment and Immobilization Plant (WTP) facilities at DOE’s Hanford Site before designs are 90 percent complete. This contrasts with DOE’s revised project management order that now requires a facility’s design to be at least 90 percent complete before establishing cost and schedule baselines and cost and schedule estimates that meet industry best practices. The WTP is DOE’s largest project, and it has faced numerous technical and management challenges that have added decades to its schedule and billions of dollars to its cost. 
We recommended in May 2015 that DOE (1) consider whether to limit construction on the WTP until risk mitigation strategies are developed to address known technical challenges, and (2) determine the extent to which the quality problems exist, in accordance with its quality assurance policy, for the facilities’ systems that have not been reviewed to determine if additional vulnerabilities exist. However, as of September 2016, DOE had not yet implemented our recommendations. In December 2016, DOE announced that the cost estimate for one portion of the WTP—the part needed to treat a fraction of the low-activity waste—had increased to nearly $17 billion. We are currently in the process of completing a report on DOE’s WTP quality assurance program. NNSA’s Nonproliferation Program Faces Performance Measurement and Program Management Challenges Our previous work has found that NNSA also faces challenges implementing its nonproliferation programs under its Office of Defense Nuclear Nonproliferation (DNN), which implements nuclear nonproliferation programs worldwide. In recently completed reviews of DNN programs, we have identified several challenges NNSA faces in how it measures performance and conducts program management of these efforts. Specifically, in September 2017, we found that four DNN programs did not have schedule and cost estimates covering their planned life cycles and did not measure performance against schedule and cost baselines as is recommended by program management leading practices. NNSA officials explained that in general this is due in part to high levels of uncertainty in planning the selected programs’ work scope or schedules, particularly in working with partner countries; however, we noted that uncertainty should not prevent these programs from establishing more complete or longer-term estimates to account for the time and resources they need to achieve their goals and track their performance. 
In addition, we observed that DOE’s cost estimating guide, which applies to NNSA programs, describes approaches for programs to incorporate risk and uncertainty in estimates. But we found that DNN’s program management policy, which was updated in February 2017, did not outline requirements for programs to establish life-cycle estimates or measure performance against schedule and cost baselines. We recommended that DNN revise its program management policy to require DNN programs to follow life-cycle program management, such as requiring life-cycle estimates and measuring against baselines. Updating the DNN policy to include requirements and guidance on cost estimating and tracking performance against schedule and cost baselines could help ensure that NNSA managers and Congress have better information on (1) how much DNN programs may cost, (2) the time they may need to achieve their goals, and (3) how effectively they are being executed compared to plans. Although NNSA neither agreed nor disagreed with the recommendation, it indicated that it plans to take action to revise its policy to address the recommendation. In February 2017, we found that NNSA was unable to demonstrate the full results of its research and development technology for preventing nuclear proliferation. Specifically, we reported that DNN’s Research and Development program did not consistently track and document projects that result in technologies being transitioned or deployed. Furthermore, we found that DNN’s Research and Development project performance was difficult to interpret because the program’s performance measures did not define criteria or provide context justifying how the program determined that it met its targets. We concluded that this, in turn, could hinder users’ ability to determine the program’s progress. 
NNSA officials said that final project reports did not document their assessment of performance against baseline targets and that there was no common template for final project reports. We noted that documenting assessments that compare final project performance results against baseline targets for scope of work and completion date could enhance NNSA’s ability to manage its programs in accordance with these standards. We also concluded that more consistently tracking and documenting the transitioned and deployed technologies that result from DNN’s projects could also facilitate knowledge sharing within DNN. This would also provide a means by which to present valuable information to Congress and other decision makers about the programs’ results and overall value. We recommended that NNSA consistently track and document results of DNN Research and Development projects and document assessments of final project results against baseline performance targets. NNSA agreed to take actions in response to both recommendations. In June 2016, we found that the Nuclear Smuggling Detection and Deterrence (NSDD) program had developed a program plan but that the plan did not include measurable goals and performance measures aligned to the goals. As a result, we concluded that the NSDD program may not be able to determine when it has fully accomplished its mission and risked continuing to deploy equipment past the point of diminishing returns. We recommended that NSDD develop a more detailed program plan that articulates when and how it will achieve its goals, including completing key activities, such as the deployment of radiation detection equipment to partner countries. NNSA agreed with this recommendation. Chairman Upton, Ranking Member Rush, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. 
GAO Contact and Staff Acknowledgements If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Nico Sloss, Assistant Director; Nathan Anderson; Allison Bawden; Natalie Block; Mark Braza; Antoinette Capaccio; Jenny Chow; Ricki Gaber; Jonathan Gill; William Hoehn; Cristian Ion; Amanda Kolling; and Diane LoFaro. Related GAO Products The following is a selection of GAO’s recent work assessing the Department of Energy’s management efforts, including at the National Nuclear Security Administration and at the Office of Environmental Management: Nuclear Weapons: NNSA Should Adopt Additional Best Practices to Better Manage Risk for Life Extension Programs. GAO-18-129. Washington, D.C.: January 30, 2018. Nuclear Weapons: NNSA Needs to Determine Critical Skills and Competencies for Its Strategic Materials Programs. GAO-18-99. Washington, D.C.: November 14, 2017. Nuclear Nonproliferation: NNSA Needs to Improve Its Program Management Policy and Practices. GAO-17-773. Washington, D.C.: September 28, 2017. Modernizing the Nuclear Security Enterprise: A Complete Scope of Work Is Needed to Develop Timely Cost and Schedule Information for the Uranium Program. GAO-17-577. Washington, D.C.: September 8, 2017. Space Exploration: DOE Could Improve Planning and Communication Related to Plutonium-238 and Radioisotope Power Systems Production Challenges. GAO-17-673. Washington, D.C.: September 8, 2017. Nuclear Waste: Opportunities Exist to Reduce Risks and Costs by Evaluating Different Waste Treatment Approaches at Hanford. GAO-17-306. Washington, D.C.: May 3, 2017. 2017 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-17-491SP. 
Washington, D.C.: April 26, 2017. National Nuclear Security Administration: Action Needed to Address Affordability of Nuclear Modernization Programs. GAO-17-341. Washington, D.C.: April 26, 2017. Department of Energy: Use of Leading Practices Could Help Manage the Risk of Fraud and Other Improper Payments. GAO-17-235. Washington, D.C.: March 30, 2017. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Nuclear Nonproliferation: Better Information Needed on Results of National Nuclear Security Administration’s Research and Technology Development Projects. GAO-17-210. Washington, D.C.: February 3, 2017. Nuclear Waste: Benefits and Costs Should Be Better Understood Before DOE Commits to a Separate Repository for Defense Waste. GAO-17-174. Washington, D.C.: January 31, 2017. National Nuclear Security Administration: A Plan Incorporating Leading Practices Is Needed to Guide Cost Reporting Improvement Effort. GAO-17-141. Washington, D.C.: January 19, 2017. Program Management: DOE Needs to Develop a Comprehensive Policy and Training Program. GAO-17-51. Washington, D.C.: November 21, 2016. Department of Energy: Actions Needed to Strengthen Acquisition Planning for Management and Operating Contracts. GAO-16-529. Washington, D.C.: August 9, 2016. DOE Project Management: NNSA Needs to Clarify Requirements for Its Plutonium Analysis Project at Los Alamos. GAO-16-585. Washington, D.C.: August 9, 2016. Orion Multi-Purpose Crew Vehicle: Action Needed to Improve Visibility into Cost, Schedule, and Capacity to Resolve Technical Challenges. GAO-16-620. Washington, D.C.: July 27, 2016. Department of Energy: Whistleblower Protections Need Strengthening. GAO-16-618. Washington, D.C.: July 11, 2016. Combating Nuclear Smuggling: NNSA’s Detection and Deterrence Program Is Addressing Challenges but Should Improve Its Program Plan. GAO-16-460. Washington, D.C.: June 17, 2016. 
Modernizing the Nuclear Security Enterprise: NNSA’s Budget Estimates Increased but May Not Align with All Anticipated Costs. GAO-16-290. Washington, D.C.: March 4, 2016. Weapons System Acquisitions: Opportunities Exist to Improve the Department of Defense’s Portfolio Management. GAO-15-466. Washington, D.C.: August 27, 2015. Hanford Waste Treatment: DOE Needs to Evaluate Alternatives to Recently Proposed Projects and Address Technical and Management Challenges. GAO-15-354. Washington, D.C.: May 7, 2015. DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: December 11, 2014. Modernizing the Nuclear Security Enterprise: NNSA’s Budget Estimates Do Not Fully Align with Plans. GAO-14-45. Washington, D.C.: December 11, 2013. Commercial Nuclear Waste: Effects of a Termination of the Yucca Mountain Repository Program and Lessons Learned. GAO-11-229. Washington, D.C.: April 8, 2011. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study DOE's NNSA is responsible for managing the nuclear weapons stockpile and supporting nuclear nonproliferation efforts. DOE's Office of Environmental Management's mission includes decontaminating and decommissioning facilities that are contaminated from decades of nuclear weapons production. Over the last few years, GAO has reported on a wide range of challenges facing DOE and NNSA. These challenges contribute to GAO's continuing inclusion of DOE's and NNSA's management of major contracts and projects on the list of agencies and program areas that are at high risk of fraud, waste, abuse, and mismanagement, or are in need of transformation. GAO also recently added the U.S. government's environmental liabilities to this list. This statement is based on 25 GAO reports issued from April 2011 through January 2018 and discusses (1) challenges related to the affordability of NNSA's nuclear modernization plans; (2) challenges related to DOE's environmental liability; (3) the status of DOE's efforts to improve its management of contracts, projects, and programs; and (4) challenges facing NNSA's nonproliferation programs. What GAO Found The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) faces challenges related to the affordability of its nuclear modernization programs. In April 2017, GAO found a misalignment between NNSA's modernization plans and the estimated budgetary resources needed to carry out those plans. Specifically, GAO found that NNSA's estimates of funding needed for its modernization plans sometimes exceeded the budgetary projections included in the President's planned near-term and long-term modernization budgets by billions of dollars. GAO also found that the costs of some major modernization programs—such as for nuclear weapon refurbishments—may increase and further strain future modernization budgets. 
GAO recommended in April 2017 that NNSA include an assessment of the affordability of its modernization programs in future versions of its annual plan on stockpile stewardship; NNSA neither agreed nor disagreed with that recommendation. DOE also faces challenges with addressing its environmental liabilities—the total cost of its cleanup responsibilities. In February 2017, GAO found that DOE was responsible for over 80 percent ($372 billion) of the U.S. government's estimated $450 billion environmental liability. However, this estimate does not reflect all of DOE's cleanup responsibilities, notably the future cleanup responsibilities that DOE may face. For example, in January 2017, GAO found that the cost estimate for DOE's proposal for separate defense and commercial nuclear waste repositories excluded the costs and time frames for site selection and site characterization, and therefore full costs are likely to be billions of dollars more than DOE's reported environmental liabilities. To effectively address cleanup, GAO has made at least 28 recommendations to DOE and other federal agencies, which could reduce long-term costs and address environmental risks more quickly. Of these, 13 remain unimplemented. DOE has taken several important steps that demonstrate its commitment to improving contract and project management, but challenges persist. Specifically, DOE's revised project management order, issued in May 2016, made several changes in response to recommendations GAO made in prior years, such as requiring that projects develop cost estimates and analyses of alternatives according to our best practices. However, DOE's recent efforts do not address several areas, such as acquisition planning for major contracts and aspects of program and project management, where the department continues to struggle. 
GAO has made several recommendations related to these areas, and DOE has generally agreed with and begun to take action on most of them. Finally, NNSA faces challenges in implementing its nonproliferation programs. For example, in September 2017, GAO found that selected programs in NNSA's Office of Defense Nuclear Nonproliferation (DNN) did not measure performance against schedule and cost baselines, as recommended by program management leading practices because DNN's program management policy did not require programs to measure performance in this way. GAO recommended that DNN revise its policy to require programs to measure performance against cost and schedule baselines. NNSA indicated it plans to take action to revise its policy. What GAO Recommends GAO has previously suggested that Congress consider changes to the laws governing environmental cleanup activities. In addition to these suggestions, GAO has made numerous recommendations to DOE to address its management challenges.
Background Research on Student Behavior and School Discipline The issue of who gets disciplined and why is complex. Studies we reviewed suggest that implicit bias—stereotypes or unconscious associations about people—on the part of teachers and staff may cause them to judge students’ behaviors differently based on the students’ race and sex. Teachers and staff sometimes have discretion to make case-by-case decisions about whether to discipline, and the form of discipline to impose in response to student behaviors, such as disobedience, defiance, and classroom disruption. Studies show that these decisions can result in certain groups of students being more harshly disciplined than others. Further, the studies found that the types of offenses that Black children were disciplined for were largely based on school officials’ interpretations of behavior. For example, one study found that Black girls were disproportionately disciplined for subjective interpretations of behaviors, such as disobedience and disruptive behavior. A separate study used eye-tracking technology to show that, among other things, teachers gazed longer at Black boys than other children when asked to look for challenging behavior based on video clips. The Department of Health and Human Services (HHS) reported that this research has highlighted implicit bias as a contributing factor in school discipline and may shed some light on the persistent disparities in expulsion and suspension practices, even though the study did not find that teacher gazes were indicative of how they would discipline students. Children’s behavior in school may be affected by health and social challenges outside the classroom that tend to be more acute for poor children, including minority children who experience higher rates of poverty. 
Research shows that experiencing trauma in childhood may lead to educational challenges, such as lower grades and more suspensions and expulsions; increased use of mental health services; and increased involvement with the child welfare and juvenile justice systems, according to HHS’s Substance Abuse and Mental Health Services Administration (SAMHSA). Further, a substantial share of children nationwide is estimated to have experienced at least one trauma, referred to as an adverse childhood experience (ACE), according to the National Survey of Children’s Health. Additionally, as we recently reported, there has been an increase in certain mental health issues within the school-age population. For example, from 2005 to 2014, the suicide rate of youth ages 15 to 19 rose slightly, with older youth having a much higher rate of suicide than younger youth, and since 2007, the percentage of youth ages 12-17 experiencing a major depressive episode increased. K-12 Students and Discipline About 50 million students were enrolled in K-12 public schools during the 2013-14 school year, according to the CRDC. About 90 percent of students attended traditional public schools; the remainder were enrolled at public charters, magnets, and other types of schools (see table 1). About half of all public school students were White and the other half fell into one of several minority groups, with Hispanic and Black students being the largest minority groups (see fig. 1). The number of boys and girls in public schools was almost evenly split. A larger percentage of boys were students with disabilities. Nearly half of all public school students went to schools where 50 percent or more of the students were low-income, and about a quarter went to schools where 75 percent or more of the students were low-income (see table 2). 
Discipline of students dropped between 2011-12 and 2013-14 over the six broad categories of discipline reported in Education’s CRDC, which were (1) out-of-school suspensions, (2) in-school suspensions, (3) referrals to law enforcement, (4) expulsions, (5) corporal punishment, and (6) school-related arrests. For example, in school year 2011-12 about 3.4 million (or 6.9 percent) of K-12 public school students were suspended out-of-school at least once, and in school year 2013-14 these suspensions fell to about 2.8 million (or 5.7 percent). Other disciplinary actions affected a much smaller portion of the student body—specifically, less than 0.5 percent of all K-12 public school students were expelled, referred to law enforcement, had a school-related arrest, or experienced corporal punishment in 2013-14, according to Education’s reported data. Education and Justice Enforcement Responsibilities Education’s Office for Civil Rights and Justice’s Civil Rights Division are responsible for enforcing a number of civil rights laws, which protect students from discrimination on the basis of certain characteristics (see table 3). As part of their enforcement responsibilities, both agencies conduct investigations in response to complaints or reports of possible discrimination. Education also carries out agency-initiated investigations, which are called compliance reviews and which target problems that Education has determined are particularly acute. Education may also withhold federal funds if a recipient is determined to be in violation of the civil rights laws and the agency is unable to reach agreement with the parties involved. In addition, Justice has the authority to file suit in federal court to enforce the civil rights of students in public education. 
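As a rough consistency check, the suspension counts and rates cited above imply a K-12 enrollment close to the roughly 50 million students reported in the CRDC. A minimal sketch of that arithmetic follows; the counts and rates are the figures cited in this report, while the enrollment numbers are derived for illustration, not CRDC values.

```python
# Suspension counts and rates cited in the report for each school year.
suspended = {"2011-12": 3.4e6, "2013-14": 2.8e6}  # students suspended out-of-school at least once
rate = {"2011-12": 0.069, "2013-14": 0.057}       # share of all K-12 public school students

# Enrollment implied by count / rate; both years come out near 50 million,
# consistent with the CRDC enrollment figure cited earlier in this report.
implied_enrollment = {year: suspended[year] / rate[year] for year in suspended}
for year, n in implied_enrollment.items():
    print(f"{year}: about {n / 1e6:.1f} million students")
```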
Education and Justice have also issued guidance to assist public schools in meeting their obligations under federal law to administer school discipline without unlawfully discriminating against students on the basis of race, color, or national origin. According to the guidance, public schools are prohibited by federal law from discriminating in the administration of student discipline based on protected characteristics. Further, Education and Justice have noted in their guidance that disciplinary policies and practices can result in unlawful discrimination based on race, for example, in two ways: first, if students are intentionally subject to different treatment on account of their race; and second, if a policy is neutral on its face but has a disproportionate and unjustified effect on students of a particular race, referred to as disparate impact. According to Education and Justice guidance, significant and unexplained racial disparities in student discipline give rise to concerns that schools may be engaging in racial discrimination that violates federal civil rights laws; however, data showing such disparities, taken alone, do not establish whether unlawful discrimination has occurred. Selected Recently Enacted Federal Laws with Provisions Related to School Discipline Two significant, recently enacted laws include provisions related to school discipline: the Every Student Succeeds Act (ESSA) and the Child Care and Development Block Grant Act of 2014 (CCDBG Act of 2014). ESSA, enacted in December 2015, amended Title I program requirements to allow states’ accountability systems to use multiple indicators of success, which can include measures of school climate and safety. As we previously reported in 2017, some states were considering measures related to suspension rates or school attendance. 
Additionally, ESSA amended the Elementary and Secondary Education Act of 1965 to authorize the Student Support and Academic Enrichment Program, under which school districts may use grant funding to, among other things, design and implement a locally-tailored plan to reduce exclusionary discipline practices in elementary and secondary schools. These grants also allow the use of funding to expand access to school-based mental health services, including counseling. In addition, the CCDBG Act of 2014 allows states to use certain funds to support the training and professional development of child care workers through activities such as behavior management strategies and training that promote positive social and emotional development and reduce challenging behaviors, including reducing expulsions of young children for those behaviors. Black Students, Boys, and Those with Disabilities Were Disproportionately Disciplined Regardless of Type of Discipline, Level of School Poverty, or Type of School Black students, boys, and students with disabilities were disproportionately disciplined in K-12 public schools, according to our analysis of Education’s most recent CRDC data. This pattern of disproportionate discipline persisted regardless of the type of disciplinary action, level of school poverty, or type of public school these students attended. Type of Disciplinary Action Across each disciplinary action, Black students, boys, and students with disabilities experienced disproportionate levels of discipline. Black students were particularly overrepresented among students who were suspended from school, received corporal punishment, or had a school-related arrest (see fig. 2). For example, Black students represented 15.5 percent of all public school students and accounted for 39 percent of students suspended from school, an overrepresentation of about 23 percentage points. Differences in discipline were particularly large between Black and White students. 
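The overrepresentation figures used throughout this section are simple percentage-point gaps: a group's share of the students receiving a disciplinary action minus that group's share of overall enrollment. A minimal sketch of the calculation, using the suspension figures cited above:

```python
def overrepresentation(share_of_disciplined, share_of_enrolled):
    """Overrepresentation in percentage points: a group's share of disciplined
    students minus its share of enrollment (negative = underrepresented)."""
    return share_of_disciplined - share_of_enrolled

# Figures cited in the text: Black students were 15.5 percent of enrollment
# but 39 percent of students suspended from school.
gap = overrepresentation(39.0, 15.5)
print(f"{gap:.1f} percentage points")  # 23.5, i.e., "about 23" as stated in the report
```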
Although there were approximately 17.4 million more White students than Black students attending K-12 public schools in 2013-14, nearly 176,000 more Black students than White students were suspended from school that school year. See appendix IV, table 12 for additional data on the disciplinary experiences of different racial or ethnic groups. For example, American Indian and Alaska Native students had higher than average rates of receiving each of the six disciplinary actions. This pattern of disproportionate discipline affected both Black boys and Black girls—the only racial group for which both sexes were disproportionately disciplined across all six actions. For example, Black girls were suspended from school at higher rates than boys of multiple racial groups and every other racial group of girls (see fig. 3). Further, boys as a group were overrepresented, while girls were underrepresented among students disciplined across each action. Specifically, boys accounted for just over half of all public school students, but were at least two-thirds of students disciplined across each of the six actions, according to our analysis of Education’s school year 2013-14 data. Boys were particularly overrepresented among students who received corporal punishment, by about 27 percentage points (see fig. 4). These kinds of disparities appeared as early as pre-school (see sidebar). Additional information about discipline for pre-school students is in appendix IV, table 17. Regardless of the level of school poverty, Black students, boys, and students with disabilities were suspended from school at disproportionately higher rates than their peers (see fig. 6). This was particularly acute for Black students in high-poverty schools, where they were overrepresented by nearly 25 percentage points in suspensions from school. This pattern persisted across all six disciplinary actions, as well. A similar pattern emerged for boys and students with disabilities. 
However, unlike Black students, boys and students with disabilities were particularly overrepresented among students suspended from low-poverty public schools (poverty less than 25 percent). Effect of School Poverty on Discipline GAO used a regression model to examine the independent effect of school poverty on discipline in school year 2013-14. The model showed that increases in the percentage of low-income students in a school were generally associated with significantly higher rates for each of the six disciplinary actions GAO reviewed (in-school and out-of-school suspensions, referrals to law enforcement, expulsions, corporal punishment, and school-related arrests). In these schools, boys and students with disabilities were overrepresented by approximately 24 and 20 percentage points, respectively. See appendix IV, table 14 for more information on discipline by the poverty level of the school. In addition, see sidebar for regression results that were relevant to poverty and school discipline. Full results from our regression model are in appendix I, table 10. Type of Public School Regardless of the type of public school a student attended—traditional, magnet, charter, alternative, or special education—Black students, boys, and students with disabilities were disciplined at disproportionately higher rates than their peers, with few exceptions (see fig. 7). For example, Black students were disproportionately suspended from all types of public schools, and this was particularly acute in charter schools. That is, although they represented about 29 percent of all students in charter schools, Black students accounted for more than 60 percent of the students suspended from charter schools (about 32 percentage points higher than their representation in those schools). 
Boys and students with disabilities were particularly overrepresented among students suspended from traditional public schools (roughly 19 and 14 percentage points, respectively, above their representation in traditional public schools). Effect of School Type on Discipline GAO used a regression model to examine the independent effect of attending different types of public schools on disciplinary outcomes. The model showed several significant associations between school type and the likelihood of receiving discipline. For example, attending an alternative school was associated with a significantly higher likelihood of being suspended (in-school or out-of-school), expelled, referred to law enforcement, or arrested for a school-related incident, compared to attending a traditional public school. The model also showed that students were significantly less likely to be suspended (in-school or out-of-school) if they attended a magnet, charter, or special education school as compared to a traditional public school. We found a few exceptions to the general pattern of Black students, boys, and students with disabilities receiving disproportionately high rates of discipline by school type. For example, Black students attending special education schools did not receive corporal punishment at disproportionate levels. See appendix IV, table 15 for additional information on discipline by the type of public school. In addition, see sidebar for regression results that were relevant to school type and school discipline. Full results from our regression model are in appendix I, table 10. We also found a regional component to discipline in public schools. For example, corporal punishment generally occurred in southern states. See appendix II for maps showing the rates of disciplinary actions by public school district. 
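The school-level regressions described in the sidebars above can be illustrated with a simple logistic model, in which the likelihood of a disciplinary action is modeled as a function of school poverty and school type. The sketch below is illustrative only: the data are synthetic and the variable names and coefficients are assumptions for demonstration, not GAO's actual model or results, which are documented in appendix I, table 10.

```python
import math
import random

random.seed(0)

def simulate_school():
    """Synthetic school record: poverty share, alternative-school flag, suspension outcome.
    The 'true' coefficients below are assumptions chosen for illustration."""
    poverty = random.random()                        # share of low-income students (0 to 1)
    alternative = 1 if random.random() < 0.1 else 0  # 1 = alternative school
    # Assumed relationship: higher poverty and alternative-school status
    # both raise the log-odds that a suspension occurs.
    logit = -1.0 + 2.0 * poverty + 1.5 * alternative
    p = 1.0 / (1.0 + math.exp(-logit))
    suspended = 1 if random.random() < p else 0
    return [1.0, poverty, float(alternative)], suspended

data = [simulate_school() for _ in range(5000)]

# Fit a logistic regression by gradient ascent on the log-likelihood.
beta = [0.0, 0.0, 0.0]  # intercept, poverty, alternative-school indicator
for _ in range(200):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        for j in range(3):
            grad[j] += (y - p) * x[j]
    beta = [b + 0.5 * g / len(data) for b, g in zip(beta, grad)]

# Positive estimated coefficients indicate a higher likelihood of suspension.
print([round(b, 2) for b in beta])
```

In this kind of model, a positive coefficient on the poverty share or on a school-type indicator corresponds to the associations the sidebars describe, such as higher discipline rates in higher-poverty and alternative schools relative to traditional public schools.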
Five Selected Districts Reported Changing Their Approach to Discipline in Order to Address Student Behavior Challenges Selected School District and School Officials Said Complex Issues Confronting Students Make It Challenging to Address Student Behavior We spoke with school officials at five school districts about how they are addressing discipline, including challenges they face in responding to student conduct given the complex issues influencing student behavior. Several school officials noted a range of complex issues, including the effects of poverty, mental health issues, and family dysfunction, that they said contributed to behavior that leads to discipline (see fig. 8). For example, officials at a high-poverty Georgia high school said that their students have additional responsibilities, such as raising or watching siblings or working to support their family, which may cause students to be late to, or skip, class. This observation is consistent with our recent report on child well-being, which cited research showing that children in poverty are more likely to face academic and social challenges than their peers, and with our analysis of CRDC data, which showed that rates of chronic absenteeism (being absent 15 or more days in a school year) were higher in high-poverty schools. See appendix IV, table 19 for detailed data on chronic absenteeism. At one high school in Georgia, officials said that attendance issues were the reason for a majority of disciplinary actions at their school. They said that if students were repeatedly late to school or did not get to their next class within the set amount of time, students could amass enough infractions to warrant suspension from school. In contrast, an official at an elementary school in Georgia said that they usually do not discipline their students for being late to school, as they have found that it was often due to circumstances beyond the child’s control. 
According to several school officials, some groups, such as homeless youth, American Indian students, and Lesbian, Gay, Bisexual, Transgender, or Questioning (LGBTQ) students, have had greater attendance problems than others. For example, education officials in California said that homeless and foster youth frequently miss school because of all the transitions and instability in their lives. In a school in Texas, officials also reported attendance issues with students who are homeless or in foster care because they lack transportation and clothing. Similarly, we previously reported that American Indian students face school attendance challenges, including access to reliable transportation. In addition, American Indian and Alaska Native students had the highest rates of chronic absenteeism in school year 2013-14, compared to students of other races, according to our analysis of CRDC data (see appendix IV). LGBTQ students are at a high risk of suicide and other emotional issues during adolescence, and often feel disconnected from their peers and families, according to county education officials in California. According to these officials, this can contribute to attendance problems. Officials in our five selected school districts also described what they perceived as a growing trend of behavioral challenges or provided examples related to mental health and trauma, such as increased anxieties, thoughts of and attempts at suicide, and depression among students. For example, state education officials in Georgia said they viewed a growing number of their students as being “trauma complex.” Officials at one school in Massachusetts said that they involve the mental health clinicians or social worker for additional support when students are dealing with traumatic experiences, depression, or are struggling to self-regulate. 
Further, officials at another school in Massachusetts said that many of their students have experienced trauma and this may lead to more aggressive behaviors at the elementary school level, and to more self-destructive behaviors at the middle school level. Specifically, these officials said that children who have experienced trauma may kick, bite, and punch others when they are younger and cut themselves or become suicidal when older. Similarly, officials at a school in Texas said that they have seen a growth in suicidal ideation and self-harm among the students. Some school officials also said that they felt ill-equipped or that schools lacked resources to deal with the increase in students with mental health issues and the associated behaviors. School officials in all five of the selected states also said that social media results in conflicts or related behavioral incidents among students, such as bullying and arguments. Officials at a school in Georgia said that social media arguments can cause students who were not part of the original situation to be pulled in, creating classroom disruptions that end in discipline for a larger group. Moreover, officials in a North Dakota middle school said that disagreements on social media last for longer periods of time. They said that social media has also been used to facilitate the purchase of illegal drugs, which can result in students being arrested in school and expelled. Use of Corporal Punishment in School for Five Selected States California, Massachusetts, and North Dakota: Corporal punishment in schools is prohibited. Texas: If a school district adopts a policy to permit corporal punishment, school staff may use corporal punishment unless the student’s parent has provided a written, signed statement prohibiting it. None of the schools GAO visited used corporal punishment, according to officials. 
Georgia: Boards of education are authorized to determine policies related to corporal punishment, including allowing school staff, at their discretion, to administer corporal punishment in order to maintain discipline. However, none of the schools GAO visited used corporal punishment, according to officials. School district officials from three of the five selected districts we visited stated that officials at individual schools generally have a lot of discretion in determining what discipline a student receives. In several schools, officials said they often try other avenues first to address behavior, such as detention, alerting or having a discussion with the parent, or taking away certain privileges such as making the student eat lunch with the teacher instead of with their friends. However, for certain offenses, officials in most districts said that discipline was automatically more severe. Gun possession, for example, prompts an automatic expulsion at most of the school districts we visited. In another example, school district officials in Texas said drug-related incidents, physical assault of a teacher or student, or extreme sexual behaviors can result in a student being placed in an alternative school. School officials at one alternative school we visited stated that 80 to 90 percent of their students are there due to drug-related incidents. Officials in several of the school districts said their districts had School Resource Officers who only become involved in school disciplinary issues when requested by school administrators. In a Texas high school with over 3,800 students, a school official said School Resource Officers patrol school grounds, monitor gang activity, and may become involved when there are illegal drug issues. Officials also said that School Resource Officers sometimes provide trainings for students, parents, or school staff on subjects such as safety, good decision making, substance abuse, and peer pressure. 
Further, although corporal punishment was legal in two of the five states we visited (see sidebar), the school district officials with whom we spoke in those states said it was not used anymore in their districts. Our analysis of schools nationwide using school year 2013-14 data showed that corporal punishment tended to be most prevalent in southern states (see maps in appendix II). All Selected School Districts Described Changing Their Approach to Discipline While there is no one-size-fits-all solution to addressing challenging student behavior, or to the evident disparities in discipline for certain student groups, officials in two school districts we visited told us they recognize the importance of finding alternatives to discipline that unnecessarily removes children from the learning environment. Some school officials said they have begun to specifically address disparities for certain student groups. Officials in all selected school districts reported they are implementing efforts to better address student behavior or reduce the use of exclusionary discipline. For example, officials in all school districts said that they are implementing alternative discipline models that emphasize preventing challenging student behavior and focus on supporting individuals and the school community, such as positive behavioral interventions and supports (PBIS), restorative justice practices, and social emotional learning (SEL) (see sidebar). For example, officials at a selected school district in Texas said they have implemented a classroom management model that uses positive behavior techniques. Texas state law allows schools to develop and implement positive behavior programs as disciplinary alternatives for very young students. 
This was also true in California, where state law specifically lists suggested alternatives to suspension, including restorative justice, a positive behavior support approach with tiered interventions, and enrollment in programs that teach positive social behavior or anger management. Examples of Alternatives to Discipline that Removes Students from the Classroom Positive Behavioral Interventions and Supports (PBIS): A school-wide framework that focuses on positive behavioral expectations. By teaching students what to do instead of what not to do, the school can focus on the preferred behaviors. All of the selected school districts used some form of positive behavioral intervention and supports. One school official told us that PBIS has significantly reduced their discipline referral numbers and provided teachers more tools to get behavior situations under control. Restorative Justice Practices: This approach focuses on repairing harm done to relationships and people. The aim is to teach students empathy and problem-solving skills that can help prevent inappropriate behavior in the future. For example, according to officials we interviewed at one school, their restorative practices help students take ownership of their actions and work collaboratively to restore relationships that may have been strained. Officials at another school said schools use mediation techniques as alternatives to suspensions. Social and Emotional Learning (SEL): SEL enhances students’ abilities to deal effectively and ethically with daily tasks and challenges. SEL integrates the following five core competencies: self-awareness, self-management, social awareness, relationship skills, and responsible decision making. At a school implementing this model, officials said that they are strengthening their SEL program to improve the whole child instead of treating discipline and mental and behavioral health separately. 
With regard to directly addressing disparities in school discipline, officials at one school district in California said they created a new leadership team for equity, culture, and support services, and developed a district-wide equity plan that includes mandatory training on implicit bias for principals. Officials from that district also said they had recently changed a policy to increase the consistency of discipline actions across the district’s schools. Similarly, officials at a school district in Massachusetts reported they were working to build awareness among school leadership to address racial bias and the achievement gap through multiyear trainings. Officials we spoke with at a school within that district said they conduct trainings for staff on implicit bias and other related issues to reduce school discipline disparities. As some of the schools and districts we visited have begun implementing alternative discipline models and efforts to reduce the use of exclusionary discipline in recent years, we heard from officials in two districts that there has been difficulty with implementation due to limited resources, staffing turnover, and resistance on the part of some parents. During our visits to schools, we observed classroom spaces that school officials used to manage student behavior, including through various alternative approaches to discipline (see fig. 9). Officials in two school districts said they are moving away from exclusionary discipline because it decreases the amount of academic instruction. Officials at one school district in Georgia said that the district had a history of overusing exclusionary discipline and they understood that schools cannot “suspend their way out of behavioral and discipline issues.” Officials at that district said they are currently rolling out PBIS to their schools, although progress has been slow. 
While they said discipline rates have decreased and they have received fewer parent and staff complaints, change is difficult because of limited resources, staff turnover, and some resistance, among both school staff and parents, to alternative rather than punitive discipline. State education officials in all five states said that changes to state laws related to school discipline were made or considered in the past several years. For example, California officials said that state law now prohibits suspensions and expulsions for children in grades K-3 for willful defiance. For all ages, suspensions may be used only when other means of correction fail to bring about proper conduct. Similarly, Massachusetts law requires that during a student meeting or a hearing to decide disciplinary consequences for a student, school administrators consider ways to re-engage students in the learning process and that expulsion only be used after other remedies and consequences have failed. Massachusetts also revised its state law effective July 2014 to require that schools provide educational services for expelled students. Georgia state law includes a preference for reassignment of disruptive students to alternative educational settings in lieu of suspending or expelling such students. In addition, most of the selected states plan to include school discipline or absenteeism as measures of school quality in their state ESSA Title I plans (see sidebar). 
Education and Justice Identify and Address School Discipline Issues by Investigating Cases, Analyzing Data, and Providing Guidance and Support Education Has Investigated and Found Instances of Discrimination and Disparities in School Discipline According to administrative data from Education, the Office for Civil Rights (OCR) resolved over 2,500 K-12 school discipline cases between 2011 and summer 2017 through several means, including voluntary resolution (leading to agreed-upon actions and subsequent monitoring), dismissal, or closure due to insufficient evidence. These cases stemmed both from external complaints and reviews self-initiated by Education. When we analyzed a non-generalizable sample of resolved cases, we found that most of them focused on alleged discrimination based on race or disability. In the four cases we selected for more in-depth review, the school districts agreed to address discipline issues by, for example, designating a discipline supervisor, training staff, revising district policies, holding student listening sessions, and regularly reviewing data to identify disparities (see case descriptions below). Some of these remedies are designed to reduce exclusionary discipline or improve overall school climate, and others are more directly focused on addressing disparities in school discipline. For example, having school leadership regularly review data, particularly when disaggregated by race and other student characteristics, would increase awareness of disparities. Education Case 1: Race and Exclusionary Discipline in a Mississippi School District. OCR’s 2014 investigation of the Tupelo Public School District found that Black students were disproportionately disciplined in nearly all categories of offenses. 
These commonly included subjective behaviors like disruption, defiance, disobedience, and “other misbehavior as determined by the administration.” The consequences for “other misbehavior” in high school could be severe, ranging from detention to referral to an alternative school. Once at the alternative school, students were searched thoroughly each day upon entry, escorted by security officers when changing classes, and not allowed to carry purses or book bags. OCR concluded that the district’s discipline codes afforded administrators broad discretion, and found different treatment of Black students when looking at specific disciplinary records. For example, among several students who were disciplined for the first offense of using profanity, Black students were the only ones who were suspended from school, while White students received warnings and detention for substantially similar behavior. To address these issues, the district entered into a voluntary resolution agreement whereby it committed to taking specific actions to ensure that all students have an equal opportunity to learn in school. It agreed, among other things, to revise its student discipline policies, practices, and procedures to include clear and objective definitions of misconduct, eliminate vague and subjective offense categories, and describe criteria for selection within the range of possible penalties when imposing sanctions. The district also agreed to require that alternatives to suspension and other forms of exclusionary discipline be considered in all cases except where immediate safety of students or staff is threatened, and where the behavior in question is such that the disruption to the educational environment can only be remedied by removal, or where the student’s removal is a result of the district’s progressive discipline policy. Education Case 2: Disability and Restraint & Seclusion in a Non-Public California School. 
This 2016 OCR investigation focused on restraint and seclusion of a student with disabilities who was placed at the non-public school with which Oakland Unified School District contracted to provide the student with certain services, including developing and implementing behavior intervention plans. OCR found the use of prone restraint on this student to be severe, persistent, and pervasive: staff held the student face-down 92 times over a period of 11 months, with the longest duration of a single face-down restraint being 93 minutes. Examples of behaviors that led to the use of restraint included disruptive behavior, not following directions, pushing desks, and ripping up assignments. Staff said that the student wanted to be disciplined and understood prone restraint to be disciplinary. OCR determined that the district allowed the student to be treated differently for non-dangerous behavior on the basis of disability. The district entered into a resolution agreement, committing to resolve these issues by offering individual relief to the student—arranging for an evaluation of the student for adverse effects of the restraint and seclusion, with recommendations for addressing areas of harm—and implementing district-wide policy changes related to restraint and seclusion. The latter included establishing a protocol for responding to any contracted non-public schools’ reports of restraining or secluding district students, and providing training on positive interventions. Excerpt from Christian County, KY Case An African American 10th grader was assigned 1-day out-of-school suspension for skipping school. In comparison, a white 12th grader was assigned a conference with the principal for skipping school. The African American student had 19 previous disciplinary referrals, while the white student had 28 previous disciplinary referrals. 
Education reported that it would be difficult for the district to demonstrate how excluding a student from attending school in response to the student’s efforts to avoid school meets an important educational goal. Education Case 3: Race and Exclusionary Discipline in a Kentucky School District. In this 2014 case, OCR found that Christian County School District disciplined Black students more frequently or harshly than similarly situated White students. Specifically, Black students were more than 10 times more likely than White students to receive out-of-school suspension for disorderly conduct, and Black students were more likely to be assigned to an “Isolated Classroom Environment” when discipline was for a violation that afforded discretion. OCR also found that the district’s discipline code did not define 61 types of violations, including ones that involve interpretation, such as disorderly conduct, failure to follow directions, deliberate classroom disruption, and profanity. OCR found that administrators had wide discretion in determining the consequences for such actions, and noted that the discipline code allowed for virtually every type of sanction, including expulsion, for each type of violation. OCR also found inconsistencies in treatment of students in different racial groups when looking at individual records (see sidebar). Although district officials said they were aware of the higher rates of discipline for Black students, OCR found that there were no safeguards to ensure that discretion would be exercised in a nondiscriminatory manner. To resolve these issues, the district agreed to ensure as much as possible that misbehavior is addressed in a way that avoids exclusionary discipline, collaborate with experts on research-based strategies to prevent discrimination in discipline, and provide support services to decrease behavioral difficulties, among other things. Education Case 4: Race and Informal Removals in a California Charter School. 
In this 2015 case, OCR investigated whether Black students were disproportionately disciplined at a charter school that emphasizes Hmong culture and language. The complaint noted that the student’s parents had been asked to take him home on a few occasions because he was disruptive in class. School administrators confirmed the practice of “early dismissal” in response to misbehavior, but said they did not consider the dismissal to be disciplinary. Because the school did not maintain records of these removals, OCR was unable to determine if the student was subjected to discriminatory discipline. However, OCR noted that the practice of removing students from school for disciplinary reasons without appropriate recordkeeping and due process makes it almost impossible for the school to assess whether it is fully meeting its duty of ensuring nondiscrimination with respect to discipline. To resolve these issues, the school agreed, among other things, to revise its discipline policies, provide due process and alternatives to exclusionary discipline, and clearly prohibit the kinds of informal suspensions that OCR observed. Justice Has Investigated Discrimination in School Discipline Based on Long-standing Desegregation Orders and Public Complaints Justice also investigates discrimination in school discipline based on complaints filed under federal civil rights statutes and as part of monitoring desegregation orders. Three recently resolved cases investigated exclusionary discipline or restraint and seclusion for students of color and those with disabilities (see case descriptions below). Justice Case 1: Race and Exclusionary Discipline in an Arkansas School District. This Justice case, originally stemming from a desegregation order, focused on whether the Watson Chapel School District was discriminating against Black students in its administration of school discipline. 
Justice found that the district suspended and expelled Black students at significantly higher rates than White students, and that district policies and procedures were responsible for this difference. The parties signed a Consent Order in 2016, under which the school district agreed to implement positive interventions and supports, transition away from exclusionary discipline, revise the code of conduct to list specific levels of disciplinary infractions and consequences, prohibit corporal punishment, establish a memorandum of agreement with any law enforcement agency that supplies school resource officers, and provide training to staff. In addition, the district agreed to provide due process before students receive out-of-school suspensions, expulsions, or referrals to the alternative education program because of disruptive behavior. Justice Case 2: Race and Disability in a Maryland School District. Justice investigated complaints that discipline policies in the Wicomico County Public School District resulted in the discriminatory suspension of Black and Latino students and students with disabilities. After the investigation, Justice and the district negotiated and entered into a voluntary out-of-court settlement agreement in January 2017. The district agreed to hire a consultant to implement positive behavioral interventions and supports and restorative practices, revise the code of conduct to include objective definitions of behavioral infractions and incorporate alternatives to exclusionary discipline, establish clear guidelines for when law enforcement intervention is appropriate, and provide appropriate due process procedures. Justice Case 3: Race and Restraint & Seclusion in a Kentucky School District. This 2017 Justice case investigated whether Covington Independent Schools’ disciplinary practices, including the use of exclusionary discipline, restraint, and seclusion, discriminated on the basis of race, national origin, or disability. 
The parties agreed to negotiate a settlement agreement under which the district agreed to develop a process to regularly identify students who disproportionately had disciplinary referrals, with a focus on offenses that may be the result of unaddressed behavioral needs such as disruptive behavior or aggression, defiance, and being “beyond control.” The district also agreed to discontinue the use of “calm rooms” (where students are isolated during an episode of misbehavior) and prohibit the use of physical restraint except in the case of imminent danger that could not be addressed through de-escalation techniques. The district agreed to adopt an intervention procedure to meet the needs of students with disabilities who may need support beyond the standard discipline policies. In addition, if parents of students with disabilities were asked to come to the school to become involved in an ongoing instance of misbehavior, the district could no longer require the parent to take the student home unless the student had been assigned an out-of-school suspension or expulsion. Education and Justice Provide Guidance and Resources on School Discipline and Related Issues, Including How to Identify and Address Disparities Education and Justice collaborated on a “Rethink Discipline” campaign in 2014 to address what they viewed as widespread overuse of suspensions and expulsions. This awareness campaign included comprehensive guidance to help states and schools implement alternatives to exclusionary discipline, reduce discrimination, and identify root causes of disparities (see sidebar). The agencies have also collaborated to provide guidance encouraging school districts that use school resource officers to formalize partnerships with local law enforcement agencies and clarify that school resource officers should not administer discipline in schools. 
Education has also issued special guidance related to the discipline of students with disabilities, including an explanation of the requirement to provide appropriate strategies to address behavior in students’ individualized education programs (IEPs). This guidance states that when a student with a disability is regularly sent home early from school for behavior reasons, it is likely that the child’s opportunity to make progress in the general education curriculum is significantly impeded (see sidebar). The guidance states that being sent home regularly in this way constitutes a disciplinary removal, which comes with statutory reporting obligations and other considerations. The guidance also notes that disciplinary measures such as short-term removals from the current placement (e.g., suspension), or other exclusionary disciplinary measures that significantly impede the implementation of the IEP, generally do not help to reduce or eliminate reoccurrence of the misbehavior. For further information on available federal guidance related to discipline in public schools, see appendix III. Education and other federal entities have also awarded grants and established special initiatives related to student behavior and school discipline, many of which started around the same time as the federal Rethink Discipline campaign and were designed to be complementary. For example, Education awarded about $130 million from 2014-2016 to states and school districts through the School Climate Transformation Grant, which was established in 2014 to support districts taking steps to improve behavioral outcomes. According to Education, nearly 3,000 schools have worked to implement these behavioral support systems through the grant, and preliminary outcomes data have shown increased student attendance and fewer disciplinary referrals. 
In addition, Education awarded about $68 million for fiscal years 2015-2019 to over 20 school districts under Project Prevent—a grant to promote conflict resolution skills in students, particularly when they have been exposed to pervasive violence. According to the districts’ grant summary documents, these districts have experienced nearly 10,000 fewer violent behavioral incidents and have provided access to mental health services for over 5,000 students. Justice’s research arm, the National Institute of Justice, also started the Comprehensive School Safety Initiative in 2014 and has since provided about $84 million to fund nearly 40 research projects and interventions that address school discipline and safety, such as implementing restorative practices and studying the root causes of the school-to-prison pipeline. More recently, Education collaborated with HHS to fund the Pyramid Equity Project for early learning programs, which is designed to address implicit bias in school discipline, implement culturally responsive practices in addressing student behavior, and use data systems to understand equity issues. For ongoing technical assistance related to student behavior and school discipline, Education sponsors centers on supportive learning environments, improving student engagement and attendance, and implementing positive behavioral interventions and supports (PBIS). For example, the National Center on Safe Supportive Learning Environments provides information and resources on addressing school discipline, mental health, substance abuse, physical safety, student engagement, and other related issues. Justice funds a technical assistance center on school-justice partnerships that works to enhance collaboration among schools, mental and behavioral health specialists, and law enforcement officials. 
This center recently published a bulletin on the intersection of exclusionary school discipline and the juvenile justice system, which offers tips for judges who handle school-related cases and information on successful efforts to reduce the number of school-based referrals to law enforcement. For a list of other technical assistance centers related to student behavior or discipline, see appendix III. Lastly, to help identify discipline disparities among the nation’s schools, Education collects comprehensive data on school discipline every other year through the CRDC. The agency publicly releases highlights from these data through its “First Look” documents and in annual reports, which typically focus on a limited number of disciplinary actions (primarily suspensions) and student demographics (usually race and disability status). Education’s public analyses of school discipline data have not included school characteristics like poverty level or type of school. Education encourages districts and schools to disaggregate their data by various student demographics and examine them for disparities. In addition, Education’s Office of Special Education and Rehabilitative Services recently examined racial and ethnic disparities for students with disabilities using data collected under IDEA, Part B. This IDEA report provides the public with information on whether districts had significant disproportionality on the basis of race or ethnicity in the discipline of students with disabilities. Agency Comments, Third Party Views, and Our Evaluation We provided a draft of this report to the Departments of Education and Justice for review and comment. These agencies provided technical comments, which we incorporated as appropriate. We also provided selected draft excerpts to relevant officials we interviewed in state agencies, school districts, and schools. 
We received technical comments from those officials in four of our five selected states, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, the Secretary of Health and Human Services, the Attorney General, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to examine (1) the patterns in disciplinary actions among public schools, (2) the challenges selected school districts reported with student behavior and how they are approaching school discipline, and (3) the actions the Department of Education (Education) and the Department of Justice (Justice) have taken to identify and address any disparities or discrimination in school discipline. To conduct this work we (1) analyzed federal discipline data by student demographics and school characteristics; (2) visited five school districts to provide illustrative examples of approaches to school discipline; and (3) interviewed federal agency officials and reviewed agency documentation, federal laws, regulations and policies, selected state laws, and a selection of resolved school discipline cases. 
To inform all aspects of our work, we interviewed representatives from several nonfederal civil rights organizations and advocacy organizations that represent parents and families, individuals with disabilities, and people from specific racial or ethnic backgrounds, such as Hispanic, African-American, and American Indian communities. We also met with academic subject matter experts to discuss issues related to school discipline, including disparities in school discipline and initiatives intended to reduce exclusionary discipline. In addition, we reviewed two dozen articles containing research that had been published since 2010 to further understand the context of school discipline issues and programs. We evaluated the methods used in the research and eliminated the research if we felt the methods were not appropriate or rigorous. The following sections contain detailed information about the scope and methodology for this report. Analysis of School Discipline National Data To determine the patterns in disciplinary actions among public schools, we used Education’s Civil Rights Data Collection (CRDC) to analyze discipline data from all public schools by student demographics (e.g., race, sex, disability) and school characteristics (e.g., school type, such as charter or magnet school). Our analyses of this data, taken alone, do not establish whether unlawful discrimination has occurred. The CRDC is a biennial survey that is mandatory for every public school and district in the United States. Conducted by Education’s Office for Civil Rights, the survey collects data on the nation’s public schools (pre-K through 12th grade), including disciplinary actions as well as student characteristics and enrollment, educational and course offerings, and school environment, such as incidents of bullying. CRDC data are self-reported by districts and schools, and consequently there is potential for misreporting of information. 
In school years 2011-12 and 2013-14, the CRDC collected data from nearly every public school in the nation (approximately 17,000 school districts, 96,000 schools, and 50 million students in school year 2013-14). Using the public-use data file of the CRDC, we focused our analysis primarily on data for school year 2013-14, the most recent data available at the time of our analysis. We also compared disciplinary data from school years 2011-12 and 2013-14 to analyze how discipline may have changed over that period. The 2013-14 CRDC collected data on six broad types of disciplinary actions: (1) out-of-school suspensions, (2) in-school suspensions, (3) referrals to law enforcement, (4) expulsions, (5) corporal punishment, and (6) school-related arrests. The CRDC did not collect data on less severe forms of discipline, such as detentions, Saturday school, or removing privileges to engage in extracurricular activities, such as athletic teams or field trips. As shown in table 4, we combined related variables for out-of-school suspension and expulsion; we also provide a crosswalk of discipline variables used in this report and those captured in the CRDC. For each of the six disciplinary actions in our review, we examined discipline counts and rates both overall and disaggregated by student demographic characteristics. Specifically, we examined counts and rates for each disciplinary action by student sex (boy or girl), race or ethnicity (see table 5), disability status (students with or without disabilities), and English Language Learners. Using the CRDC, we also examined race and sex intersectionally, for example, disciplinary rates for Black boys or White girls. In order to analyze discipline counts and rates by the poverty level of the school, we pulled in data on free or reduced-price lunch eligibility from the 2013-14 Common Core of Data (CCD), and matched it to schools in the 2013-14 CRDC, which did not collect eligibility data. 
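The matching step described above can be sketched in a few lines. This is a minimal illustration, not GAO's actual code; the school identifiers, field names (school_id, frl_pct), and values are hypothetical.

```python
# Sketch of attaching CCD free or reduced-price lunch (FRL) eligibility
# to CRDC school records. All identifiers and values are made up.

ccd = {  # school_id -> percent of students eligible for FRL
    "A001": 18.0,
    "A002": 62.5,
}

crdc = [
    {"school_id": "A001", "oos_suspensions": 12, "enrollment": 400},
    {"school_id": "A002", "oos_suspensions": 55, "enrollment": 500},
    {"school_id": "A003", "oos_suspensions": 3, "enrollment": 90},
]

# Attach the FRL percentage where a match exists; a school with no CCD
# match (like A003 here) cannot be placed in a poverty-level group.
matched = [dict(s, frl_pct=ccd[s["school_id"]])
           for s in crdc if s["school_id"] in ccd]

print(len(matched))  # 2
```

The matched records can then be sorted into poverty-level groups based on the attached FRL percentage.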
The CCD is administered by Education’s National Center for Education Statistics, and annually collects nonfiscal data about all public schools in the nation. A student is generally eligible for free or reduced-price lunch based on federal income eligibility guidelines that are tied to the federal poverty level and the size of the family. State education agencies supply these data for their schools and school districts. We then sorted schools into quartiles based on the percentage of students eligible for free or reduced-price lunch as follows: 0 to 25 percent, 25.1 to 49.9 percent, 50 to 74.9 percent, and 75 to 100 percent (see table 6). The poverty thresholds and measure of poverty discussed here and throughout this report were commonly used in the literature and also aligned with how Education analyzed its data. To analyze discipline counts and rates by the type of public school a student attended, we sorted schools into mutually exclusive categories and reviewed disciplinary data by student demographic information. The 2013-14 CRDC allowed schools to self-identify as special education, magnet, charter, and alternative schools (see table 7). The categories of public schools in the CRDC were not mutually exclusive; that is, schools could select multiple school types to describe their school, such as a charter school that was also an alternative school. To create mutually exclusive categories for analytical purposes, we applied the following criteria: Alternative school: all schools that selected “alternative” as the school type in the CRDC, even if they selected other types as well. Special education school: schools that selected “special education” as the school type in the CRDC, except those schools that also selected the alternative school type. Charter school: schools that selected “charter” as the school type in the CRDC, except those schools that also selected the alternative school type and/or the special education school type. 
Magnet school: schools that selected “magnet” as the school type in the CRDC, except those schools that also selected the alternative school type, the special education school type, and/or the charter school type. Traditional school: schools that did not select any other school type in the CRDC. Table 8 provides the breakdown of students and schools captured in the 2013-14 CRDC after applying these criteria. For each of our school discipline analyses, we also examined disparities in disciplinary rates by student demographics. Specifically, we compared each student groups’ representation among students disciplined to their representation in the overall student population. For example, if boys accounted for 50 percent of all K-12 public school students, but represented 75 percent of students that received a given disciplinary action, then boys would be overrepresented among students that received that type of discipline by 25 percentage points. We also compared disciplinary rates across student groups and similarly examined disparities based on school poverty level and school type for all students. We also analyzed CRDC data on discipline of pre-school students. The disciplinary data for pre-school students that was collected in the CRDC for school year 2013-14 was different than disciplinary data collected for K-12 students. Specifically, data on pre-school discipline was limited to out-of-school suspensions and expulsions. Findings from our analysis of pre-school discipline data are included where applicable in the report and additional data are provided in appendix IV, table 17. In addition to analyzing data on school discipline, we also analyzed data on chronic absenteeism, which was defined as students who were absent 15 or more days during the school year for any reason, which could include for suspensions and expulsions. The CRDC also collected data on instances in which students were restrained—both physically and mechanically—or secluded at school. 
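Two of the computations described above can be sketched briefly: collapsing the CRDC's overlapping school-type flags into mutually exclusive categories, and measuring a group's overrepresentation in percentage points. This is an illustrative sketch of the stated rules, not GAO's actual code.

```python
# Sketch of the mutually exclusive school-type rules and the
# percentage-point overrepresentation calculation described above.

def school_type(flags):
    """Apply the precedence order: alternative > special education >
    charter > magnet; schools with no flag are traditional."""
    for t in ("alternative", "special education", "charter", "magnet"):
        if t in flags:
            return t
    return "traditional"

def overrep_pp(share_disciplined, share_enrolled):
    """Overrepresentation in percentage points: a group's share of
    disciplined students minus its share of enrollment."""
    return share_disciplined - share_enrolled

# A school flagged as both charter and alternative counts as alternative.
print(school_type({"charter", "alternative"}))  # alternative
# Boys: 50 percent of enrollment but 75 percent of students suspended
# gives an overrepresentation of 25 percentage points.
print(overrep_pp(75.0, 50.0))  # 25.0
```

The same helper applies to any group and disciplinary action; a negative result indicates underrepresentation.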
Education has provided a resource document with principles to school districts that indicates restraint and seclusion should only be used in instances where a student’s “behavior poses imminent danger of serious physical harm to self or others,” and should never be used as punishment or discipline. However, multiple sources, including civil rights complaints filed with Education, news stories, and other reports have alleged that these practices have been used in response to student misbehavior, in particular for students with disabilities. We included data on chronic absenteeism and restraint and seclusion in our analyses, and present related findings in appendix IV, tables 18 and 19. We determined that the data we used from the CRDC and CCD were sufficiently reliable for the purposes of this report by reviewing technical documentation, conducting electronic testing, and interviewing officials from Education’s Office for Civil Rights and National Center for Education Statistics. For our analysis of the 2013-14 CRDC, we used the final data file that was publicly available as of June 2017 because it corrected errors in the original data previously submitted by several school districts. Regression Analysis We conducted a generalized linear regression using the 2013-14 CRDC and CCD data to explore whether and to what extent certain school-level characteristics were associated with higher rates of each disciplinary action. Such a model allowed us to test the association between a given school characteristic and the percentage of students receiving a given disciplinary action, while holding other school characteristics constant. We selected different school characteristics (our independent variables) for the regression based on factors that Education’s Office for Civil Rights and other researchers have identified as potential drivers of school discipline rates (our dependent, or outcome variables). Table 9 lists the variables we included in our regression model. 
We conducted a separate regression for each of the six disciplinary actions listed as an outcome variable. We excluded some schools from our regression model. Specifically, we excluded schools that met one or more of the following criteria: Data were not available in both the CRDC and CCD data sets, and therefore we were unable to determine the percentage of students eligible for free or reduced-price lunch in these schools or whether these schools were located in rural, suburban, or urban areas. School was listed as “ungraded” in the CRDC because we could not determine if these schools offered grade 6 or above. School only offered pre-school because pre-school disciplinary data were reported separately and differently than K-12 disciplinary data in the CRDC. School identified as a juvenile justice facility in the CRDC. In the 2013-14 CRDC, schools could identify as a juvenile justice facility, and select one of the other school types in our analysis (i.e., traditional, magnet, charter, alternative, and special education schools). Due to this overlap, and because it is reasonable to expect discipline within a juvenile justice facility could function differently than discipline in other schools, we excluded these schools from our regression model. School had less than 10 students enrolled because in smaller schools minor fluctuations in the numbers of students receiving a given disciplinary action could have a large effect on disciplinary rates. In the 2013-14 data, these exclusions reduced the total number of public schools in our regression model from a universe of 95,507 public schools to 86,769 public schools. All regression models are subject to limitations and for this model the limitations included: Data we analyzed were by school rather than student. 
Consequently, we were not able to describe the association between our independent variables and a student’s rate of different disciplinary actions, while controlling for characteristics of an individual student, such as sex, race or ethnicity, disability status, or grade level. Instead, the school-level nature of the CRDC data limited our description of the associations between school characteristics and disciplinary rates to whether there was an increase, decrease, or no effect on disciplinary rates for schools with a given characteristic, controlling for other characteristics of the entire school’s population, such as percent of students who are boys or are Black. Some variables that may be related to student behavior and discipline are not available in the data. For example, in this context, it could be that parent education or household type (single- versus multiple-headed household) could be related to student behaviors, such as those that lead to receiving the six disciplinary actions we analyzed. Results of our analyses are associational and do not imply a causal relationship because, for example, CRDC data were not gathered by a randomized controlled trial, where students would be randomized to attend schools with certain characteristics. Typically, a generalized linear regression model provides an estimated incidence rate ratio, where a value greater than one indicates a higher or positive association, in this case, between the disciplinary outcome and the independent variable of interest, such as being a charter school or having a higher percentage of Black students. An estimated incidence rate ratio less than one indicates a lower incidence of a given disciplinary action when a factor is present. Given the limitations of our model as described above, we present the results of our regression model in table 10 by describing the direction of the associations, rather than an estimated rate (incidence) of disciplinary outcomes. 
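The arithmetic behind these interpretations can be illustrated with hypothetical coefficients from a log-link generalized linear model. The numbers below are invented for illustration only; they are not estimates from the report.

```python
import math

# Hypothetical log-link GLM coefficients (not the report's estimates).
beta = {
    "alternative_school": 0.40,   # indicator, vs. traditional school
    "pct_black": 0.012,           # per 1-point increase
    "pct_boys": 0.015,            # per 1-point increase
    "pct_black_x_boys": -0.0001,  # interaction term
}

# The incidence rate ratio (IRR) is exp(coefficient): an IRR above 1
# indicates a higher incidence of the disciplinary action when the
# factor is present, holding the other variables constant.
irr_alt = math.exp(beta["alternative_school"])
print(round(irr_alt, 2))  # 1.49

# A negative interaction coefficient does not imply a lower overall
# incidence: the main effects can outweigh it. For a school 10 points
# above average on both percent Black and percent boys:
net = (beta["pct_black"] * 10 + beta["pct_boys"] * 10
       + beta["pct_black_x_boys"] * 10 * 10)
print(net > 0)  # True
```

This mirrors why the direction of an interaction term has to be read relative to the main effects rather than on its own.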
For categorical variables in table 10, we provided the comparison school characteristic in brackets and italics. For example, the results in this table should be interpreted as students attending alternative schools were significantly more likely than students attending traditional schools to be suspended out of school. For continuous variables (i.e., those starting with “Percent”), the results in this table should be interpreted as the likelihood of receiving a given disciplinary action as the percentage of students in the school with a given characteristic increased. For example, as the percentage of students eligible for free or reduced-price lunch increased, we found that the likelihood of receiving each of the six disciplinary actions also increased. It should be noted that interactions (i.e., where we combine both race and sex variables) should be interpreted differently than other variables in table 10. Though an interaction may be “negative,” it does not necessarily imply that the group presented in the interaction was significantly less likely to receive the given disciplinary action because interactions are interpreted relative to the main effect of each variable in the interaction. For example, as shown in table 10, the interaction for percentage of Black boys was negative for out-of-school suspensions; however, the estimated incidence of out-of-school suspensions for a school with a higher than average percentage of Black students and a higher than average percentage of boys was positive. Since the contribution for an interaction coefficient is relative, in this example the contribution of the main effects outweighed that of the interaction, resulting in a positive effect altogether, despite the negative interaction. School District Site Visits To obtain information on how selected school districts are addressing discipline issues, including any challenges they face in doing so, we selected five school districts to serve as illustrative examples. 
To select school districts, we used CRDC data to sort school districts into categories based on district size; the presence of disparities in out-of-school suspension rates for boys, Black students, or students with disabilities; and whether the out-of-school suspension rate was increasing or decreasing between the two most recent CRDC collections. With regard to size, we collapsed several categories that Education has previously used into three groupings, each with roughly one-third of all students attending public schools in school year 2013-14: Large School District: 25,000 or more students (34.7% of all students in 2013-14) Medium School District: 5,000 to 24,999 students (33.2% of all students in 2013-14) Small School District: Less than 5,000 students (32.1% of all students in 2013-14) Further, we focused on out-of-school suspensions for selection purposes because this disciplinary action was one of the most frequently reported disciplinary actions employed by schools in Education’s two most recent data collection efforts on the issue (2011-12 and 2013-14 CRDC). Moreover, out-of-school suspensions are an exclusionary disciplinary action; that is, they remove or exclude students from the usual instructional or learning environment. Selecting districts with a range of out-of-school suspension rates was intended to generate a mix of districts that commonly use exclusionary discipline, as well as those that may employ alternatives. For site selection, we used out-of-school suspension data in two ways. First, we excluded districts that did not have a disparity in out-of-school suspension rates for Black students, boys, or students with disabilities. Prior GAO work and Education’s data showed that these groups were particularly vulnerable to discipline disparities, and the purpose of this research objective was to understand district efforts to identify and address such disparities. 
Second, we grouped school districts by whether their out-of-school suspension rate increased or decreased between 2011-12 and 2013-14. Exploring school districts that changed in different ways over time was intended to help us identify successful efforts to reduce suspensions as well as challenges districts face in addressing disparate discipline. Using the above criteria, we grouped school districts into the following categories:

Category 1 and 2: Large school district and out-of-school suspension rate that increased (or decreased) from 2011-12 to 2013-14

Category 3 and 4: Medium school district and out-of-school suspension rate that increased (or decreased) from 2011-12 to 2013-14

Category 5 and 6: Small school district and out-of-school suspension rate that increased (or decreased) from 2011-12 to 2013-14.

After sorting school districts into the above categories, we randomized the list within each category to improve the methodological rigor of selecting school districts. In addition, we applied a series of post-checks to our list of districts in each grouping to ensure we had appropriate variety to consider other key factors in school discipline. Specifically, we checked for variety in: types of public schools in the district, geographic diversity both in terms of region of the country and use of corporal punishment in the district, and use of restraint or seclusion in the district. To select specific districts, we started with the district in each category that was at the top of the randomized list and then applied the above post-checks. We then conducted outreach to district superintendents or their designees via telephone and email to obtain their agreement to participate in this review. When school districts were unresponsive to our outreach or unwilling to participate, we contacted additional districts that had similar characteristics in order to achieve variety in our final selections.
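The sorting and randomization steps above can be sketched in code. Only the enrollment thresholds come from the text; the district records below are invented, and this is a sketch of the approach, not the actual selection procedure.

```python
# Sketch of the site-selection approach: filter to districts with a disparity,
# sort into six size-by-trend categories, then randomize within each category.
# District records are hypothetical, not CRDC data.
import random

districts = [
    {"name": "District A", "enrollment": 40_000, "has_disparity": True,  "rate_change": 1.2},
    {"name": "District B", "enrollment": 12_000, "has_disparity": True,  "rate_change": -0.8},
    {"name": "District C", "enrollment": 3_000,  "has_disparity": False, "rate_change": -0.3},
    {"name": "District D", "enrollment": 4_500,  "has_disparity": True,  "rate_change": 0.5},
]

def size_group(enrollment):
    """Apply the three size groupings described in the text."""
    if enrollment >= 25_000:
        return "large"
    if enrollment >= 5_000:
        return "medium"
    return "small"

# Step 1: exclude districts without a disparity for the groups of interest.
eligible = [d for d in districts if d["has_disparity"]]

# Step 2: sort into categories by size and direction of suspension-rate change.
categories = {}
for d in eligible:
    key = (size_group(d["enrollment"]),
           "increased" if d["rate_change"] > 0 else "decreased")
    categories.setdefault(key, []).append(d)

# Step 3: randomize the order within each category, then work down each list,
# applying post-checks and outreach to the district at the top.
rng = random.Random(0)  # seeded for reproducibility
for members in categories.values():
    rng.shuffle(members)

print(sorted(categories))  # the category keys present in this toy data
```

A seeded shuffle is used here so the randomized ordering is reproducible; the report does not specify how the randomization was implemented.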
This resulted in the selection of five school districts, one each in California, Georgia, Massachusetts, North Dakota, and Texas (see table 11). We visited each district and interviewed district-level officials involved in school discipline and school climate initiatives. These officials included superintendents, assistant superintendents, program managers, and directors of applicable district departments (e.g., student support services and special education). We also reviewed district-level discipline data, school district discipline policies, and relevant state laws related to school discipline to better understand the local context in each selected district. In the five districts we visited, we also interviewed officials at a total of 19 schools. At each school, we typically met with principals and/or assistant principals, and in some instances, spoke with other personnel at the school, such as counselors, attendance coordinators, school resource officers (i.e., law enforcement officers), and teachers. In each district, we selected a variety of schools to visit based on grade level, school type, and disciplinary data. For each selected district, we also interviewed officials from the state educational agency that oversees that district to better understand the statewide context around discipline, such as state laws that may affect district disciplinary policies, statewide initiatives related to discipline, and state-level monitoring of district-level disciplinary actions. In California, we also met with the county office of education that oversees the district we selected because, in that state, counties have a primary role in the local school district accountability structure. Because we selected these school districts judgmentally, we cannot generalize the findings about these districts’ approaches to discipline, and the challenges they face, to all school districts and schools nationwide.
Review of Federal Actions

To determine the extent to which, and in what ways, Education and Justice are identifying and addressing discipline disparities and discrimination, we interviewed agency officials at headquarters and regional offices, reviewed agency documentation and administrative data, reviewed federal laws and regulations, and reviewed a non-generalizable sample of seven recently resolved school discipline investigations undertaken by Education and Justice (which we refer to as cases). With both agencies, we interviewed officials about each agency’s responsibilities with respect to federal civil rights laws and regulations, as well as the actions the agencies took to enforce them. We also discussed each agency’s guidance, support to school districts on these issues (e.g., grants and technical assistance), and data collection activities. In addition, we collected and reviewed relevant agency procedures and guidance documents. We also requested and reviewed Education’s data on the number of civil rights complaints received and cases related to school discipline investigated from 2011 to August 2017 to better understand the scope of the agency’s efforts. Education provided these data from its internal database, where investigators categorized cases as being related to school discipline. We assessed the reliability of this source through discussions with knowledgeable officials and reviews of key documents and determined the data to be reliable for our purposes. To select resolved school discipline cases to review, we searched Education’s and Justice’s respective online repositories of resolved investigations and compliance reviews, as well as Education’s annual reports, to create a list of resolved cases related to school discipline.
We then narrowed the list to cases resolved in approximately the past 3 years (from 2014 to May 2017) and excluded long-standing cases that were opened several decades ago to help ensure the information in the cases reflected recent policies and practices in each agency. We also excluded cases regarding institutions of higher education because they were outside the scope of this review. This resulted in a list of 12 relevant resolved cases—9 for Education and 3 for Justice. From this list, we selected 7 cases to review in depth to better understand Education’s and Justice’s investigatory processes and resolutions with regard to school discipline cases in pre-K through 12th grade, and to provide illustrative examples in our report. We selected 4 cases from Education that provided a mix of the type of alleged discrimination (e.g., race or disability) and type of discipline (e.g., suspension, expulsion, arrest, etc.). We selected all 3 relevant cases from Justice. For each case, we reviewed the type of investigation (complaint investigation or compliance review); the reason for the investigation; any applicable findings or recommendations; and the ultimate resolution of the investigation, such as a voluntary agreement with the school district or remedies to address findings. In all instances, we are presenting Education’s and Justice’s findings and do not reach any independent conclusions regarding the cases. We conducted this performance audit from November 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Maps of Disciplinary Actions by School District

This appendix contains maps showing rates of disciplinary actions by school district for each of the six disciplinary actions captured in the Department of Education’s Civil Rights Data Collection for school year 2013-14.

Appendix III: Key Federal Resources Related to Student Behavior and School Discipline

Technical Assistance Centers

Funded by Department of Education (Education):

National Center on Safe Supportive Learning Environments: offers information and technical assistance focused on improving student supports and academic enrichment. This includes resources on using positive approaches to discipline, as well as promoting mental health for students and ensuring the safety and effectiveness of physical learning environments. https://safesupportivelearning.ed.gov/.

National Student Attendance, Engagement, and Success Center: a center that disseminates evidence-based practices and facilitates communities of practice to help students attend school every day, be engaged in school, and succeed academically, so that they graduate high school prepared for college, career, and civic life. It offers webinars on identifying the root causes of chronic absence, linking school climate and exclusionary discipline to absenteeism, and improving attendance for vulnerable students. http://new.every1graduates.org/nsaesc/

National Technical Assistance Center for the Education of Neglected or Delinquent Children and Youth: provides technical assistance to state agencies with Title I, Part D programs and works to improve education services for children and youth who are neglected, delinquent, or at risk. This includes running the Supportive School Discipline Communities of Practice, which brings together education and justice leaders for knowledge-sharing events and offers webinars on discipline initiatives such as restorative practices.
https://www.neglected-delinquent.org/

Positive Behavioral Interventions and Supports Technical Assistance Center: funded by Education’s Office of Special Education Programs, this center supports implementation of a multi-tiered approach to social, emotional and behavior support. In addition, it offers resources on cultural responsiveness, addressing discipline disproportionality, and interconnecting mental health with behavior support systems, among other issues. https://www.pbis.org/.

Funded by Department of Health and Human Services (HHS):

Center of Excellence for Infant and Early Childhood Mental Health Consultation: supports states, tribes, and communities in promoting mental health and school readiness. It provides training to leaders in early childhood education around mental health and school readiness issues. https://www.samhsa.gov/iecmhc

Center for School Mental Health: works to strengthen policies and programs in school mental health to improve learning and promote success for youth. This center is supported in full by HHS’s Maternal and Child Health Bureau, Division of Child, Adolescent and Family Health Adolescent Health Branch in the Health Resources and Service Administration. http://csmh.umaryland.edu/

National Center for Trauma-Informed Care and Alternatives to Seclusion and Restraint: works to develop approaches to eliminate the use of seclusion, restraints, and other coercive practices and to further advance the knowledge base related to implementation of trauma-informed approaches. https://www.samhsa.gov/nctic

National Child Traumatic Stress Network: works to improve access to care, treatment, and services for children and adolescents exposed to traumatic events. The group provides a comprehensive focus on childhood trauma by collaborating with the health, mental health, education, law enforcement, child welfare, juvenile justice, and military family service systems.
http://nctsn.org/

National Resource Center for Mental Health Promotion and Youth Violence Prevention: offers resources and technical assistance to states, tribes, territories, and local communities to promote overall child wellness and prevent youth violence. http://www.healthysafechildren.org/

Now Is the Time Technical Assistance Center: provides national training and technical assistance to recipients of the Healthy Transitions (youth access to mental health) and Project Advancing Wellness and Resilience Education (AWARE) grants. https://www.samhsa.gov/nitt-ta/about-us

Funded by Department of Justice (Justice):

School-Justice Partnership National Resource Center: provides trainings and webinars, and partners with stakeholders in the law enforcement, juvenile justice, mental health, and public education arenas. The National Council of Juvenile and Family Court Judges operates this center. https://schooljusticepartnership.org/

Office of Juvenile Justice and Delinquency Prevention (OJJDP)

Key Federal Guidance

Other Related Resources

Appendix IV: Additional Discipline and Discipline-Related Data Tables

This appendix contains several tables that show the underlying data used throughout this report, as well as additional analyses we conducted using the Department of Education’s Civil Rights Data Collection (CRDC) and Common Core of Data (CCD) for school year 2013-14. Our analyses of Education’s data, as reflected in these tables, taken alone, do not establish whether unlawful discrimination has occurred. The following tables and information are included in this appendix:

Table 12: students who received disciplinary actions captured in the CRDC, disaggregated by student sex, race or ethnicity, and English Language Learner status.

Table 13: students with or without disabilities who received disciplinary actions captured in the CRDC, disaggregated by student sex and race or ethnicity.
Table 14: students who received disciplinary actions captured in the CRDC, disaggregated by the poverty level of the school and other student characteristics.

Table 15: students who received disciplinary actions captured in the CRDC, disaggregated by the type of public school and other student characteristics.

Table 16: students who received disciplinary actions captured in the CRDC, disaggregated by the grades offered in the school and other student characteristics.

Table 17: pre-school students who were suspended from school, disaggregated by student sex and race or ethnicity, as well as the poverty level of school and the type of public school.

Table 18: students who were restrained—mechanically or physically—or secluded, disaggregated by student sex, race or ethnicity, and disability status as well as the poverty level of school and the type of public school.

Table 19: students who were chronically absent, disaggregated by student sex, race or ethnicity, and disability status, as well as the poverty level of school and the type of public school.

Table 20: schools that reported having access to a school counselor or sworn law enforcement officer, disaggregated by the poverty level of school and the type of public school.

Table 21: students disciplined for harassment or bullying, disaggregated by student sex, race or ethnicity, and disability status.

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Sherri Doughty (Assistant Director), Amy Moran Lowe (Analyst-in-Charge), James Bennett, Holly Dye, Aaron Karty, Jean McSween, John Mingus, James Rebbe, Sonya Vartivarian, and David Watsula made key contributions to this report. Also contributing were Johana Ayers, Deborah Bland, Irina Carnevale, Caitlin Croake, Vijay D’Souza, Gretta Goodwin, Gloria Hernandez-Saunders, Reginald Jones, DuEwa Kamara, John Karikari, Ted Leslie, Sheila R. McCoy, Brittni Milam, Cady Panetta, Moon Parks, Caroline Prado, Steven Putansu, Maria Santos, Margie K. Shields, Ruth Solomon, Alexandra Squitieri, and Barbara Steel-Lowney.
Why GAO Did This Study

Research has shown that students who experience discipline that removes them from the classroom are more likely to repeat a grade, drop out of school, and become involved in the juvenile justice system. Studies have shown this can result in decreased earning potential and added costs to society, such as incarceration and lost tax revenue. Education and Justice are responsible for enforcing federal civil rights laws that prohibit discrimination in the administration of discipline in public schools. GAO was asked to review the use of discipline in schools. To provide insight into these issues, this report examines (1) patterns in disciplinary actions among public schools, (2) challenges selected school districts reported with student behavior and how they are approaching school discipline, and (3) actions Education and Justice have taken to identify and address disparities or discrimination in school discipline. GAO analyzed discipline data from nearly all public schools for school year 2013-14 from Education's Civil Rights Data Collection; interviewed federal and state officials, as well as officials from a total of 5 districts and 19 schools in California, Georgia, Massachusetts, North Dakota, and Texas. We selected these districts based on disparities in suspensions for Black students, boys, or students with disabilities, and diversity in size and location. We also reviewed federal laws and a non-generalizable sample of seven recently resolved federal school discipline investigations (selected in part based on the type of alleged discrimination). We incorporated technical comments from the agencies as appropriate.

What GAO Found

Black students, boys, and students with disabilities were disproportionately disciplined (e.g., suspensions and expulsions) in K-12 public schools, according to GAO's analysis of Department of Education (Education) national civil rights data for school year 2013-14, the most recent available.
These disparities were widespread and persisted regardless of the type of disciplinary action, level of school poverty, or type of public school attended. For example, Black students accounted for 15.5 percent of all public school students, but represented about 39 percent of students suspended from school—an overrepresentation of about 23 percentage points (see figure). Officials GAO interviewed in all five school districts GAO visited, located in five states, reported various challenges with addressing student behavior, and said they were considering new approaches to school discipline. They described a range of issues, some complex—such as the effects of poverty and mental health issues. For example, officials in four school districts described a growing trend of behavioral challenges related to mental health and trauma. While there is no one-size-fits-all solution for the issues that influence student behavior, officials from all five school districts GAO visited were implementing alternatives to disciplinary actions that remove children from the classroom, such as initiatives that promote positive behavioral expectations for students. Education and the Department of Justice (Justice) documented several actions taken to identify and address school discipline issues. For example, both agencies investigated cases alleging discrimination. Further, to help identify persistent disparities among the nation's schools, Education collects comprehensive data on school discipline every other year through its Civil Rights Data Collection effort.
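The overrepresentation figure cited above is a simple difference in shares, which can be checked directly:

```python
# Overrepresentation = a group's share of suspended students minus its
# share of all students, in percentage points (figures from the text).
share_of_students = 15.5   # percent of all public school students
share_of_suspended = 39.0  # approximate percent of students suspended

overrepresentation = share_of_suspended - share_of_students
print(f"{overrepresentation:.1f} percentage points")  # → 23.5 percentage points
```

The difference of 23.5 points is consistent with the "about 23 percentage points" stated above, given that the 39 percent figure is approximate.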
Background

National Guard Counterdrug Program

The National Guard counterdrug program is part of DOD’s broader counterdrug mission, which focuses on supporting local, state, federal, and foreign government agencies in addressing the illegal drug trade and narcotics-related terrorism. The program was originally conceived as a reconnaissance support mission largely focused on marijuana eradication efforts. In 1977, the Hawaii National Guard became the first state National Guard to assist law enforcement agencies in counterdrug missions. Hawaii law enforcement officials sought Hawaii National Guard helicopter transport to support Operation Green Harvest, a marijuana eradication mission. By 1984, four additional states’ National Guards were supporting state law enforcement agencies with counterdrug efforts. That number grew to 32 states in 1988. However, this assistance was limited in scope and generally conducted as Guard units performed normal training activities, and costs associated with this assistance were paid for by the states. The National Defense Authorization Act, Fiscal Year 1989 tasked DOD with the mission to ensure the availability of military support to law enforcement agencies nationwide. This law established DOD as the single lead agency of the federal government for the detection and monitoring of aerial and maritime transit of illegal drugs into the United States, and it amplified the National Guard’s role as a support agency for state law enforcement in counterdrug support missions under the Governor of each state, territory, and the District of Columbia. By 1994, the program was in operation in 54 states and territories across the United States.
As of fiscal year 2018, National Guard Bureau policy allows state counterdrug programs to perform 15 support activities grouped into five broad mission categories—(1) technical support (including linguist and translator, operational and investigative case and criminal analyst, and counterthreat finance support), (2) general support (including domestic cannabis suppression and eradication operations and transportation support), (3) reconnaissance and observation (including ground and aerial reconnaissance), (4) civil operations and coalition development, and (5) counterdrug training.

Legal Authorities of the National Guard Counterdrug Program

The National Guard counterdrug program conducts activities under the authority of two titles in the United States Code—Title 32 and Title 10. Section 502 of title 32 allows a member of the National Guard to be ordered to full-time National Guard duty status under regulations prescribed by the Secretary of the Army or Secretary of the Air Force. In addition, Section 112 of title 32 authorizes personnel of the National Guard of a State, under regulations prescribed by the Secretary of Defense, to be ordered to perform full-time National Guard duty under section 502 for the purposes of carrying out drug interdiction and counterdrug activities in accordance with state plans. Section 112 also authorizes the Secretary of Defense to provide funds to support the approved drug interdiction and counter-drug activities plan of state governors. In addition, Title 10 allows the Secretary of the Army or Air Force to order a member of the National Guard, under the Secretary’s jurisdiction, to active duty with the consent of the member and the governor of that state. Under Section 284 of title 10, DOD provides support to a number of partners, such as federal agencies, in their counterdrug activities, at times using National Guard personnel on active duty. Table 1 provides a summary of the Title 32 and Title 10 authorities.
Funding for the National Guard Counterdrug Program

To fund DOD’s counterdrug mission, Congress appropriates amounts to DOD’s Drug Interdiction and Counterdrug Activities, Defense account. The categories of activities funded by the account include: detection and monitoring; international support; intelligence, technology, and other; domestic support, which includes the National Guard counterdrug program; and drug demand reduction. Of all the activities, the domestic support activity, which includes the National Guard counterdrug program, receives the largest amount of funding from DOD’s Drug Interdiction and Counterdrug Activities account. In fiscal year 2018, Congress appropriated about $934.8 million to the Drug Interdiction and Counterdrug Activities, Defense account, of which about $261.4 million, or 28 percent, was appropriated for the National Guard counterdrug program. Figure 1 shows the program funding in DOD’s Drug Interdiction and Counterdrug Activities Account for fiscal year 2018. DOD’s budget request to the President for the National Guard counterdrug program was generally steady from fiscal year 2004 through fiscal year 2012, but was reduced significantly in fiscal year 2013. Since then, congressionally-directed increases have generally accounted for 50 percent or more of the program’s total funding, as shown in figure 2 below. In fiscal year 2018, the Senate Committee on Appropriations expressed concerns that DOD reduced overall funding for the National Guard counterdrug program from the fiscal year 2017 enacted levels and failed to include an individual budget line in its budget request for the National Guard counterdrug schools program. DOD’s budget request for fiscal year 2018 was about $116.4 million, while the final appropriation designated $261.4 million for the program—approximately a 125 percent increase.
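The two funding figures cited above can be verified with quick arithmetic (dollar amounts in millions, taken from the text):

```python
# Checking the fiscal year 2018 funding figures cited above.
total_account = 934.8          # Drug Interdiction and Counterdrug Activities, Defense
program_appropriation = 261.4  # National Guard counterdrug program
budget_request = 116.4         # DOD's fiscal year 2018 budget request

share = program_appropriation / total_account * 100
increase = (program_appropriation - budget_request) / budget_request * 100
print(f"{share:.0f}% of account; {increase:.0f}% above request")
# → 28% of account; 125% above request
```

Both results match the figures in the text: the program received about 28 percent of the account, and the final appropriation was roughly 125 percent above the request.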
Roles and Responsibilities

On July 31, 2002, the Deputy Secretary of Defense issued a memorandum that, among other things, assigned responsibility for DOD’s counternarcotics program to the Deputy Assistant Secretary of Defense for Counternarcotics. The responsibilities include developing and implementing DOD’s counternarcotics policy, conducting analyses, making recommendations, and issuing guidance regarding DOD’s counternarcotics plans and programs. In addition, the office is responsible for coordinating and monitoring DOD’s counternarcotics plans and programs to ensure adherence to this policy. Chief National Guard Bureau Instruction 3100.01A, National Guard Counterdrug Support, establishes policy and assigns responsibilities for the National Guard counterdrug program. The instruction assigns the Director of the National Guard Domestic Operations and Force Development as the proponent for the program. The Director’s responsibilities include publishing supporting documents for the instruction, verifying that the plans outlining each state’s proposed activities are consistent with annual instructions published by the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats and are processed efficiently and on-time, and conducting periodic evaluations of program operations at the state level.

DOD Lacks a Current Strategy and Guidance for the National Guard Counterdrug Program

DOD Counternarcotics and Global Threats Strategy Is Outdated

DOD’s 2011 Counternarcotics and Global Threats Strategy, the governing strategy for the National Guard counterdrug program, is outdated and does not reflect current drug threats outlined in more recent executive branch strategies.
While the 2011 Counternarcotics and Global Threats Strategy shares common themes with the updated executive branch strategies, such as the importance of combatting transnational criminal organizations involved in drug trafficking, it has not been updated to reflect changes in the drug threats faced by the United States that are outlined by the more recent executive branch strategies. Table 2 provides details on national-level strategies that have been released since 2011. The Office of National Drug Control Policy released a new National Drug Control Strategy each year between 2011 and 2016. Each update discussed the threat posed by opioids, which the 2016 update labeled as the greatest drug threat facing the nation. The 2017 National Security Strategy also addressed opioids by emphasizing the need to dismantle transnational criminal organizations that feed the illicit opioid epidemic. However, DOD’s 2011 Counternarcotics and Global Threats Strategy does not address the domestic opioid epidemic. In addition, the 2016 National Southwest Border Counternarcotics Strategy states that the increased role of Mexican heroin manufacturers and traffickers is altering previously established trafficking patterns. While the 2011 Counternarcotics and Global Threats Strategy considers the illicit trafficking of cocaine from the Southwest border, it does not consider changes in the heroin threat. Further, because DOD’s Counternarcotics and Global Threats Strategy has not been updated, it does not take into consideration other strategies that have since been issued, such as the 2015 Caribbean Border Counternarcotics Strategy. According to officials from the National Guard Bureau, DOD’s 2011 counternarcotics strategy only addresses the National Guard counterdrug program in a limited capacity and therefore they are challenged to provide strategic direction to the state counterdrug programs. 
DOD’s 2011 Counternarcotics and Global Threats Strategy states that officials will ensure that the strategy remains consistent with and integrates key DOD and executive branch strategies, such as the National Drug Control Strategy. It also states that, given the dynamic environment within which the challenges related to the flow and impact of illegal drugs exist, the strategy is meant to be a living document, to be modified regularly. However, officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats acknowledged that they have not regularly modified the strategy and that the security environment has changed. These officials stated that they have been in the process of developing an updated Counternarcotics and Global Threats Strategy with revised strategic goals and objectives since 2013, but the document has not been signed and released by the Secretary of Defense. DOD officials stated that after the 2018 National Defense Strategy was issued, they delayed the release of an updated Counternarcotics and Global Threats Strategy in order to ensure alignment between the two documents. However, according to DOD officials, the 2018 National Defense Strategy, which was issued in January 2018, did not address DOD counternarcotics efforts as they had anticipated, requiring them to reconsider their approach. Officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats stated that they now plan to issue a strategic framework, which would allow them to respond to changes in the security environment more quickly because updates to the framework would not require Secretary of Defense approval, as is the case with a DOD strategy. However, they stated that they are now waiting for the release of a new National Drug Control Strategy before issuing the framework.
Officials with the Office of National Drug Control Policy stated that, while they have drafted a new National Drug Control Strategy, they have not committed to an issuance date and are waiting for their new director to be confirmed by the Senate before proceeding with reviewing and issuing the draft. However, a substantial amount of time has elapsed since DOD’s counternarcotics strategy was last issued—over 7 years—and there have been significant developments during that time in the nature of the drug threats facing the United States. DOD officials acknowledged that because the process to update its strategic framework requires less review than a full strategy, DOD could quickly update it, if necessary, to ensure that it aligns with a new National Drug Control Strategy once one is released. Without a DOD counternarcotics and global threats strategic framework that reflects DOD’s current strategic priorities and drug threats, the National Guard counterdrug program risks focusing activities and resources in areas that are less imperative to address than others and that do not counter current drug threats.

The National Guard Bureau Does Not Have Guidance for Operating and Administering the Counterdrug Program

The National Guard Bureau had guidance—National Guard Regulation 500-2—that prescribed policies, procedures, and responsibilities for the National Guard counterdrug program, but it was rescinded in September 2014 by Chief National Guard Bureau Instruction 3100.01 to conform with new National Guard publications guidance, according to National Guard Bureau officials. Chief National Guard Bureau Instruction 3100.01A, which replaced Chief National Guard Bureau Instruction 3100.01 in June 2015, establishes policies and assigns responsibilities for the National Guard counterdrug program, but it does not provide detailed procedures and processes that states can use to implement these policies.
For example, National Guard Regulation 500-2 provided information on how states should operate and administer the National Guard counterdrug program, including how to perform counterdrug financial management, acquisition and logistics management, personnel and administration, records and reports, and operate the counterdrug schools. Chief National Guard Bureau Instruction 3100.01A does not provide these types of instructions. State counterdrug program officials we interviewed stated that without the detailed procedures and processes included in National Guard Regulation 500-2, they have no administrative guidance regarding hiring, retirement, budgeting, and planning for their counterdrug programs. Additionally, National Guard Bureau officials stated that they do not have procedures and processes instructing states on how to provide cross-state support. For example, there are currently no guidelines on how a state that can perform aerial reconnaissance activities could provide these resources to another state upon request. National Guard Bureau officials told us they should have guidelines to facilitate cross-state support. Table 3 provides an overview of National Guard Bureau publications. To help implement policy established by Chief National Guard Bureau instructions, the National Guard Bureau can issue more detailed guidance on the corresponding procedures and processes in the form of a Chief National Guard Bureau Manual. Additionally, Chief National Guard Bureau Instruction 3100.01A, National Guard Counterdrug Support, assigns the Director of National Guard Domestic Operations and Force Development the responsibility to publish supporting documents to implement the instruction and counterdrug program when required. 
However, National Guard Bureau officials acknowledged that they have not issued a manual that provides detailed procedures and processes to implement National Guard counterdrug program policies since the prior operating guidance in the National Guard regulation was rescinded in September 2014. National Guard Bureau officials stated that they intended to publish a Chief National Guard Bureau Manual in September 2014, concurrent with Chief National Guard Bureau Instruction 3100.01, which would have provided additional operating guidance for administering and operating the counterdrug program. However, according to National Guard officials, issuance of the manual was delayed because of disagreements among National Guard Bureau officials about its content. Specifically, some National Guard Bureau officials stated that the draft manual was too focused on support for Title 10 activities and did not adequately address Title 32 support, which reflects the bulk of the activities conducted by the program. National Guard Bureau officials stated that they intended to reissue National Guard Regulation 500-2 as interim guidance until they completed the Chief National Guard Bureau Manual; however, they have yet to do so because they have been focused on other efforts. National Guard Bureau officials stated that they have now worked with state counterdrug program officials to more adequately address Title 32 support activities and intend to publish a Chief National Guard Bureau Manual in June 2019. The draft manual is at the beginning of the review process. However, the National Guard Bureau will not have guidance to operate the counterdrug program until at least June 2019.
Without interim guidance that provides detailed procedures and processes for the National Guard counterdrug program, such as reissuing National Guard Regulation 500-2, states will continue to be left without clear instructions on how to operate and administer the program, such as how and when to provide support across state lines and to interagency partners. The National Guard Bureau Has Taken Steps to Improve the Availability of Funds When Operating under Continuing Resolutions The federal government has operated under a continuing resolution for 36 of the last 40 years. National Guard counterdrug program officials stated that they have experienced program disruptions during these periods. The disruptions described by the officials are similar to the problems that other programs experience during continuing resolutions. For example, National Guard Bureau officials stated that continuing resolutions have created challenges for the National Guard counterdrug program in fully obligating its funds. DOD data show that the program obligated 84 and 82 percent of total budget authority amounts in fiscal years 2011 and 2013, respectively, although the gap between total budget authority amounts and obligations has decreased since then. According to National Guard officials, the differences over the years between the amounts obligated and total budget authority amounts were partly due to the timing and amount of funding received by the program. Specifically, they stated that it is difficult to fully obligate funds when DOD provides them with a significant portion of their funding close to the end of the fiscal year. Remaining unobligated amounts are transferred back to DOD's Drug Interdiction and Counterdrug Activities, Defense account. Figure 3 details the counterdrug program's obligations from fiscal years 2010 through 2017. State counterdrug program officials stated that the timing of DOD's distribution of funds also creates program execution challenges.
For example, state counterdrug program officials stated that prior to fiscal year 2017, they began each year with a minimal number of personnel performing state drug interdiction and counterdrug activities until DOD provided more funding to the program after the enactment of the appropriation for the remainder of the fiscal year. Thereafter, state program officials stated that they increased the number of National Guard personnel supporting National Guard counterdrug program activities. However, state program officials said that after the appropriation expired at the end of each fiscal year, they were once again forced to reduce the number of personnel performing state drug interdiction and counterdrug activities until another final appropriation was enacted. Figure 4 provides a summary of the number of National Guard personnel performing state drug interdiction and counterdrug activities by month during fiscal years 2012 through 2017. According to state counterdrug program officials, the majority of funds provided after a final appropriation is enacted support temporary personnel and seasonal work, rather than analysis support activities deemed a priority for the National Guard counterdrug program by the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats. State counterdrug program officials stated that this is because they cannot hire, train, and integrate personnel on a full-time basis and that law enforcement agencies are looking for long-term rather than temporary support. State counterdrug officials told us that as a result of the funding uncertainty, they experience significant fluctuations in the number of personnel performing state drug interdiction and counterdrug activities and that they are challenged in obtaining and retaining highly qualified National Guard personnel.
Additionally, state counterdrug program officials stated that withdrawing National Guard personnel from partner organizations after appropriations expire can severely affect their operations and diminish trust between counterdrug programs and law enforcement partners. According to National Guard Bureau officials, the National Guard Bureau revised its process for funding the National Guard counterdrug program in fiscal year 2017 to try to mitigate the effects of DOD’s process for providing funds under continuing resolutions on the program. Specifically, the National Guard Bureau worked with the Army and Air National Guard budget execution offices to establish a process to expedite funding made available to the state-level counterdrug programs. Under the revised process, the Army and Air National Guard budget execution offices reprogram available amounts from other programmatic activities, such as funds for annual training, to the counterdrug program earlier in the fiscal year. According to Army and Air National Guard budget execution officials, amounts provided through reprogramming are based on a number of factors, including prior years’ appropriations for the program, execution levels, current-year appropriations and congressional directions, and an assessment of risk to the other activities. The National Guard Bureau and state counterdrug program officials stated that this revised funding process has helped mitigate challenges arising from uncertainty of when and how much funding would be provided to the states. For example, state counterdrug program officials said that in fiscal year 2017, the funding process enabled them to retain more personnel on orders and decrease the amount of funds that went unspent. The total number of personnel assigned to the National Guard counterdrug program at the beginning of fiscal year 2018 was approximately 2,250. Conversely, the program began fiscal year 2016 with approximately 1,350 personnel on orders. 
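The obligation gaps cited above (84 and 82 percent of total budget authority in fiscal years 2011 and 2013) are simple ratios of obligations to total budget authority. The sketch below illustrates that arithmetic; the percentages come from this report, but the dollar amounts are hypothetical, chosen only to reproduce those rates.

```python
# Illustrative sketch only: the 84 and 82 percent obligation rates are from
# the report; the dollar amounts below are hypothetical. The unobligated
# remainder is what would transfer back to the Drug Interdiction and
# Counterdrug Activities, Defense account.

def obligation_summary(total_budget_authority, obligated):
    """Return the obligation rate (percent) and the unobligated remainder."""
    rate = 100 * obligated / total_budget_authority
    unobligated = total_budget_authority - obligated
    return round(rate, 1), round(unobligated, 1)

# Hypothetical totals in millions of dollars.
fy2011 = obligation_summary(total_budget_authority=250.0, obligated=210.0)
fy2013 = obligation_summary(total_budget_authority=200.0, obligated=164.0)
print(fy2011)  # → (84.0, 40.0)
print(fy2013)  # → (82.0, 36.0)
```

At these hypothetical levels, tens of millions of dollars would go unobligated in a year, which is consistent with the report's observation that late-arriving funds are difficult to fully obligate.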
In addition, program officials stated that the process to provide funding earlier in the fiscal year helped them to obligate almost 97 percent of the total budget authority in fiscal year 2017, a higher percentage than in many previous fiscal years. National Guard officials stated that while reprogramming amounts from other programmatic activities has helped to address the fiscal challenges of the National Guard counterdrug program, they cannot provide assurance that this funding process will continue from year to year. However, National Guard Bureau officials have assessed the risks and believe this is the best solution available for funding the program during a continuing resolution until the enactment of the final appropriation. DOD Could Improve Its Processes for Approving and Distributing Funds to State Counterdrug Programs DOD Has Provided Funding to State Counterdrug Programs without Approved Plans DOD has established a process for development and review of the state plans—an annual plan of each state's counterdrug activities—to ensure that state counterdrug program activities reflect DOD's counternarcotics strategic priorities. However, since at least 2009, DOD has not met the statutory requirement to examine the adequacy of state plans prior to distributing funding to state counterdrug programs. To develop the state plans, counterdrug coordinators in each state counterdrug program use guidance in annual memorandums issued by DOD. According to the guidance, the plans should identify the state's counterdrug priorities and how each state counterdrug program intends to obligate its available funds. Counterdrug coordinators then work with their state's Adjutant General, Attorney General, and Governor, who each review and sign the plans before they are sent to the National Guard Bureau for further review.
Once the National Guard Bureau reviews the plans, they are forwarded to the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats. Officials from that office review the plans and make recommendations to the Secretary of Defense to approve or disapprove the plans. Based on these recommendations, the Secretary of Defense reviews the plans for adequacy and, when satisfied, signs a memorandum of agreement approving the plans. Figure 5 provides an outline of the process to approve state plans for their counterdrug activities. However, since at least 2009, DOD has provided funding to the state counterdrug programs prior to the Secretary of Defense approving states' plans for their counterdrug activities, according to National Guard Bureau officials. This is inconsistent with section 112 of title 32 of the United States Code, which requires that before funds are provided to the Governor of a state for counterdrug activities and before members of the National Guard of that State are ordered to full-time National Guard duty, the Secretary of Defense must examine the adequacy of the plan submitted by the Governor. We found that the delay in approval of states' plans for their counterdrug activities has worsened since 2009, and in fiscal year 2018, approval took over 9 months (283 days) after funding was provided at the beginning of the fiscal year. Figure 6 provides information on the number of days between the beginning of the fiscal year, when states received funding, and when all plans were approved in fiscal years 2009 through 2018. Officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats and the National Guard Bureau stated that several factors have contributed to delays in the state plan approval process.
First, officials stated that, prior to fiscal year 2016, the National Guard Bureau submitted state plans to the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats signed by the Division Chief of the National Guard counterdrug program, a colonel in the Army or the Air Force. However, in fiscal year 2016, officials from the Office of the Secretary of Defense found the Counterdrug Program Division Chief's review and approval of the state plans to be insufficient because the approving official did not have the appropriate rank to approve state plans on behalf of the National Guard Bureau. As a result, officials from the National Guard Bureau elevated the level of approval within the National Guard Bureau to the National Guard Bureau Joint Staff Director of Domestic Operations and Force Development, a Major General in the Army National Guard or Air National Guard. Officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats stated that this resulted in an increase in the number of days that it took the National Guard Bureau to provide reviewed state plans. Officials stated that they are working to develop an updated timeline to address delays created by the approval process. Specifically, officials stated that they are working to submit the plans for review earlier in order to allow enough time to ensure that state plans are approved before funds are provided to state counterdrug programs. Second, officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats stated that their office required state plans to include information, such as narratives detailing states' planned activities, that was not critical to determining the plans' alignment with DOD priorities. In addition, officials stated that, over time, states had expanded the narratives in their plans, which increased the length of each submission.
As a result of this required information, officials stated that the department's review of state plans took longer than it would have had the extra information not been included. Officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats stated they have reviewed the statutory requirements for the plans to identify which components are necessary and have streamlined the format of the plans for use in fiscal year 2019. Third, officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats stated that in the past the Office of the Secretary of Defense would not accept state plans from the National Guard Bureau in batches, but instead insisted on receiving and reviewing them all at once, delaying the review process. These officials noted that they have since begun accepting state plans from the National Guard Bureau in batches in order to speed up the approval process. On June 7, 2018, the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats issued a memorandum to the Chief of the National Guard Bureau that required all states and territories to submit their plans, through the National Guard Bureau and the Joint Staff, to his office no later than August 31, 2018. According to officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats, the state plans were to detail fiscal year 2019 National Guard counterdrug program activities and provide the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats additional time to review state plans prior to the beginning of the fiscal year.
However, in October 2018, officials from the National Guard Bureau and the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats told us that none of the fiscal year 2019 plans had been approved prior to the beginning of the fiscal year, and that DOD had provided state counterdrug programs with funding for fiscal year 2019. As of mid-November, officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats told us that 39 of the 54 state plans had been approved. DOD has not assessed why the steps it took to improve the state plan review process did not result in timely approval of the state plans. GAO's Standards for Internal Control in the Federal Government note that management should monitor activities and evaluate the results of programmatic changes. Assessing the revised process for reviewing states' plans would enable DOD to determine what additional actions are needed to ensure the plans are approved by the Secretary of Defense before funding is provided to state counterdrug programs, as statutorily required by section 112 of title 32. National Guard Bureau's Funding Distribution Process Does Not Incorporate DOD Strategic Counternarcotics Priorities We found that the National Guard Bureau's funding distribution process does not consider DOD's strategic counternarcotics priorities. For example, while DOD's 2011 Counternarcotics and Global Threats Strategy prioritizes efforts on the southwest and northern borders, the National Guard Bureau's funding distribution process does not specifically account for this. Rather than taking into account established DOD counternarcotics priorities to inform funding distribution, the National Guard Bureau uses survey results and statistics on drugs from a number of national-level databases to develop a distribution percentage for each state within its threat-based resource model that reflects the state's relative drug threat.
Each state's threat-based resource model percentage is then applied to the funding transferred to the National Guard Bureau from the Drug Interdiction and Counterdrug Activities, Defense account and disbursed to the 54 state programs. For example, Arizona's threat percentage was determined to be 6.25 percent based on existing drug threats; as a result, Arizona received about $11.8 million in funding for state plans in fiscal year 2018. National Guard Bureau officials stated that while the threat-based resource model's variables and the data that feed the model relate to DOD strategic counternarcotics priorities, they do not adjust the process to reflect these priorities when distributing funding. When we asked why the funding distribution process does not consider DOD's strategic counternarcotics priorities, National Guard Bureau officials stated that they were focused on identifying variables and data sources within the threat-based resource model to reflect relative drug threats and did not consider incorporating DOD's strategic counternarcotics priorities as part of the funding distribution process. Our work on results-oriented management states that strategy should inform program activities and resourcing. In addition, the National Guard Bureau reported that the goal of the threat-based resource model is to prioritize the most pressing threats from a national perspective, informed by current national and DOD counternarcotics strategies. Both the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats and National Guard Bureau officials stated that incorporating DOD's strategic counternarcotics priorities into the National Guard Bureau's funding distribution process would help ensure that DOD priorities are resourced. National Guard Bureau officials stated that they are considering how to align the funding distribution process with DOD's strategic counternarcotics priorities.
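The distribution step described above is a straightforward percentage allocation. In the sketch below, Arizona's 6.25 percent share and its roughly $11.8 million result come from this report; the implied total (about $188.8 million) and the other states' shares are hypothetical assumptions for illustration only.

```python
# Illustrative sketch of the threat-based resource model's distribution step.
# Arizona's 6.25 percent share and its ~$11.8 million allocation are from the
# report; the total and the other entries are hypothetical assumptions.

def allocate_state_plan_funds(total_funds_millions, threat_percentages):
    """Apply each state's threat-based resource model percentage to the
    total funding transferred for state plans."""
    return {state: round(total_funds_millions * pct / 100, 1)
            for state, pct in threat_percentages.items()}

# Total implied by Arizona's figures: 11.8 / 0.0625 = 188.8 (millions).
percentages = {"Arizona": 6.25, "State B": 4.10, "State C": 2.75}
allocations = allocate_state_plan_funds(188.8, percentages)
print(allocations["Arizona"])  # → 11.8 (million dollars)
```

Because every state's allocation is a fixed share of one total, shifting funds toward a DOD strategic priority, such as border states, would require adjusting the percentages themselves, which is the change the report discusses.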
They added that the next time they could make changes to their funding distribution process would be for use in fiscal year 2020. Until the National Guard Bureau incorporates DOD's strategic counternarcotics priorities into the funding distribution process, the National Guard Bureau risks directing funding toward lower priority counterdrug activities at the expense of higher priority activities. Conclusions The National Guard counterdrug program was established nearly 30 years ago to assist the efforts of the Governors of the 50 states, District of Columbia, and three U.S. territories in addressing illicit drug production, trade, and consumption. The drug threats facing the nation are complex and continue to evolve over time, and efforts to combat those threats will require continued support from DOD, to include the National Guard counterdrug program. DOD lacks current strategy and guidance for the National Guard counterdrug program. Although DOD has a counternarcotics and global threats strategy from 2011, it is outdated and does not reflect current drug threats or changes in national-level strategies, which are critical for informing DOD's strategic counternarcotics priorities. Issuing a strategic framework will ensure that DOD's counterdrug priorities are aligned with the priorities of other agencies involved in counternarcotics efforts, provide direction for DOD's counternarcotics activities, and ensure that the National Guard counterdrug program addresses current drug threats. Further, the National Guard Bureau guidance to operate and administer the program was rescinded and has not been replaced, leaving state counterdrug program officials without clear instructions on how to operate and administer program activities. Issuing interim guidance would provide detailed processes and procedures that states could use to operate their counterdrug programs.
Without current strategy or guidance for the National Guard counterdrug program, it will be difficult for the program to operate effectively. In addition, it is important to ensure that funding is distributed to the state-level programs in support of DOD's strategic counternarcotics priorities. Although the Secretary of Defense is statutorily responsible for reviewing the adequacy of states' plans prior to providing funds to the states, these reviews have not occurred before state counterdrug programs received funding. Also, the National Guard Bureau has not incorporated DOD's strategic counternarcotics priorities into its funding distribution process, which is instead wholly reliant on survey responses and drug data. While these are important factors to consider when distributing funding, incorporating DOD strategic counternarcotics priorities into the National Guard Bureau's funding distribution process would better inform such decisions. Until DOD's process to approve state plans and the National Guard Bureau's process to distribute funding are improved, DOD may not be able to ensure that resources are applied to its strategic counternarcotics priorities. Taken together, these actions should improve the Department's oversight of the National Guard counterdrug program and help ensure that the program uses resources effectively and achieves positive results. Recommendations for Executive Action We are making five recommendations to DOD. The Secretary of Defense should ensure that the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats issues its counternarcotics and global threats strategic framework that incorporates relevant national-level strategies and reflects current drug threats, and update it, as appropriate, upon issuance of the new National Drug Control Strategy.
(Recommendation 1) The Secretary of Defense should ensure that the Chief of the National Guard Bureau issues interim guidance that provides detailed procedures and processes on how to operate and administer the National Guard counterdrug program. (Recommendation 2) The Secretary of Defense should ensure that the Chief of the National Guard Bureau take steps to ensure it issues a manual to accompany Chief National Guard Bureau Instruction 3100.01A, National Guard Counterdrug Support, by June 2019. (Recommendation 3) The Secretary of Defense should ensure that the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats, in coordination with the Chief of the National Guard Bureau, assess the revised process for reviewing states’ plans for their counterdrug activities, and take actions based on the assessment to ensure the plans are approved by the Secretary of Defense before funding is provided to state counterdrug programs, as statutorily required. (Recommendation 4) The Secretary of Defense should ensure that the Chief of the National Guard Bureau incorporate the strategic counternarcotics priorities, to be outlined in DOD’s counternarcotics and global threats strategic framework, into the National Guard Bureau’s funding distribution process. (Recommendation 5) Agency Comments and Our Evaluation In written comments on a draft of this report, DOD concurred with all five of our recommendations and identified actions it plans to take to improve its oversight of the National Guard counterdrug program. DOD’s comments are reprinted in their entirety in appendix VI. DOD also provided technical comments on a draft of this report, which we incorporated as appropriate. 
For example, we adjusted the wording of our fifth recommendation, replacing threat-based resource model with funding distribution process, to reflect the department's technical comment that it is unlikely that the National Guard Bureau would change the threat-based resource model, but rather add strategic priorities to the funding distribution process to meet the intent of our recommendation. We are sending copies of this report to appropriate congressional committees, the Acting Secretary of Defense, the Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict, and the Chief of the National Guard Bureau. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Appendix I: National Guard Counterdrug Program Funding by Project Code Department of Defense (DOD) budgets for National Guard counterdrug program activities using 5 project codes: 7403—State Plans—funds DOD support to U.S. State Governors in accordance with State requests in the form of drug interdiction and counter-drug activities plans submitted in accordance with 32 U.S.C. § 112(c). 7415—Counterdrug Schools—funds five National Guard Counterdrug Schools as authorized by §901 of the Office of National Drug Control Policy Reauthorization Act of 2006, as amended, and as identified in plans submitted by host State Governors to the Secretary of Defense in accordance with 32 U.S.C. § 112(c).
9301—Counterthreat Finance—funded reserve military pay and associated support costs for National Guard personnel in support of State, Federal, and Combatant Command efforts to identify, target, and disrupt illicit financial systems that enable drug trafficking, and when vital to U.S. national security interests—terrorism and transnational organized crime. 1295—Linguist and Data Analysis—funds DOD support for combatant command and interagency law enforcement efforts to detect and disrupt transnational criminal organizations’ operations using linguistic and analytical skills of National Guard personnel. 9498—Linguist Support—funds language transcription, translation, and data analysis support to the U.S. Department of Justice and Drug Enforcement Administration using Utah National Guard personnel. DOD’s budget request for the National Guard counterdrug program increased steadily from fiscal year 2004 through fiscal year 2012, peaking at just more than $205 million. However, in fiscal year 2013 DOD’s budget request for the program decreased substantially and continued to decline through fiscal year 2017. The decrease in requested funding amounts for the program is primarily in the State Plans and Counterdrug Schools project codes. In fiscal year 2018, the budget request for the program increased slightly and included additional funding amounts within the State Plans and Counterdrug Schools project codes. Table 4 provides a summary of DOD’s budget request for the National Guard counterdrug program, by project code, in fiscal years 2004 through 2018. Since at least 2004, Congress has directed increases above DOD’s budget request level for the activities of the National Guard counterdrug program. Congressionally-directed increases have been directed to the State Plans and Counterdrug Schools project codes. 
Beginning in fiscal year 2013, congressionally-directed increases have generally made up half or more of the total funding appropriated to the National Guard counterdrug program. Table 5 provides a summary of congressionally-directed increases for the National Guard counterdrug program, by project code, in fiscal years 2004 through 2018. According to DOD's data, total budget authority for the National Guard counterdrug program varied from fiscal year 2010 through fiscal year 2017. Total budget authority may be above or below congressionally-enacted amounts because DOD can transfer or reprogram amounts into other authorized accounts and activities based on program requirements. Table 6 provides a summary of total budget authority for the National Guard counterdrug program, by project code, in fiscal years 2010 through 2017. According to DOD's data, obligation amounts for the National Guard counterdrug program varied from fiscal year 2010 through fiscal year 2017. According to National Guard officials, variation was partly due to the timing and amount of allocations received by the program. Funds transferred from the Drug Interdiction and Counterdrug Activities, Defense account to various other DOD drug interdiction accounts or programs, including the National Guard program, can be transferred back to the account upon a determination that all or part of the funds are not necessary and remain unobligated. Once funds are returned to the Drug Interdiction and Counterdrug Activities, Defense account, they are available for transfer to other DOD counterdrug programs for obligation. Table 7 details the counterdrug program's obligations from fiscal years 2010 through 2017. Appendix II: Overview of State Counterdrug Program Planned Support Activities, Fiscal Year 2018 As of fiscal year 2018, National Guard Bureau policy allows state counterdrug programs to perform 15 approved support activities grouped into five broad mission categories.
The five mission categories are technical support (including linguist and translator, operational and investigative case and criminal analyst, and counterthreat finance support), general support (including domestic cannabis suppression and eradication operations and transportation support), reconnaissance and observation (including ground and aerial reconnaissance), civil operations and coalition development, and counterdrug training. Of the 15 approved support activities, the investigative case and analyst support activity was the most frequently provided activity; it accounted for 42 percent of all support provided in fiscal years 2011 to 2014. Among all of the supported organizations from fiscal year 2011 to fiscal year 2014, law enforcement agencies received about 38 percent of all support provided by the National Guard counterdrug program. Table 8 lists the fiscal year 2018 approved state plan mission categories and support activities. Appendix III: Process to Fund the National Guard Counterdrug Program After Congress appropriates amounts to the Drug Interdiction and Counterdrug Activities, Defense account, there are multiple steps performed by various organizations before counterdrug funds are provided to each individual state program. To begin the process to distribute funding, the Department of Defense (DOD) Counternarcotics and Global Threats program officials prepare and submit to the Office of the Under Secretary of Defense (Comptroller) a reprogramming action (DD1415-3), which details the allocation of funds by appropriation or budget activity account for each program they manage. DOD Comptroller officials review and approve the DD1415-3 and forward it to the Office of Management and Budget. Once approved by the Office of Management and Budget, the DOD Comptroller issues a funding authorization document to transfer funds to the military services appropriation accounts (such as military personnel or operation and maintenance). 
The military services then transfer funds to appropriation accounts managed by the Army National Guard and Air National Guard, which, in turn, distribute the funds to each state's National Guard participating in the program. The National Guard Bureau's Counterdrug Program office coordinates the process involving the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats, the Army and Air National Guard budget and financial management offices, and the individual state counterdrug programs. According to officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats, the process to complete the DD1415-3 takes 3 full weeks, and it then takes an additional 8 weeks, on average, for the funding to become available for state counterdrug programs. Figure 7 outlines the process to fund the National Guard counterdrug program. Appendix IV: Funding Provided by the Department of Defense under Congressional Appropriations Appendix V: Threat-Based Resource Model The National Guard Bureau's threat-based resource model has been used since fiscal year 2012 to help determine funding distribution percentages for the state counterdrug programs. Between fiscal years 2013 and 2015, National Guard Bureau officials stated that they determined planned funding amounts based on a combination of historical funding levels and threat-based resource model threat percentages. According to officials, beginning in fiscal year 2016, funding aligned more closely with threat-based resource model threat percentages. However, National Guard Bureau officials stated that funding distribution percentages from the threat-based resource model were deemed unusable in fiscal year 2017 due to concerns they had with the amount of reporting and the quality of the data that was reported.
As a result, officials stated that the fiscal year 2016 threat-based resource model funding percentages were used to distribute fiscal year 2017 funding to state programs while National Guard Bureau officials revised the model for use in fiscal year 2018. Updates to the model included expanding the number of variables to better respond to changes in drug threats, adjusting the model so that it did not treat all drug seizure incidents and amounts equally, and increasing the number of data sources. Table 10 provides threat-based resource model percentages, and table 11 provides funding amounts, by state, for fiscal years 2012 through 2018.

Appendix VI: Comments from the Department of Defense

Appendix VII: Status of October 2015 Recommendations on National Guard Counterdrug Program

In October 2015, GAO issued a report on the National Guard counterdrug program titled Drug Control: Additional Performance Information Is Needed to Oversee the National Guard's State Counterdrug Program. In that report, we made two recommendations aimed at ensuring that resources are being efficiently applied to meet the National Guard counterdrug program's objectives. Table 12 provides an update on the status of the recommendations from that report.

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments: In addition to the contact named above, Rich Geiger (Assistant Director), Joy Booth, Carol Henn, Jesse T. Jordan, Amie M. Lesser, Shari Nikoo, Tobin J. McMurdie, Carol D. Petersen, Clarice Ransom, Michael D. Silver, Alexandra L. Stewart, and Sarah B. Warmbein made key contributions to this report.
Why GAO Did This Study

Since 1989, DOD has received billions of dollars to fund the National Guard's participation in a counterdrug program focused on domestic drug interdiction activities. DOD received $261 million for this program in fiscal year 2018. This program provides military support to assist state, local, and tribal law enforcement organizations with counterdrug activities and operates in 54 states and territories across the United States. Senate Report 115-125 included a provision for GAO to evaluate the National Guard counterdrug program. This report (1) evaluates the extent to which DOD has strategy and implementing guidance for the National Guard counterdrug program, and (2) assesses DOD's processes to approve states' counterdrug plans and distribute funding to the program, among other things. GAO reviewed DOD's counterdrug strategy and guidance; DOD funding and personnel data; and its processes to distribute funding.

What GAO Found

The Department of Defense (DOD) lacks current strategy and guidance to implement the National Guard counterdrug program. Although a number of key national-level strategies, such as the National Drug Control Strategy, have been updated since 2011 to address changing drug threats, GAO found that DOD's 2011 Counternarcotics and Global Threats Strategy has not been updated to reflect these changes. In addition, the National Guard lacks detailed procedures and processes for the states to implement the National Guard counterdrug program, such as how to conduct cross-state aerial reconnaissance. Without current strategy or guidance, it will be difficult for the National Guard to operate its counterdrug program effectively. DOD's processes to approve state counterdrug plans and distribute funding to the state-level counterdrug programs could be improved. Since at least 2009, DOD has provided funding to the states without first approving state plans for counterdrug activities, as required by statute.
GAO found that the delay in approval of state counterdrug plans has worsened since fiscal year 2009; in fiscal year 2018, approval took over 9 months (283 days); see figure below. In 2018, DOD took some steps to address the timely review of state plans, but GAO found that those steps did not rectify the problem. GAO also found that the process used by the National Guard to distribute funding to the states within the program does not incorporate DOD's strategic counternarcotics priorities, such as the U.S. southwest and northern border areas. GAO's work on results-oriented management states that strategy should inform program activities and resourcing. Until the National Guard's process to distribute funding to state counterdrug programs is improved, it risks directing funding toward lower priority counterdrug activities at the expense of higher priority activities.

What GAO Recommends

GAO is making a total of five recommendations, including, among others, that DOD issue a strategic framework that addresses current drug threats, the National Guard issue guidance with detailed procedures on how states should administer the program, DOD assess the revised process for approving state plans, and the National Guard incorporate DOD's strategic counternarcotics priorities into its funding distribution process. DOD concurred with GAO's recommendations.
Background

Treasury established HHF in February 2010 to help stabilize the housing market and assist homeowners facing foreclosure in the states hardest hit by the housing crisis. The HHF program is implemented by Treasury's Office of Financial Stability. Treasury obligated funds to 18 states and the District of Columbia. Treasury allocated funds to each state's HFA to help unemployed homeowners and others affected by house price declines. HFAs, in turn, design their own programs under HHF specific to local economic needs and circumstances pursuant to their contracts with Treasury. Treasury allocated $9.6 billion in HHF funding to 19 HFAs in five rounds. As described below, Treasury allocated $7.6 billion to participating HFAs during the first four rounds of funding, all of which occurred in 2010. HFAs were required to disburse these funds by December 2017.

Round one: In February 2010, Treasury allocated $1.5 billion to the HFAs in the five states that had experienced the greatest housing price declines: Arizona, California, Florida, Michigan, and Nevada.

Round two: In March 2010, Treasury allocated $600 million to the HFAs in five states with a large proportion of their populations living in counties with unemployment rates above 12 percent in 2009: North Carolina, Ohio, Oregon, Rhode Island, and South Carolina.

Round three: In August 2010, Treasury allocated $2 billion to the HFAs in nine of the states funded in the previous rounds, along with the HFAs for eight additional states and the District of Columbia, all of which had unemployment rates higher than the national average in 2009. The additional HFAs that received funding were Alabama, the District of Columbia, Georgia, Illinois, Indiana, Kentucky, Mississippi, New Jersey, and Tennessee.

Round four: In September 2010, Treasury allocated an additional $3.5 billion to the same 19 HFAs that received HHF funding through the previous rounds.
In December 2015, the Consolidated Appropriations Act, 2016 authorized Treasury to make an additional $2 billion in unused TARP funds available to existing HHF participants. In early 2016, Treasury announced a fifth round of HHF funding. According to Treasury and HFA officials and other stakeholders, by that time some of the participating HFAs had begun to wind down their programs by letting go of program staff or making other changes after they had disbursed most of their funding from the first four rounds. Treasury allocated this additional $2 billion in two phases.

Round five, phase one: In February 2016, Treasury allocated $1 billion to 18 of the HFAs that had previously been awarded HHF funds based on each state's population and utilization of previous HHF funds. In order to qualify for phase one funding, states had to have drawn at least 50 percent of their previously received funding.

Round five, phase two: In April 2016, Treasury allocated an additional $1 billion to 13 HFAs that applied and sufficiently demonstrated to Treasury their states' ongoing housing market needs and the ability to effectively utilize additional funds. The HFAs that received funding were California, District of Columbia, Illinois, Indiana, Kentucky, Michigan, Mississippi, New Jersey, North Carolina, Ohio, Oregon, Rhode Island, and Tennessee.

In conjunction with the fifth round of funding, Treasury extended the deadline for disbursement to December 31, 2021. Treasury also determined that HFAs must finish reviewing and underwriting all applications for final approval to participate in the program no later than December 31, 2020. HFAs that do not disburse HHF funds by the December 31, 2021, deadline will have to return the remainder of the funds to Treasury. See figure 1 for an overview of the allocation amounts and disbursement deadlines.

HHF Programs

Under HHF, HFAs designed locally tailored programs that address HHF's goals of preventing foreclosures and stabilizing housing markets.
These programs had to meet the requirements of the Emergency Economic Stabilization Act of 2008 and be approved by Treasury. Treasury categorizes programs into six types, which are discussed in detail later in this report, including programs that provide monthly mortgage payment assistance and programs that reduce the principal of a mortgage. Programs vary by state in terms of eligibility criteria and other details. HFAs contract with various stakeholders to implement HHF programs, including mortgage servicers and, in some cases, housing counseling agencies and land banks. The types of stakeholders involved vary depending on program design. For example, HFAs with blight elimination programs may choose to provide HHF funding to a local land bank to demolish and green blighted properties in distressed housing markets. Also, HFAs may contract with housing counseling agencies approved by the Department of Housing and Urban Development (HUD) to identify eligible applicants at risk of foreclosure. HFAs are required to report performance information on each of their HHF programs to Treasury on a quarterly basis. This information includes outputs, such as the number of homeowners assisted or properties demolished, as well as outcomes, such as the number of homeowners who are no longer participating in HHF programs. The specific types of performance information that Treasury requires HFAs to report vary depending on the program type and include both intended and unintended consequences of the program. For example, HFAs with mortgage payment assistance programs must report on the number of homeowners who have transitioned out of the program due to specific changes in their circumstances, such as regaining employment. 
HFAs do not have to report on the number of borrowers who transitioned out of the program into foreclosure sales, short sales, or deeds-in-lieu of foreclosure for their down payment assistance programs because the assistance is provided on behalf of a buyer who is purchasing, not selling or otherwise exiting, the home. Treasury provides HFAs with spreadsheet templates, which HFAs are to fill out and submit back to Treasury. The templates include data-reporting guidance in the form of a data dictionary, which describes the data elements HFAs are to report.

Participation Agreements

Participating HFAs' HHF programs are governed by a participation agreement, or contract, with Treasury that outlines the terms and conditions the HFA must meet in providing services as a recipient of HHF funds. Each agreement includes reporting requirements, program deadlines, and descriptions of permitted administrative expenses. Additionally, agreements include detailed descriptions of the HHF programs that Treasury has approved. Program descriptions include details such as eligibility criteria, structure of assistance, and the estimated number of participating homeowners. Participation agreements may be amended with Treasury approval to reflect changes to HHF programs, such as new requirements from Treasury or changes in the amounts HFAs allocate to each program. As an example, in 2015 Treasury added new conditions, called utilization thresholds, to each HFA's participation agreement. The thresholds establish the percentage of allocated funds each HFA was required to draw from its Treasury account by the end of each year from 2016 through 2018. If an HFA did not meet a threshold, Treasury reallocated a portion of the additional funds received during the fifth round to HFAs that did meet the threshold. If an HFA would like to make a change to an HHF program, the HFA must submit a request to Treasury that outlines the proposed change.
Treasury reviews the proposal through an interdisciplinary committee and, if the proposal is approved, amends the participation agreement. As of December 2017, the 19 participating HFAs had each received approval from Treasury and executed between 9 and 21 amendments to their individual participation agreements.

Treasury's Monitoring of HHF Addresses or Partially Addresses Leading Practices for Program Oversight

Treasury's policies and procedures to monitor HFAs' implementation of the HHF program address 10 leading monitoring practices, including practices related to the collection of periodic performance reports and validation of performance through site visits. However, Treasury's assessment of HFAs' internal control programs, development of performance indicators, documentation of goals and measures, and documentation of HFAs' monitoring could better address leading practices (see fig. 2).

Treasury Addressed 10 Leading Practices for Monitoring

Regular Monitoring of Policies and Procedures

Treasury created policies and procedures to guide regular oversight of HFAs' implementation of HHF. According to internal control standards for the federal government, management should design control activities to achieve objectives and implement control activities through policies, such as by periodically reviewing policies, procedures, and related control activities. In addition, management should establish and operate activities to monitor the internal control system and evaluate the results, for example through ongoing monitoring procedures and separate evaluations. Treasury documented procedures for key areas of its monitoring framework, including providing funds to HFAs, evaluating HFAs' requests to change their programs, collecting financial and performance information from HFAs, conducting site visits, and addressing fraud detection and mitigation for Treasury's staff.
Treasury regularly updates the policies and procedures it created and reviews its compliance oversight procedures annually. In addition, Treasury regularly conducts site visits to HFAs, as discussed below.

Risk-Based Monitoring Approach

Treasury uses a risk-based approach to selecting HFAs for its regular site visits. This approach is consistent with leading practices we have developed for managing fraud risk, which state that agencies should employ a risk-based approach to fraud monitoring by taking into account internal and external factors that can influence the control environment. In 2018, Treasury began using a point-based, 29-factor approach to selecting HFAs for site visits for compliance reviews, taking into account factors such as whether prior fraud was detected or reported, observations from HFAs' compliance reviews, administrative dollars spent compared to program assistance provided, and whether HFAs have documented blight-specific policies and procedures. According to Treasury staff, during site visits Treasury determines its test and sample sizes for a risk-based review of an HFA's programs. Treasury also uses a risk-based approach to responding to potentially impermissible payments, and according to Treasury staff, its responses depend on the circumstances. If an HFA notifies Treasury of issues related to inappropriate payments involving fraud, waste, or abuse, Treasury staff notify and work with the Office of the Special Inspector General for the Troubled Asset Relief Program (SIGTARP) to provide technical assistance as needed. In 2017, Treasury implemented additional procedures with regard to HFAs' administrative expenses. If Treasury identifies an administrative expense issue during a site visit, Treasury requires the visited HFA to undertake a multistep review of its administrative expenses, including reviewing additional administrative expenses if similar problems are identified during the initial review.
The HFA is required to reimburse HHF for any administrative expenses that were not made in accordance with federal cost principles. Additionally, Treasury may require the HFA to create a plan for corrective action.

Periodic Collection of Performance Reports and Data from Implementing Partners

Treasury collects performance information from participating HFAs on a regular basis, which a compliance team receives and reviews. These efforts are consistent with internal control standards, which state that management should use quality information to achieve the entity's objectives, such as by obtaining relevant data from reliable sources. Treasury tracks its receipt of agencies' quarterly performance reports and financial statements, as well as HFAs' annual internal control certifications. Quarterly performance reports include information about homeowners, such as the number of homeowners who receive or are denied assistance. These reports also include program-specific performance data, such as the median assistance amount, and outcomes, such as the number of program participants who still own their home. According to HFAs' participation agreements, HFAs are required to report performance information through the end of their programs. In addition, Treasury collects informal monthly updates from HFAs on their program performance and is in frequent contact with HFAs by phone to obtain information on HFAs' performance, including any challenges states are facing, according to Treasury staff and HFAs with whom we met. Treasury also collects reports on the impact of blight elimination programs, which HFAs with these programs are required to submit to Treasury.

Periodic Analysis of Performance Data

Treasury regularly analyzes the performance and financial data that it collects through quarterly performance reports, quarterly unaudited financial statements, and annual audited financial statements that HFAs are required to submit.
Periodic analysis of these materials is consistent with standards for internal control, which state that management should design control activities to achieve objectives and respond to risks, for example by establishing activities to monitor performance measures and indicators. Treasury uses information from quarterly performance reports to produce quarterly reports for the public on the number of homeowners who received or were denied assistance, among other things. Treasury also includes data on the extent to which states have spent their HHF funding in monthly reports to Congress. Additionally, Treasury analyzes quarterly unaudited and annual audited financial statements to monitor HFAs' spending of program funds and identify any areas of concern. According to Treasury staff, the agency also uses performance information HFAs report quarterly, such as the number of homeowners who receive or are denied assistance, to assess whether HFAs are making sufficient progress in effectively utilizing program funds to reach the targets for assisting homeowners.

Procedures for Ensuring Quality of Performance Data

Treasury has procedures to assess the quality of HFAs' performance data when reviewing quarterly performance reports and conducting site visits. These procedures are consistent with internal control standards, which state that management should use quality information to achieve the entity's objectives, such as by evaluating data sources for reliability. According to Treasury staff, beginning in the first quarter of 2018, Treasury required all participating HFAs to upload their performance data into a system that does basic data reliability testing, such as ensuring the numbers submitted by HFAs are consistent with data submitted for previous quarters. This system flags outliers or large changes for further review. Prior to this requirement, HFAs could use the system optionally.
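The kind of quarter-over-quarter consistency check described above can be sketched minimally. The field names, figures, and the 25 percent tolerance below are assumptions for illustration, not the system's actual rules:

```python
def flag_outliers(prev_quarter, curr_quarter, tolerance=0.25):
    """Return the fields whose values changed by more than `tolerance`
    (as a fraction of the prior quarter's value) since the last report."""
    flagged = []
    for field, prev in prev_quarter.items():
        curr = curr_quarter.get(field, 0)
        if prev and abs(curr - prev) / prev > tolerance:
            flagged.append(field)
    return flagged

# Hypothetical quarterly submissions from one HFA.
prev = {"homeowners_assisted": 400, "median_assistance": 12_000}
curr = {"homeowners_assisted": 900, "median_assistance": 12_500}
print(flag_outliers(prev, curr))  # ['homeowners_assisted']
```

Flagged fields would then go to reviewers for follow-up rather than being rejected outright, mirroring the human-review step the report describes.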
HFAs are able to upload their data as frequently as they want to check for errors or inconsistencies. After performance information is uploaded into the system, two Treasury staff review any issues flagged by the system and follow up with HFAs to resolve them. According to Treasury staff, as an additional validation step, Treasury staff conduct a reconciliation by checking whether the funds reported in HFAs' performance reports match the data in the HFAs' quarterly financial reports. After Treasury reviews each HFA's performance data, it combines that information to create quarterly reports. In addition, Treasury staff told us that they do a detailed review of HFAs' financial statements during site visits, including but not limited to the timeliness of financial reporting, corrections to reports after the reporting cycle, and supporting documentation for all categories of expenditures sampled during the review.

Roles and Responsibilities of Personnel Responsible for Monitoring

Treasury documents the offices that are responsible for receiving and reviewing monitoring materials, the deadlines for receiving this information, and the responsibilities of staff who execute internal control. This documentation is consistent with internal control standards, which state that management should implement control activities through policies, such as by documenting each unit's internal control responsibilities. The standards also state that management should remediate identified internal control deficiencies on a timely basis, such as by having personnel report internal control issues through established reporting lines. Treasury's policies and procedures document which offices are in charge of executing its monitoring procedures, such as collecting required documentation, conducting site visits, and evaluating HHF performance. Treasury informs HFAs of reporting lines to Treasury through phone calls and emails.
Treasury and HFA staff also noted that they are in frequent contact with each other regarding administration of the program.

Validation of Implementing Partners' Performance through Site Visits or Other Means of Verification

Treasury uses regular (at least biennial) site visits, biweekly calls with HFAs, and monthly informal performance updates as means of validating HFAs' performance. These practices are consistent with OMB guidance, which states that a federal awarding agency may make site visits as warranted by program needs. Treasury uses its site visits to assess HFAs' program implementation, conduct its own analyses of program results, review HFAs' use of program funds, and review HFAs' implementation of internal controls. According to Treasury staff, Treasury also uses site visits to corroborate the information HFAs report on their program performance and use of HHF funds. According to HFAs with whom we met, site visits typically last multiple days and include entrance and exit conferences between Treasury and HFA staff. During site visits, Treasury staff review documentation related to homeowners and properties associated with the programs, quality assurance processes, antifraud procedures, information technology and data security, finances, and legal matters. After the site visit, Treasury issues a report documenting its observations. Within 30 days of receiving Treasury's written report, HFAs are required to provide Treasury with a written response describing how they will address any issues of concern.

Procedures for Project Closeout

Treasury included some procedures for project closeout in HFAs' participation agreements. Creating procedures for project closeout is consistent with OMB guidance, which states that agencies should close out federal awards when they determine that applicable administrative actions and all required work have been completed by the nonfederal entity.
Participation agreements describe various procedures for closing out HHF programs, including requirements for the return of unexpended funds to Treasury and final reporting and provisions for reimbursement of expenses. In addition, according to Treasury staff, Treasury is in the process of developing and issuing wind-down guidance for HFAs in stages to address specific areas of program activity. Agency officials also discussed winding down the HHF program during Treasury's 2018 Annual Hardest Hit Fund Summit. The annual summit is a meeting that HFAs, servicers, and other stakeholders are invited to attend to facilitate information sharing among stakeholders involved in HHF. At the 2018 summit, the agency discussed topics that included final compliance and financial reviews, program change requests, operational timelines, and budgeting and staffing as they relate to the wind-down of HHF programs and operations. In addition, as states have begun to close some of their programs, Treasury has issued clarifying guidance to HFAs in order to effectively wind down the HHF program, including on streamlining the process for requesting changes to programs. Treasury staff also performed outreach to each HFA in April 2018 about their wind-down plans and, according to Treasury staff, the agency expects to prepare written guidelines for HFAs on certain other topics related to winding down the program, including reporting requirements, as appropriate.

Consideration of Performance Information in Making Management Decisions

Treasury uses performance information to assess whether HFAs are performing at a satisfactory level. This practice is consistent with internal control standards, which state that management should establish and operate monitoring activities to monitor the internal control system and evaluate results, which can include evaluating and documenting the results of ongoing monitoring and separate evaluations to identify internal control issues.
In addition, management should remediate identified internal control deficiencies on a timely basis. This can entail management completing and documenting corrective actions to remediate internal control deficiencies on a timely basis. Treasury staff described the agency's process of assessing HFAs' performance as "holistic." As a part of this process, Treasury staff review the targets HFAs set for assisting households or demolishing blighted properties and monitor HFAs' utilization rates. According to Treasury staff, if performance and financial data suggest that an HFA is not making sufficient progress toward its performance targets or is drawing funds too slowly, Treasury collaborates with the HFA and the HFA must create a plan to improve its performance. If an HFA is not responsive to Treasury's efforts, Treasury issues a performance memorandum requiring the HFA to create a plan to address its deficiencies. As of October 2018, Treasury had issued performance memorandums to seven HFAs: five in 2012 and two in 2015. Additionally, as mentioned previously, Treasury issues a report to each HFA following each site visit describing any issues of concern Treasury identified. Treasury requires HFAs to provide the agency with a written response to the report within 30 days of the report date describing the HFA's plan for addressing any deficiencies.

Communication with External Parties to Address Risks and Achieve Objectives

Treasury regularly communicates with HFAs, servicers, and other stakeholders interested in HHF, which is consistent with internal control standards that state management should externally communicate the necessary quality information to achieve the entity's objectives. This can include communicating with, and obtaining quality information from, external parties using established reporting lines.
According to Treasury staff, Treasury holds biweekly calls with HFAs and servicers, facilitates issue-specific working groups between HFAs and stakeholders, and holds an annual summit related to HHF. HFA staff said Treasury staff are very responsive to program-related questions. Treasury's annual summit allows interested parties, such as HFAs, servicers, and other stakeholders, to discuss important issues related to HHF.

Treasury Partially Addressed Four Leading Practices

Identification, Evaluation, and Monitoring of Risks

To assist HFAs in designing their internal control activities, including defining program objectives, Treasury created an optional risk assessment matrix to help HFAs and their auditors identify and assess HFAs' risks. The matrix includes control objectives and example control activities, and it allows HFAs to determine their risk tolerances for each control objective. For example, for the risk of improper use of administrative funds, the matrix includes "ensuring that appropriate documentation exists to support HHF administrative expenses" as a control objective, and it lists routine review of administrative payments by internal auditors as an example control activity. HFAs can identify their risk tolerances as low, medium, or high in the matrix. This matrix is consistent with federal internal control standards, which state that management should define objectives clearly to enable the identification of risks and define risk tolerances. However, Treasury does not systematically collect or evaluate HFAs' risk assessments. HFAs' participation agreements require them to submit an annual certification of their internal control programs by an independent auditor to Treasury. According to Treasury staff, independent auditors sometimes choose to include HFAs' risk assessments with the annual certification, and during site visits Treasury obtains documentation of HFAs' internal control programs, which sometimes includes their risk assessments.
Outside of these instances, Treasury does not routinely collect HFAs' risk assessments. Further, in those instances when Treasury does collect them, it does not analyze the assessments to evaluate whether the risk levels are appropriate. While Treasury does a more in-depth evaluation of HFAs' internal controls during site visits, this review does not include evaluating the appropriateness of the risk levels HFAs identified. For example, one of the risk assessment matrixes we reviewed listed the HFA's administrative expenses as low-risk despite this HFA having a history of alleged improper-payment-related issues with its HHF program, which Treasury's review would not have evaluated. Treasury officials told us that during site visits they may discuss the risk levels that HFAs determine, but Treasury has not asked or required any HFAs to change a risk level. Failure to collect and evaluate HFAs' risk assessments is inconsistent with an important practice for preventing fraud we have previously identified: monitoring and evaluating the effectiveness of preventive activities, including fraud risk assessments and the antifraud strategy, as well as controls to detect fraud and response efforts. Further, according to internal control standards, management should identify, analyze, and respond to risks related to achieving the defined objectives, and an oversight body may oversee management's estimates of significance so that risk tolerances have been properly defined. According to Treasury staff, the risk assessment matrixes are intended for use by HFAs and their independent auditors in preparing for the annual certification. They said that risk tolerances, or levels, are to be assigned by HFAs and their independent auditors, not by Treasury, and that it would be inappropriate for Treasury to interfere with their determination.
However, agreed-upon procedures performed by HFAs' independent auditors do not provide assurance or a conclusion as to whether HFAs' risk levels are appropriate. For example, in two agreed-upon procedures reports we reviewed, the auditors stated that the procedures performed were based on the HFAs' risk matrixes, but they did not mention assessing whether the risk levels assigned to different controls were appropriate. Treasury staff also said that Treasury expands its sample size and criteria for specific programs or categories of expenses during a compliance review where repeated or significant observations have been previously found. However, by not collecting and evaluating HFAs' risk assessments, Treasury limits its ability to monitor the effectiveness of HFAs' preventive activities, controls to detect fraud, and response efforts. In addition, Treasury is missing an opportunity to help ensure that risk levels are appropriate.

Documentation That Monitoring Plans Were Executed

Treasury's documentation of its efforts to monitor HFAs is consistent with internal control standards, which state that management should establish and operate activities to monitor the internal control system and evaluate results and remediate deficiencies on a timely basis. More specifically, the standards cite as characteristics of these principles that management evaluate and document the results of ongoing monitoring and separate evaluations to identify internal control issues, and determine appropriate corrective actions for internal control deficiencies on a timely basis. Treasury addresses these criteria by documenting its monitoring findings through site visit reports, as previously discussed. Treasury requires HFAs to provide the agency with a plan to address any issue described in the site visit report within 30 days. In addition, Treasury addresses these criteria by documenting HFAs' responses and assessing whether the issue has been addressed at the next site visit.
Furthermore, Treasury sets deadlines for and documents receipt of HFAs’ annual internal control certifications, quarterly financial and performance reports, and annual audited financial statements. When underperforming HFAs are not responsive to Treasury’s attempts to work with them to improve their performance, Treasury documents the issues it has found and requires the HFAs to create and submit a corrective plan. Treasury also directs HFAs to establish and execute their own internal control system, but it does not require HFAs to consistently document which of their staff are responsible for internal control execution. HFAs were required to submit staffing information within 90 days of joining HHF. However, HFAs are not required to regularly update this information. Further, Treasury’s written procedures for reviewing HFAs’ internal control programs during site visits do not include reviewing documentation of which HFA staff are responsible for responding to or reporting internal control issues. These practices are inconsistent with standards for internal control, which state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. The standards also note that effective documentation can assist management’s design of internal control by establishing the “who, what, when, where, and why” of internal control execution. We asked Treasury if it encouraged HFAs to document which personnel are in charge of executing internal control procedures. Treasury staff referred us to the initial requirement that HFAs submit staffing information within 90 days of joining HHF and stated that there is no requirement that HFAs update this information. Further, Treasury staff said that during site visits they interview key HFA staff who execute internal controls and document these interviews. 
However, this practice does not help ensure that HFAs maintain updated information about which of their staff are responsible for internal control execution. Without requiring HFAs to routinely update their documentation, particularly as HFAs are winding down their HHF programs and staff begin to turn over, Treasury cannot be assured that HFAs are keeping their staff updated about who is responsible for monitoring issues and internal control execution.

Development of Relevant Output and Outcome Performance Indicators

Treasury and HFAs created quantitative output and outcome measures to assess HFAs’ performance. For example, Treasury created utilization thresholds to help ensure HFAs spend their HHF funds in a timely manner. Also, HFAs created performance targets to estimate the number of homeowners they could assist (or blighted properties they could demolish) through HHF. These activities are consistent with an attribute of successful performance measures—specifically, that measures should have a numerical goal. However, some of Treasury’s performance measures are not clearly stated, and Treasury did not create consistent methodologies for HFAs to use to assess the performance of their HHF programs. In our previous work on attributes of successful measures, we identified that measures should be clearly stated and that the name and definition should be consistent with the methodology used to calculate them. While Treasury provided HFAs with a data dictionary to describe the information HFAs are required to report, Treasury defined the term “unique applicants” in a manner that allows HFAs to count applicants differently, leading to inconsistencies in HFAs’ methodologies for calculating some performance measures. As discussed later in this report, Treasury also allowed and sometimes required HFAs to self-define some data elements.
Additionally, performance measures should indicate how well different organizational levels are achieving goals. However, Treasury did not design a consistent methodology for HFAs to use to develop targets for the number of homeowners and properties their HHF programs may assist, and as discussed later in this report, HFAs we interviewed used different methodologies. Because some of Treasury’s performance measures are not clearly stated and because Treasury did not design consistent methodologies for HFAs to use in setting targets, as HFAs close down their HHF programs, Treasury has a limited ability to compare performance across HFAs or aggregate these data to evaluate how well the HHF program as a whole is achieving its goals.

Documentation of Goals and Measures and Their Relationship to Program Outputs

Treasury created goals and measures to assess HHF performance, consistent with a practice we previously identified of creating performance goals and measures that address important dimensions of program performance and balance competing priorities. Treasury addressed this practice by creating utilization thresholds for HFAs and inserting them in HFAs’ participation agreements. Treasury also addressed this practice by documenting its performance measures, using standardized spreadsheets through which HFAs regularly report on outputs and outcomes related to the services provided to distressed homeowners. However, Treasury has not explicitly documented the relationship between program outputs and the overall goals of the HHF program, and it does not generally require HFAs to establish intermediate goals unless the HFA has not met Treasury’s performance expectations. This is inconsistent with practices we previously identified relating to results-oriented performance goals and measures.
Among these practices are including explanatory information on goals and measures in performance plans and using intermediate goals to show progress or contributions toward intended results. The main goals of HHF are to prevent foreclosures and stabilize housing markets. However, Treasury has not documented the relationship between many of the program outputs it tracks and the main goals of the HHF program. According to Treasury, the relationship between its outputs and the goals of HHF can be inferred through various memorandums and materials it issued when HHF was created. However, these documents do not explicitly explain the rationale for the use of these output measures to assess HHF’s ability to stabilize neighborhoods and prevent foreclosures. By not documenting the relationship between HHF’s program outputs and services and the overall goals of the HHF program or requiring all HFAs to set intermediate goals, Treasury missed the opportunity to more proactively articulate a results-oriented focus for the HHF program.

Most HFAs Have Met Thresholds for Withdrawing Funds, but Inconsistent Targets and Outcome Measures Limit the Assessment of Program Performance

Most Homeowners Participating in HHF Were Assisted through Mortgage Payment Assistance Programs

As of December 2017, the 19 participating HFAs had 71 active HHF programs. Active HHF programs fall under one of six Treasury-defined program types: mortgage assistance, reinstatement, transition assistance, principal reduction, down payment assistance, and blight elimination. Participating HFAs may have implemented additional HHF programs, but these programs had either stopped disbursing funds or had not received a total allocation from Treasury at the time of our review. Individual HFAs may implement multiple programs—for example, the Mississippi HFA had two active programs, and the South Carolina HFA had five.
The most common type of HHF program as of December 2017 was mortgage assistance, as shown in table 1. All 19 HFAs had active mortgage payment assistance programs as of December 2017. In contrast, 3 HFAs had active transition assistance programs. As of December 2017, we found that the 71 active HHF programs had assisted approximately 400,000 homeowners and demolished almost 24,000 blighted properties. According to Treasury data, the majority of homeowners who received HHF assistance participated in a mortgage payment assistance program. Treasury data also indicate that transition assistance programs assisted the smallest number of homeowners relative to other HHF program types (see table 2). HHF programs of the same program type can vary in a number of ways, including eligibility criteria, length of time implemented, and number of homeowners assisted. Within each program type, HFAs designed programs that sometimes varied based on specific housing needs. For example, while both the Nevada and Florida HFAs had active reinstatement programs as of December 2017, these programs had different eligibility criteria. The Nevada HFA’s reinstatement program targeted low-to-moderate income homeowners who had fallen behind on their mortgages. The Florida HFA offered a similar reinstatement program for delinquent mortgages but also offered a program for senior homeowners who had fallen behind on property taxes and other fees. HHF programs also varied by duration and the amounts of assistance provided as of December 2017. For instance, since all HFAs initially launched mortgage payment assistance programs at the beginning of HHF, these programs have been active for an average of 7 years. In contrast, HFAs began implementing down payment assistance programs in 2015. Additionally, the median amount of assistance provided varied by program type. 
According to analysis of Treasury data from 2010 through 2017, assistance ranged from a median amount of $4,000 per household for transition assistance programs to over $42,000 per household for principal reduction programs. The HHF program is beginning to wind down. As of September 2018, Treasury had disbursed $9.1 billion of the $9.6 billion obligated under HHF. According to Treasury officials, although HFAs may continue issuing new approvals through December 31, 2020, most states have already begun to close down HHF programs or will do so by the end of 2018 as they exhaust their available funds. These include California and Florida, the two largest states in the program.

Most HFAs Have Met Thresholds for Withdrawing Funds from Treasury

According to Treasury officials, during the fifth round of funding Treasury established new conditions for HFAs, called utilization thresholds, to help maximize the use of the $2 billion in newly available funds. According to documentation from Treasury, if an HFA does not meet its utilization threshold, Treasury will reallocate a portion of the unused funds to HFAs that did. The amount reallocated to each HFA is determined by state population, the percentage of funds drawn by HFAs, and other factors. The utilization thresholds for 2016 and 2017 were structured as follows:

2016. If an HFA did not draw at least 70 percent of its funding from rounds one through four by December 31, 2016, 50 percent of its round five funding would have been reallocated.

2017. If an HFA did not draw at least 95 percent of its funding from rounds one through four by December 31, 2017, 75 percent of its round five funding would have been reallocated.

Most HFAs have met Treasury’s 2016 and 2017 utilization thresholds. More specifically, the 18 HFAs eligible for round five funding met the 2016 utilization threshold. As a result, Treasury did not reallocate any HHF funds for that year.
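The threshold rules just described amount to a simple percentage test. The sketch below illustrates that logic with hypothetical figures (it is not Treasury's actual model, and the subsequent redistribution of forfeited funds by state population and other factors is not modeled):

```python
# Illustrative sketch of the 2016 and 2017 utilization thresholds described
# above. All dollar amounts are hypothetical; the redistribution of any
# forfeited funds to other HFAs is not modeled.

def round_five_forfeiture(drawn_share: float, round_five_funds: float, year: int) -> float:
    """Return the portion of an HFA's round five funding subject to reallocation.

    drawn_share: fraction of rounds one through four funding the HFA had
    drawn by the year-end deadline.
    """
    thresholds = {2016: (0.70, 0.50), 2017: (0.95, 0.75)}
    if year not in thresholds:
        raise ValueError("sketch covers only the 2016 and 2017 thresholds")
    required, forfeit_share = thresholds[year]
    if drawn_share >= required:
        return 0.0  # threshold met; nothing is reallocated
    return round_five_funds * forfeit_share

# An HFA that drew 70 percent of rounds one through four met the 2016
# threshold but fell short of the 2017 one:
print(round_five_forfeiture(0.70, 8_900_000, 2016))  # 0.0
print(round_five_forfeiture(0.70, 8_900_000, 2017))  # 6675000.0
```

The widening of both the drawdown requirement and the forfeiture share between 2016 and 2017 reflects the escalating pressure on HFAs to use their funds as the program wound down.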
As of December 2017, 17 of the 18 HFAs eligible for round five funding met the 2017 utilization threshold. The Nevada HFA drew 70 percent of its funding for rounds one through four as of December 31, 2017, and therefore did not meet the 2017 utilization threshold. As a result, Treasury reallocated approximately $6.7 million of the Nevada HFA’s unused fifth round HHF funds to the 17 other HFAs. Treasury also established a utilization threshold for 2018: if an HFA did not draw at least 80 percent of its participation cap by December 31, 2018, an amount equal to the portion of round five funding that had not been drawn from Treasury would have been reallocated. As of September 2018, all HFAs had met the 2018 utilization threshold, and Treasury had disbursed most of the funds obligated under HHF.

Data on the Extent to Which HHF Programs Met Targets Are of Limited Use Because Treasury Did Not Develop a Consistent Methodology for Calculating Targets

The targets that HFAs set are of limited use for evaluating the performance of individual programs, program types, HFAs, or the HHF program overall. In their participation agreements, HFAs were required to estimate the number of homeowners they intended to assist and, if they had a blight elimination program, the number of blighted properties they intended to demolish for each of their HHF programs. Treasury refers to these estimates as targets. HFAs that we spoke with used different methodologies to calculate these targets. For instance, one of the HFAs we spoke to calculated targets for the number of homeowners it could assist by dividing the program’s total allocation by the average amount of assistance it anticipated awarding to each homeowner. In contrast, another HFA calculated its target for assisting homeowners by dividing that program’s total allocation by the maximum amount of assistance homeowners could be awarded through the program.
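The effect of these differing methodologies can be seen in a simple comparison. The sketch below uses invented figures, not actual HFA allocations, to show how the two approaches described above produce different targets from the same program allocation:

```python
# Hypothetical illustration of the two target-setting methodologies
# described above; the allocation and award amounts are invented.

def target_from_average_award(total_allocation: int, anticipated_average_award: int) -> int:
    """One HFA's approach: allocation divided by the anticipated average award."""
    return total_allocation // anticipated_average_award

def target_from_maximum_award(total_allocation: int, maximum_award: int) -> int:
    """Another HFA's approach: allocation divided by the maximum possible award."""
    return total_allocation // maximum_award

allocation = 50_000_000  # hypothetical program allocation
print(target_from_average_award(allocation, 20_000))  # 2500 homeowners
print(target_from_maximum_award(allocation, 35_000))  # 1428 homeowners
```

Because the maximum possible award exceeds the anticipated average award, the second methodology yields a substantially lower target for the same funding, which is why targets set under inconsistent methods cannot be compared or aggregated across HFAs.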
According to Treasury staff, they did not develop a consistent methodology for HFAs to use in setting these targets because, in their view, HFAs are most familiar with local conditions and should have flexibility in adjusting the program criteria or creating new programs based on these conditions. Internal control standards state that management should define objectives clearly to enable the identification of risks and define risk tolerances. In particular, the standards note the importance of stating measurable objectives in a form that permits reasonably consistent measurement. Further, our guide to designing evaluations states that where federal programs operate through multiple local public or private agencies, it is important that the data agencies collect are sufficiently consistent to permit aggregation nationwide, which allows evaluation of progress toward national goals. Because Treasury did not develop a consistent methodology for HFAs to use when setting performance targets, the targets HFAs developed do not permit consistent measurement of program performance or an evaluation of how well the HHF program as a whole met its goals. However, with the program beginning to wind down, any changes going forward would not improve the consistency of previously collected data or Treasury’s ability to evaluate the program as a whole.

Treasury Collects Information on Outcomes for Some HHF Programs, but This Information Is of Limited Use

Treasury Requires HFAs to Report Some Outcome Information for Four Program Types

Treasury collects quarterly data on outcomes from HFAs that implement four of the six HHF program types: mortgage payment assistance, principal reduction, reinstatement, and transition assistance programs. HFAs must track outcomes, both intended and unintended, until a household is no longer involved with an HHF program.
Intended outcomes include, for example, the number of homeowners who completed or transitioned out of an HHF program as a result of regaining employment. Unintended outcomes include the number of homeowners who transitioned out of an HHF program into a foreclosure sale. The type of outcomes Treasury requires HFAs to track depends on the program type. Treasury did not design outcome measures in a way that would permit it to use these data to evaluate whether HFAs or the overall program are achieving the stated goals. More specifically, Treasury officials told us that the data they collect on outcomes cannot be used to compare the outcomes achieved by different HFAs or through different HHF program types. According to Treasury officials, HFAs have historically had different interpretations of Treasury’s outcome measures. Treasury revised its template for HHF reporting in 2015 and 2017 to clarify certain performance-related terms. However, Treasury officials told us that conclusions drawn from HHF data on some outcomes are of limited use because HFAs interpret Treasury’s guidance on these data differently. Additionally, after it made revisions to guidance on performance reporting in 2015, Treasury allowed—and in some cases required—HFAs to self-define certain data elements. For example, Treasury required HFAs to define how they calculate the median principal forgiveness awarded by an HHF program. As previously discussed, a key attribute of effective performance measurement is clearly stated performance measures with names and definitions that are consistent with the methodology used to calculate the measure. Additionally, we have noted in our guide to designing evaluations that a program’s outcomes signal the ultimate benefits achieved by a program and should be considered when evaluating a program. Further, OMB has set the expectation that agencies should conduct evaluations of federal programs.
However, because Treasury did not clarify certain outcome measures until 5 years into the program, or take steps to ensure that HFAs calculated alternative outcomes consistently, even after Treasury clarified its reporting guidance, the alternative outcomes data that Treasury collects are of limited use for evaluating the performance of HFAs, HHF programs by program type, or the HHF program overall. As many programs are closing, further clarification or changes would not capture the full scope of the program and would not improve such evaluations.

Treasury Requires HFAs with Blight Elimination and Down Payment Assistance Programs to Conduct Impact Studies

Treasury requires HFAs with blight elimination and down payment assistance programs to identify indicators that are intended to track and quantify the HHF program’s impact on targeted areas, although HFAs are not required to report outcomes data to Treasury in their quarterly performance reports for these program types. According to Treasury, blight elimination and down payment assistance programs are focused on stabilizing housing markets in targeted distressed areas to prevent foreclosures, and therefore do not have individual-level outcomes for HFAs to report in quarterly performance reports. Treasury officials told us that the impact of these program types upon neighborhoods, such as increases in the values of properties in neighborhoods where down payment assistance or blight elimination programs were used, may not be observable immediately but may appear over time. As of August 2018, four of eight HFAs with blight elimination programs had submitted impact studies to Treasury. Also, all HFAs with down payment assistance programs have submitted studies to Treasury. Three blight elimination program impact studies suggest that the programs had positive impacts on targeted areas, although two of the studies have important limitations.
Studies on the programs in Michigan and Ohio found that home prices increased in communities where blighted properties were demolished. For example, the Ohio study found there was about a 4-dollar increase in home values for every dollar spent on the HHF-funded blight elimination program. However, this study examined only 1 of the 18 counties that were served by the Ohio HFA’s blight elimination program. A study on the Illinois program found that certain key economic indicators had improved over a 6-year period in areas targeted by the program. For example, the percentage of negative equity mortgages in 9 of the 10 areas studied declined by an average of 7 percent between 2010 and 2016. However, the findings of this study do not isolate the independent effect of the Illinois HFA’s blight elimination program because other factors, such as local economic conditions, could also affect the performance of key economic indicators.

Stakeholders Identified a Variety of Challenges in Implementing HHF Programs

Treasury, HFAs, and Mortgage Servicers Described Challenges Related to Implementing Programs

HHF stakeholders with whom we spoke described challenges in implementing HHF programs related to staffing and multiple funding rounds, program implementation, outreach to borrowers, program accessibility, the variety of programs and their status, and external factors. Both Treasury staff with responsibilities for monitoring HFAs’ implementation of HHF and stakeholders told us that these were the types of topics discussed during regular phone calls and annual meetings. Stakeholders included staff from four HFAs that are implementing HHF programs, mortgage servicers and housing counseling agencies that are involved with HHF, and other interested organizations, including those that work with HFAs.

Staffing and multiple funding rounds. All four HFAs and various stakeholders with whom we spoke told us that staff turnover at HFAs presents challenges.
In some cases, turnover has been related to the way the HHF program has been funded. For example, staff from two HFAs mentioned that either they let staff go or their temporary staff found more permanent positions as the agencies spent down their initial HHF funds. When Congress authorized Treasury to make additional TARP funds available to HHF beginning in 2016, these HFAs had to hire and train new staff. Treasury officials told us that many HFAs encountered staffing challenges as a result of the program’s fifth funding round. Additionally, staff from two servicers and an organization that advocates for HFAs told us that HFA turnover presents challenges because it takes time for new staff to become familiar with the program and for programs to ramp back up.

Program implementation. Staff from most of the HFAs and servicers with whom we spoke, as well as Treasury staff and other stakeholders, told us that implementation of the HHF program was challenging. Specific implementation challenges mentioned by HFAs included creating an in-house information system to manage HHF data; managing refinancing requests from homeowners who have been awarded HHF funds (to help ensure the HFA’s place as a lien-holder); and sharing information with servicers. While Treasury helped to develop a system to facilitate the sharing of loan-level information for the HHF program, one HFA and some servicers noted that the system has not always worked smoothly. Additionally, Treasury staff told us that a challenge HFAs are currently facing is the wind-down of the HHF program. They stated that HFAs must determine how they should advertise to the public, internal staff, and external partners that programs are closing; when they should stop accepting applications; and what resources are available for activities related to program closeout.

Outreach to homeowners. All four HFAs and an advocacy organization told us that it can be challenging to effectively reach eligible homeowners.
As an example, staff from one HFA told us that housing counseling agencies have been an effective tool for making homeowners aware of HHF programs but that there are fewer foreclosure counselors available to homeowners now compared to when the HHF program started in 2010. Staff from an HFA that closed its HHF programs to new applicants after the initial funding rounds told us that it was challenging to communicate to the public, and therefore to potential clients, that its HHF programs were reopening after they received additional funding. Additionally, a representative of a nonprofit organization that works to address challenges in the mortgage market told us that many people did not know about the HHF program and that program information was hard for consumers to find on many states’ websites.

Program accessibility. According to academic research and two stakeholders (an advocacy group and a housing counseling agency), the accessibility of an HFA’s program can affect program participation. A 2014 study of Ohio’s HHF program found that the design of the program hampered accessibility and therefore program participation. The program was designed to require registrants (those who started the application process) to continue the application process by working with a housing counseling agency. The study found that registrants who lived within 5 miles of their assigned housing counseling agency submitted a complete application almost 32 percent of the time, while those who lived over 50 miles away submitted a complete application about 18 percent of the time. Similarly, a representative for an organization that advocates on behalf of low-income homeowners noted that the design of one state HHF program requires applicants to meet with specific housing counseling agencies to complete the application process. However, the housing counseling agencies to which applicants are assigned may not be nearby.
The representative stated that in some cases, homeowners are assigned to a housing counseling agency that is located 3 or 4 hours away from where the homeowners live. According to the advocacy group representative, this design is particularly challenging for elderly homeowners who may have trouble applying online and need personal help. Additionally, representatives for a housing counseling agency told us that their state HFA stopped involving community organizations to guide applicants throughout the application process once the HFA received additional HHF funding in 2016 and instead chose to work with applicants directly. They said this design may hurt homeowners who do not live near the HFA and would benefit from in-person assistance that could be provided close to their homes. A representative from the state’s HFA confirmed that the HFA decided to work directly with applicants once it received additional HHF funds in 2016. The representative stated that while homeowners could also apply for HHF assistance online (after the HFA changed the program design in 2016), the HFA’s system did not accept electronic signatures. Thus, homeowners without the ability to print and scan documents would need to come to the HFA’s office to complete the application process.

Variety of programs and their status. Treasury officials noted that the wide variety of programs that HFAs are implementing can create operational challenges for HFAs. As an example, the officials explained that HFAs may encounter challenges when their programs require coordination with local partners. For example, land banks can encounter delays in acquiring properties for demolition, and contractors may not do demolition work properly or may attempt to increase the amounts that they charge for their work after winning a contract. Five mortgage servicers with whom we spoke described similar challenges.
For example, representatives from one servicer told us that it was challenging to work with the 19 different HFAs because they all implemented different HHF programs. These representatives added that it was particularly challenging if an HFA had a change in either leadership or points of contact for the HHF program. Another servicer explained that servicers have to review each HFA’s participation agreement and subsequent updates. This servicer noted that updates to agreements can create challenges, as the servicer needs to determine whether it can provide what the HFA is requesting. Representatives from this and a third servicer told us that it would have been helpful for servicers to have an up-to-date list of active HHF programs. Further, one servicer told us that it is challenging to help homeowners understand that each HFA and program has different requirements and guidelines. As previously discussed, Treasury communicates information to stakeholders, such as servicers, through regular conference calls. However, Treasury expects HFAs to keep their servicers abreast of the status of HHF programs because HFAs contract directly with servicers. Representatives from one HFA noted that it was challenging to keep servicers updated on changes to their HHF programs. For example, they reported that when the HFA made changes to its unemployment program, servicers confused the program with another of the agency’s HHF programs. The representatives also stated that they have had to make many phone calls to try to keep servicers up to date.

External factors. Treasury officials and other stakeholders noted that external factors such as changing market needs and natural disasters have created challenges for some HFAs. Treasury officials noted that some HFAs have had to change their HHF programs over time to respond to changes in local housing conditions.
An organization that advocates for HFAs as well as an HFA similarly noted that changing housing markets present challenges for HFAs, which have to adjust their program offerings in an effort to continue to serve homeowners. As previously discussed, HFAs must obtain Treasury approval to add or revise their HHF programs, and they must document the changes by amending participation agreements. Treasury officials also noted that natural disasters can affect HHF programs because HFAs have to turn their attention to post-disaster housing needs. Additionally, Treasury officials stated that after a natural disaster it can become difficult to verify the eligibility of applicants, particularly if key documents have been lost or communication channels with homeowners or servicers are affected.

Treasury and SIGTARP Also Identified Challenges through Their Monitoring and Oversight Activities

Through its on-site monitoring efforts, Treasury has identified issues that participating HFAs must address for their HHF programs. During on-site reviews in 2016 and 2017, Treasury staff assessed selected HFAs’ efforts in one or more Treasury-identified areas. As previously noted, Treasury’s policy at the time of our review was to conduct on-site reviews of each participating HFA at least once every 2 years. In 2016 Treasury conducted on-site monitoring visits for 14 HFAs and identified issues that the HFAs needed to address to improve their HHF programs. Issues Treasury identified primarily fell into two areas. The first of these was monitoring processes and internal controls—for example, Treasury found that one HFA had not developed documentation of its compliance procedures for a down payment assistance program. The other primary area was homeowner eligibility—for example, Treasury found that an HFA had misclassified the reasons that some homeowners were not admitted into the state’s HHF program. In 2017 Treasury conducted site visits to 15 HFAs.
For this period, Treasury’s most common issues related to homeowner eligibility and administrative expenses. According to Treasury officials, the increase in issues related to administrative expenses between 2016 and 2017 was a result of greater agency focus on this topic. Treasury observed, for example, that one HFA lacked sufficient documentation to support some administrative expenses and that another HFA had misclassified some administrative expenses. As previously discussed, HFAs are required to provide Treasury with a written plan describing how they will address issues Treasury identifies and reimburse HHF for any impermissible expenses. Through its oversight activities, SIGTARP reported that some participating HFAs have encountered challenges related to appropriate use of administrative expenses, management of their programs, and blight removal. In August 2017, SIGTARP reported that participating HFAs used $3 million in HHF funds for unnecessary expenses. The report maintained that some HFAs were using their administrative funds for expenses that were unnecessary. In a May 2018 hearing, SIGTARP testified that some HFAs were not following federal cost principles related to administrative expenses. Additionally, SIGTARP has issued reports describing mismanagement of the HHF program by specific HFAs, as well as challenges related to blight removal. While Treasury has disagreed with the dollar amount of administrative expenses used inappropriately by HFAs, it has also worked with HFAs and SIGTARP to address SIGTARP’s findings. 
Conclusions

As HHF programs begin to close and participating HFAs take steps to ensure they spend all of their HHF funds before the program deadline, opportunities exist in two areas for Treasury to manage risk and improve program operation and closeout: By not consistently and routinely collecting HFAs' risk assessments, Treasury limits its ability to monitor and evaluate the effectiveness of HFAs' preventive activities, controls to detect fraud, and response efforts. Further, by not evaluating these risk assessments, Treasury is missing an opportunity to help ensure that risk levels are appropriate. As HFAs wind down their HHF programs and HFA staff are relieved of their HHF-related positions, maintaining updated and accurate staffing information can help ensure that HFA staff are informed of who in their own offices is responsible for internal control execution. Because Treasury did not implement the HHF program in a manner that is consistent with standards for program evaluation design we previously identified, the performance data that Treasury collects do not provide significant insights into the program's effectiveness. More specifically, Treasury did not clearly state some of its performance measures; lacked documentation of the relationship between program outputs and overall goals; did not design consistent methodologies for HFAs to use in setting performance targets; and did not require participating HFAs to use consistent methodologies to calculate outcomes. As a result, Treasury cannot aggregate key performance data or compare performance data across HFAs or HHF program types to demonstrate the results of the HHF program. As we have previously reported, OMB has set the expectation that agencies should conduct evaluations of federal programs.
Moreover, our guide to designing evaluations states that where federal programs operate through multiple local public or private agencies, it is important to ensure the data these agencies collect are sufficiently consistent to permit aggregation nationwide in order to evaluate progress toward national goals. Although HHF programs must stop disbursing funds by December 31, 2021, many of the programs have already ended or are in the process of winding down, making it too late for changes to Treasury's approach to performance measurement to have a meaningful impact. However, we note that if Treasury were to extend the current program, as it did after Congress provided additional funding in 2015, or if Congress were to establish a similar program due to a future housing crisis, it would be useful at that time for Treasury to develop a program evaluation design that would allow the agency to assess overall program performance, as well as assess performance across HFAs and program types.

Recommendations for Executive Action

We are making the following two recommendations to Treasury:

The Assistant Secretary for Financial Institutions should annually collect and evaluate HFAs' risk assessments, which include HFAs' risk levels. (Recommendation 1)

The Assistant Secretary for Financial Institutions should ensure that the documentation listing the HFA staff responsible for internal control execution is updated routinely. (Recommendation 2)

Agency Comments

We provided a draft of this report to Treasury for review and comment. In its comments, reproduced in appendix IV, Treasury agreed with our recommendations and stated that it has already taken steps toward addressing them by enhancing the existing review procedures for HFA's risk assessments and staffing updates. Treasury also provided a technical comment, which we incorporated. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, and other interested parties.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. We will make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to (1) determine the extent to which the Department of the Treasury's (Treasury) monitoring of the Hardest Hit Fund (HHF) addresses leading practices for program oversight, (2) provide information on housing finance agencies' (HFA) active programs and the status of HFAs' progress toward program targets, and (3) describe challenges in implementing HHF programs that HFAs and others identified. To determine the extent to which Treasury's monitoring of HHF addresses leading practices for program oversight, we used a scorecard methodology to compare Treasury's monitoring policies and procedures, as implemented by 2016, against leading practices for an effective monitoring framework. To create the framework, we reviewed key reports and guidance related to monitoring, oversight, and performance management. In particular we reviewed relevant leading practices from internal control standards; previous GAO work on results-oriented performance goals and measures, key attributes for successful performance measures, characteristics for successful hierarchies of performance measures, and managing fraud risk; and Office of Management and Budget guidance on oversight.
Although Treasury is not required to follow all of the guidance that we identified, we determined that the guidance describes practices that are helpful for creating an effective monitoring framework. To select the practices for the scorecard, we focused on practices relevant to the structure of an oversight framework (including fraud risk); performance measures; goal setting; and communication with external parties. We reviewed key reports and guidance and then vetted our selected practices with stakeholders knowledgeable about performance measurement, design methodology, fraud risk, and the law. Based on this review and input, we consolidated identified practices into 14 leading practices to apply to Treasury's monitoring framework. We then assessed Treasury's policies and procedures against the framework. Specifically, we reviewed the agency's documented policies and procedures, reviewed documentation of how Treasury followed its policies and procedures, conducted interviews with Treasury staff responsible for overseeing HHF, and interviewed stakeholders, such as mortgage servicers, about Treasury's monitoring of HHF. We also interviewed staff from four HFAs about Treasury's monitoring of their programs; we selected the HFAs based on their mix of HHF programs, proportion of HHF funds disbursed, and geographic diversity. We also took into account whether stakeholders indicated that an HFA's implementation of the program was particularly successful or challenging. With regard to the documentation Treasury collects as part of its monitoring, we limited our review to its 2016 and 2017 monitoring activities, and we limited our review of Treasury's written policies and procedures to those implemented from January 2016 to September 2018. Two analysts independently reviewed agency policies and procedures to determine whether the policies were consistent with the 14 identified leading practices.
Any disagreements in the determinations were resolved through discussion or with a third party, including the General Counsel's office. We categorized each practice as follows:

Addressed: Treasury's policies and procedures reflect each component of the leading practice.

Partially addressed: Treasury's policies and procedures reflect some but not all components of the leading practice.

Not addressed: Treasury's policies and procedures do not reflect any of the components of the leading practice.

To describe active HHF programs and the status of HFAs' progress toward program goals, we reviewed program documents, administered a data collection instrument, and spoke with officials at four HFAs (selected as previously described) and Treasury. We defined active programs as those that had a total allocation approved by Treasury and were accepting applications and still disbursing funds to households or blight elimination projects as of December 2017. In order to identify which programs were active, we developed, collected, and reviewed a questionnaire in which HFAs provided information on when each of their HHF programs started and stopped disbursing funds. For each of the 71 active programs we identified, we reviewed quarterly performance reports as of December 2017 to compile descriptive information such as program outputs and outcomes. Through the review of program documentation and interviews with knowledgeable officials, we found that Treasury's output data were sufficiently reliable for our description of homeowners assisted and properties demolished. We also found that the data Treasury collected from HFAs on program outcomes were not reliable for the purpose of summarizing alternative outcomes by HFA or by program type.
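The three-category rating described above reduces to a simple rule on how many components of a practice are reflected in policy. The sketch below is a hypothetical illustration of that scoring logic only; the function name and component counts are invented and do not represent GAO's actual tooling:

```python
# Hypothetical sketch of the scorecard categorization described above.
# Component counts are illustrative, not GAO's actual assessment data.

def categorize(components_addressed: int, components_total: int) -> str:
    """Map a leading practice to one of the three scorecard categories."""
    if components_total <= 0:
        raise ValueError("a practice must have at least one component")
    if components_addressed == components_total:
        return "Addressed"            # every component reflected in policy
    if components_addressed > 0:
        return "Partially addressed"  # some but not all components reflected
    return "Not addressed"            # no components reflected

# Example: a practice with 4 components, 2 of them reflected in policy.
print(categorize(2, 4))  # Partially addressed
```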
Treasury officials noted that the conclusions that can be drawn from alternative outcome data are inherently limited, particularly for the purpose of making comparisons between HFAs or program types, due to HFAs interpreting certain outcome measures differently, among other factors. Additionally, by comparing Treasury’s outcome measures to leading practices, we found that their definitions were not clearly stated. We also identified four studies on the impact of HHF blight elimination programs and reviewed them for reliable methodology. We determined that one of the four studies was not reliable for the purpose of assessing the impact of blight programs on targeted areas. Two of the three studies that we determined to be reliable had important limitations. One study examined 1 of the 18 counties that were served by that HFA’s blight elimination program. The other study did not isolate the independent effect of the HFA’s blight elimination program because other factors, such as local economic conditions, could also affect the performance of key economic indicators. We reviewed each HFA’s contract with Treasury as of December 2017 to identify each program’s target for assisting homeowners or demolishing blighted properties. Through comparison with internal control standards, we found that these targets were not reliable for the purpose of describing HFAs’ progress toward program goals because they were not stated in a form that permitted reasonably consistent measurement. To describe the factors Treasury identified as challenges for the HHF program, we analyzed Treasury’s on-site compliance monitoring reports for 2016 and 2017. As a part of our analysis, we identified the HFAs that Treasury visited in 2016 and 2017 and the extent to which Treasury had observations related to five Treasury-identified areas: monitoring processes and internal controls, eligibility, program expenses and income, administrative expenses, and reporting. 
We also interviewed key stakeholders regarding their views of challenges related to implementation of the HHF program, particularly since 2012. We discussed challenges with Treasury staff with responsibilities for monitoring HFAs' implementation of the program; staff from four HFAs that are implementing HHF programs; six mortgage servicers that are involved with the HHF program; and two housing counseling agencies that are involved with the HHF program. For two of the HFAs with blight elimination programs, we conducted site visits to observe activities related to blight elimination. Additionally, we discussed challenges with other interested organizations, including an association for HFAs and an organization that brings together housing counselors, mortgage companies, investors, and other mortgage market participants to help address challenges in the mortgage market. Further, we reviewed reports issued by the Special Inspector General for the Troubled Asset Relief Program. We summarized the challenges that stakeholders described. We conducted this performance audit from November 2017 through December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Monitoring Scorecard

To determine the extent to which the Department of the Treasury's (Treasury) policies and procedures for monitoring and oversight address leading monitoring practices, we identified factors for an effective monitoring framework based on a review of key reports and guidance and input from stakeholders knowledgeable about performance measurement, design methodology, fraud risk, and the law.
To select the practices for the scorecard, we focused on factors relevant to the structure of an oversight framework (including fraud risk); performance measures; goal setting; and communication with external parties. We consolidated identified factors into 14 leading practices to apply to Treasury's oversight and monitoring framework. See table 3 for the 14 leading practices and their underlying factors.

Appendix III: Homeowners Assisted through the Hardest Hit Fund

As shown in table 4, housing finance agencies (HFA) were implementing from one to seven Hardest Hit Fund (HHF) programs (excluding blight programs) as of the fourth quarter of 2017. We included programs for which HFAs were disbursing funds to homeowners. As of December 2017, individual HFAs had assisted from 807 to 86,220 homeowners. Eight HFAs were implementing active blight elimination programs as of December 2017, as shown in table 5. The number of blighted properties demolished by individual HFAs ranged from 0 to 13,925. The Department of the Treasury's 2017 utilization threshold requires that HFAs draw at least 95 percent of their HHF funding from rounds one through four by December 31, 2017 (see table 6). As of December 2017, 17 of 18 HFAs had drawn 95 percent or more of their funding from rounds one through four. The Nevada HFA had drawn 70 percent of its funding from rounds one through four.

Appendix IV: Comments from the Department of the Treasury

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jill Naamane, Assistant Director; Lisa Moore, Analyst in Charge; Vida Awumey; Farrah Graham; John Karikari; Moira Lenox; Benjamin Licht; Dan Luo; John McGrail; Marc Molino; Jennifer Schwartz; Shannon Smith; Estelle Tsay-Huang; and Erin Villas made key contributions to this report.
Why GAO Did This Study

Treasury established the HHF program in 2010 to help stabilize the housing market and assist homeowners facing foreclosure in the states hardest hit by the housing crisis. Through HHF, Treasury has obligated a total of $9.6 billion in Troubled Asset Relief Program funds to 19 state HFAs. HFAs use funds to implement programs that address foreclosure and help stabilize local housing markets—for example, by demolishing blighted properties. Congress extended HHF in 2015, and HFAs must disburse all HHF funds by December 31, 2021, or return them to Treasury. The Emergency Economic Stabilization Act of 2008 included a provision for GAO to report on Troubled Asset Relief Program activities. This report focuses on the HHF program and examines, among other objectives, (1) the extent to which Treasury's monitoring addresses leading practices for program oversight and (2) HFAs' progress toward program targets. GAO reviewed documentation of Treasury's HHF monitoring practices, interviewed HFAs (selected based on differences in program types implemented) and Treasury officials, and reviewed information on how HFAs developed program targets.

What GAO Found

For its Housing Finance Agency Innovation Fund for Hardest Hit Markets (HHF), the Department of the Treasury (Treasury) has addressed or partially addressed all 14 leading monitoring practices that GAO identified. For example, Treasury periodically collects performance data from housing finance agencies (HFA) and analyzes and validates these data. However, while Treasury requires HFAs to regularly assess the risks of their programs, it does not systematically collect or analyze these assessments. As a result, Treasury is missing an opportunity to ensure that HFAs are appropriately assessing their risk. Also, Treasury does not require HFAs to consistently document which of their staff are responsible for internal control execution.
This documentation could help HFAs wind down their programs, particularly as staff turn over. Most HFAs met Treasury's goals for drawing down HHF funds, with $9.1 billion disbursed to HFAs as of September 2018. HHF programs have assisted hundreds of thousands of distressed homeowners since 2010. However, the data Treasury has collected are of limited use for determining how well HFAs met their goals for assisting households and demolishing blighted properties, or for evaluating the HHF program overall. For example, Treasury did not develop a consistent methodology for HFAs to use when setting performance targets, which limits Treasury's ability to compare across programs or assess the HHF program as a whole. Further, GAO's guide to designing evaluations states that where federal programs operate through multiple local public or private agencies, it is important that the data these agencies collect are sufficiently consistent to permit aggregation nationwide. Although HFAs have until the end of 2021 to disburse their HHF funds, many programs are beginning to close, making it too late for meaningful changes to Treasury's approach to performance measurement. However, should Congress authorize Treasury to extend the program beyond December 2021 or establish a similar program in the future, it would be useful at that time for Treasury to develop a program evaluation design that would allow the agency to assess overall program performance, as well as performance across HFAs and program types.

What GAO Recommends

GAO recommends that Treasury collect and evaluate HFAs' risk assessments and routinely update staffing documentation. Treasury agreed with these recommendations and stated that it has already taken steps toward addressing them.
Background

NASA’s Commercial Crew Program is a multi-phased effort that began in 2010. Across the five phases, NASA has engaged several companies using both agreements and contract vehicles to develop and demonstrate crew transportation capabilities. As the program has passed through these phases, NASA has generally narrowed down the number of participants. The early phases of the program were under Space Act agreements, which is NASA’s other transaction authority. These types of agreements are generally not subject to the Federal Acquisition Regulation (FAR) and allow the government and its contractors greater flexibility in many areas. Under these Space Act agreements, NASA relied on the commercial companies to propose specifics related to their crew transportation systems, including their design, the capabilities they would provide, and the level of private investment. In these phases, NASA provided technical support and determined if the contractors met certain technical milestones. In most cases, NASA also provided funding. For the final two phases of the program, NASA awarded FAR-based contracts. By using FAR-based contracts, NASA gained the ability to levy specific requirements on the contractors and procure missions to the ISS, while continuing to provide technical expertise and funding to the contractors. Under these contracts, NASA will also evaluate whether contractors have met its requirements and certify their final systems for use. In September 2014, NASA awarded firm-fixed-price contracts to Boeing and SpaceX, valued at up to $4.2 billion and $2.6 billion, respectively, for the Commercial Crew Transportation Capability phase. Under a firm-fixed-price contract, the contractor must perform a specified amount of work for the price negotiated by the contractor and government. This is in contrast to a cost-reimbursement contract, in which the government agrees to pay the contractor’s reasonable costs regardless of whether work is completed.
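The difference in government cost exposure between the two contract types can be illustrated with simplified, hypothetical arithmetic. Real contracts include fee structures, incentives, and clauses omitted here; the dollar figures below are invented for illustration:

```python
# Simplified, hypothetical comparison of what the government pays under the
# two contract types described above. All figures are illustrative.

def government_pays_fixed_price(negotiated_price: float, actual_cost: float) -> float:
    # Price is fixed regardless of what the work actually costs the contractor,
    # so the contractor bears any overrun (actual_cost is unused by design).
    return negotiated_price

def government_pays_cost_reimbursement(actual_cost: float) -> float:
    # The government reimburses the contractor's reasonable costs.
    return actual_cost

# A $100M negotiated price with a $120M actual cost: under a fixed-price
# contract the contractor absorbs the $20M overrun; under cost reimbursement
# the government pays it.
print(government_pays_fixed_price(100, 120))    # 100
print(government_pays_cost_reimbursement(120))  # 120
```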
Thus, under the fixed-price contracts, the contractors must generally bear the risk of cost overruns or schedule delays. During this phase, the contractors will complete development of crew transportation systems that meet NASA requirements, provide NASA with the evidence it needs to certify that those systems meet its requirements, and fly initial crewed missions to the ISS. Under the contracts, NASA and the companies originally planned to complete the certification review for each system by 2017. Figure 1 shows the spacecraft and launch vehicles for Boeing and SpaceX’s crew transportation systems. The Commercial Crew Transportation Capability phase contracts include three types of services:

Contract Line Item 001 encompasses the firm-fixed-price design, development, test, and evaluation work needed to support NASA’s final certification of the contractor’s spacecraft, launch vehicle, and ground support systems.

Contract Line Item 002 covers any service missions that NASA orders to transport astronauts to and from the ISS. Under this indefinite-delivery, indefinite-quantity line item, NASA has ordered six missions from each contractor. Each service mission is its own firm-fixed-price task order. NASA must certify the contractors’ systems before they can fly these missions.

Contract Line Item 003 is an indefinite-delivery, indefinite-quantity line item for any special studies, tests, and analyses that NASA may request. These tasks do not include any work necessary to accomplish the requirements under contract line item 001 and 002. As of July 2017, NASA had issued four orders under this contract line item to Boeing, worth approximately $1.8 million, including an approximately $180,000 study of the spacecraft’s seat incline. NASA has issued one order under this contract line item to SpaceX, which did not affect the value of this line item. The maximum value of this contract line item is $150 million.
NASA divided the certification work under contract line item 001 into two acceptance events: the design certification review and the certification review. An acceptance event occurs when NASA approves a contractor’s designs and acknowledges that the contractor’s work is complete and meets the requirements of the contract. The design certification review verifies the contractor’s crew transportation system’s capability to safely approach, dock, mate, and depart from the ISS, among other requirements. After the contractor has successfully completed all of its flight tests, as well as various other activities, the certification review determines whether the crew transportation system meets the Commercial Crew Program’s requirements. The contractors must complete both acceptance events to receive NASA certification. NASA and the contractors also identified discrete performance-based events, called interim milestones, which occur as the contractors progress toward the two acceptance events. Each interim milestone has pre- determined entrance and exit criteria that establish the work that must be completed in order for the contractor to receive payment. The interim milestones serve several functions, allowing the government to finance work from development to completion, review the contractors’ progress, and provide approval to proceed with key demonstrations and tests. The program also uses these milestones to inform its annual budget request. Since the contracts were awarded, the Commercial Crew Program and the contractors have agreed to split several of the interim milestones. The contractors have also added new milestones, in part to capture changes in their development plans. NASA has also made changes to the contracts that have increased their value. While the contracts are fixed-price, their values can increase if NASA adds to the scope of the work or otherwise changes requirements. 
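The interim milestones described above pair pre-determined entrance and exit criteria with a payment that the contractor receives only once the defined work is complete. The record below is a hypothetical sketch of that structure; the field names, criteria, and dollar amount are invented for illustration and do not come from the contracts:

```python
# Hypothetical sketch of an interim-milestone record as described above.
# Field names, criteria, and the payment amount are illustrative only.
from dataclasses import dataclass, field

@dataclass
class InterimMilestone:
    name: str
    entrance_criteria: list = field(default_factory=list)
    exit_criteria: list = field(default_factory=list)
    payment_usd: float = 0.0
    completed: bool = False

    def payable(self) -> float:
        # The contractor receives payment only after the milestone's
        # pre-determined work is complete.
        return self.payment_usd if self.completed else 0.0

m = InterimMilestone(
    name="Qualification test readiness",            # invented milestone name
    entrance_criteria=["test plan approved"],       # invented criterion
    exit_criteria=["test data reviewed by NASA"],   # invented criterion
    payment_usd=50_000_000,                         # illustrative amount
)
print(m.payable())  # 0.0 -- exit criteria not yet met
m.completed = True
print(m.payable())  # 50000000
```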
As of July 2017, NASA had increased the value of contract line item 001 for Boeing by approximately $48 million for hardware and software requirement changes, and contract line item 001 for SpaceX by approximately $91 million for a hardware requirement change and the addition of cargo during an ISS test flight. In our February 2017 report, we found the following:

Both of the Commercial Crew Program’s contractors have made progress developing their crew transportation systems, but both also have aggressive development schedules that are increasingly under pressure. Both Boeing and SpaceX had determined that they would not be able to meet their original 2017 certification dates, and both expected certification to be delayed until 2018. We found that the schedule pressures were amplified by NASA’s need to provide a viable crew transportation option to the ISS before its current contract with Russia’s space agency runs out in 2019. If NASA needs to purchase additional seats from Russia, the contracting process typically takes 3 years. Without a viable contingency option for ensuring uninterrupted access to the ISS in the event of further Commercial Crew delays, we found that NASA was at risk of not being able to maximize the return on its multibillion dollar investment in the space station.

The Commercial Crew Program was using mechanisms laid out in its contracts to gain a high level of visibility into the contractors’ crew transportation systems, but maintaining the current level of visibility through certification could add schedule pressures. For example, due to NASA’s acquisition strategy for this program, its personnel are less involved in the testing, launching, and operation of the crew transportation system. And while the program has developed productive working relationships with both contractors, the level of visibility that the program had required thus far had also taken more time than the program or contractors anticipated.
Ultimately, the program has the responsibility for ensuring the safety of U.S. astronauts, and its contracts give it deference to determine the level of visibility required to do so. Moving forward though, we found that the program office could face difficult choices about how to maintain the level of visibility it feels it needs without adding to the program’s schedule pressures. In order to ensure that the United States had continued access to the ISS if the Commercial Crew Program’s contractors experienced additional schedule delays, we recommended that the NASA Administrator develop a contingency plan for maintaining a presence on the ISS beyond 2018, including options to purchase additional Russian Soyuz seats, and report to Congress on the results. NASA concurred with this recommendation, and in February 2017, NASA executed a contract modification to procure an option for three crewmember seats from Boeing on the Russian Soyuz vehicle. Our analysis found that these seats represented a contingency plan for U.S. access to the ISS through 2019. In April 2017, NASA informed the Congress of this action.

Both Contractors Have Made Progress but Continue to Experience Schedule Delays

Contractors Continue to Advance Development of Their Crew Transportation Systems

Both Boeing and SpaceX have continued to make progress finalizing their designs and building hardware as they work toward final certification of their crew transportation systems, since we last reported in February 2017. Each contractor’s system includes a spacecraft and a launch vehicle with supporting ground systems. The contractors are also manufacturing test articles and flight spacecraft to support the uncrewed and crewed flight tests. The contractors plan to use the test articles to demonstrate system performance and the flight spacecraft to demonstrate their ability to meet contract requirements.
As table 1 shows, these test articles and flight spacecraft are currently in varying stages of completion—some are completed and in testing while others are still early in the manufacturing phase. Should any issues arise during integration and test or the flight tests planned for 2018, the contractors may have to complete rework on the spacecraft already under construction.

Schedule Delays Continue, and Risks Remain to Final Certification Dates

The contractors have notified NASA that final certification dates have slipped to the first quarter of calendar year 2019 and, through our ongoing work, we have identified three key risk areas that could further delay certification of each contractor’s crew transportation system. These areas are (1) the contractors’ aggressive schedules, (2) programmatic and safety risks, and (3) the Commercial Crew Program’s workload. These are consistent with the challenges we found facing the contractors and program in our February 2017 report. Aggressive schedules. Since the award of the current Commercial Crew contracts in September 2014, the program, Boeing, and SpaceX have all identified the contractors’ delivery schedules as aggressive. Program officials told us that, from the outset, they knew delays were likely due to the developmental nature of the program. Multiple independent review bodies—including the program’s standing review board, the Aerospace Safety Advisory Panel, and the NASA Advisory Council-Human Exploration and Operations committee—also noted the aggressiveness of the contractors’ schedules as they move toward certification. In February 2017, we found that both contractors had notified NASA that they would not be able to meet the 2017 final certification dates originally established in their contracts and expected final certification to be delayed until 2018. Based on our ongoing work, we found that the contractors have notified NASA that these dates have slipped further to the first quarter of calendar year 2019.
Figure 2 shows the original Boeing and SpaceX contract schedule and the current proposed schedule for each contractor. However, the extent to which these schedules represent an accurate estimate of each contractor’s final certification date is unclear for the following two reasons:

1. Each contractor provides schedule updates to the Commercial Crew Program at quarterly status reviews, and the dates frequently change. The program has held 12 quarterly reviews since each contract was awarded. At these quarterly reviews, Boeing has six times, and SpaceX nine times, reported a delay to at least one key event identified in the timeline above.

2. The Commercial Crew Program is tracking risks that both contractors could experience additional schedule delays and, based on our ongoing work, we found that the program’s own analysis indicates that certification is likely to slip into December 2019 for SpaceX and February 2020 for Boeing.

Each month, the program updates its schedule risk analysis, based on the contractors’ internal schedules as well as the program’s perspectives and insight into specific technical risks. The Commercial Crew Program manager stated that differences between the contractors’ proposed schedules and the program’s schedule risk analysis include the following:

The contractors are aggressive and use their schedule dates to motivate their teams, while NASA adds additional schedule margin for testing.

Both contractors assume an efficiency factor in getting to the crewed flight test that NASA does not factor into its analysis.

The program manager explained further that the program meets with each contractor monthly to discuss schedules and everyone agrees to the relationships between events in the schedule even if they disagree on the length of time required to complete events.
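The report does not describe the mechanics of the program's schedule risk analysis. As a hedged illustration only, a common generic approach to rolling uncertain task durations into a distribution of completion dates is a Monte Carlo simulation over a serial schedule; the event names and durations below are entirely invented and are not NASA's inputs:

```python
# Generic Monte Carlo schedule-risk sketch (NOT NASA's actual method).
# Each remaining event gets a (minimum, most likely, maximum) duration in
# months; all values here are hypothetical.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

events = [
    (2, 3, 6),    # e.g., qualification testing (invented)
    (4, 6, 10),   # e.g., uncrewed and crewed flight tests (invented)
    (1, 2, 4),    # e.g., certification review (invented)
]

def one_run() -> float:
    # Triangular distributions are a common choice for schedule risk inputs.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in events)

runs = sorted(one_run() for _ in range(10_000))
p50 = runs[len(runs) // 2]          # median completion time
p80 = runs[int(len(runs) * 0.8)]    # 80th-percentile completion time
print(f"50th percentile: {p50:.1f} months, 80th percentile: {p80:.1f} months")
```

Disagreements like those described above (contractor efficiency factors versus added NASA margin) amount to different duration inputs, which shift these percentiles.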
The program manager added, however, that she relies on her prior experience for a better sense of schedule timeframes as opposed to relying on the contractors' schedules. While NASA has a fixed-price contract with both SpaceX and Boeing, there are consequences to the delays to date and the lack of certainty surrounding the final certification date. The United States has spent tens of billions of dollars to develop, assemble, and operate the ISS over the past two decades, and NASA relies on uninterrupted crew access to help maintain and operate the station itself and conduct the research required to enable human exploration in deep space and eventually Mars, among other science and research goals. To ensure uninterrupted access to the ISS through 2019, which includes launch and return of the astronauts, NASA purchased five seats on the Soyuz spacecraft through Boeing for an undisclosed value. Boeing obtained these seats through a legal settlement with the Russian firm RSC Energia, which manufactures the Soyuz. The NASA Office of Inspector General found in its annual report on NASA's top management and performance challenges that if the Commercial Crew Program experiences additional delays, NASA may need to buy additional seats from Russia to ensure a continued U.S. presence on the ISS. Further, the ISS is planned to be operational through 2024. Unless there is a decision to extend the ISS's operational life, additional delays by Boeing and SpaceX may lessen NASA's return on investment with the contractors. We will continue to monitor this as part of our ongoing work. Programmatic and safety risks. In addition to the challenges posed by their aggressive schedules, Boeing and SpaceX face other risks that will need to be addressed to support their certification. This includes the contractors' ability to meet the agency's requirements related to the safety of their systems.
These risks are not unusual; there are inherent technical, design, and integration risks in all of NASA's major acquisitions, as these projects are highly complex and specialized and often push the state of the art in space technology. The Commercial Crew Program monitors risks through two lenses—programmatic risks potentially affect the program's cost and schedule or the performance of the crew transportation system, and safety risks could elevate the potential for the loss of crew. SpaceX Risks Similar to our findings in February 2017, our ongoing work indicates that the Commercial Crew Program's top programmatic and safety risks for SpaceX are, in part, related to ongoing launch vehicle design and development efforts. SpaceX must close several of the program's top risks related to its upgraded launch vehicle design, the Falcon 9 Block 5, before it can be certified for human spaceflight. Included in this Block 5 design is SpaceX's redesign of the composite overwrap pressure vessel. SpaceX officials stated the new design aims to eliminate risks identified in the older design, which was involved in an anomaly that caused a mishap in September 2016. Separately, SpaceX officials told us that the Block 5 design also includes design changes to address cracks in the turbine of its engine identified during development testing. NASA program officials told us that they had informed SpaceX that the cracks were an unacceptable risk for human spaceflight. SpaceX officials told us that they have made design changes, captured in this Block 5 upgrade, that did not result in any cracking during initial life testing. However, this risk will not be closed until SpaceX successfully completes qualification testing in accordance with NASA's standards without any cracks. SpaceX officials stated they expect this testing to be completed in the first quarter of calendar year 2018.
Finally, both the program and a NASA advisory group consider SpaceX's plan to fuel the launch vehicle after the astronauts are on board the spacecraft to be a potential safety risk. SpaceX's perspective is that this operation may be a lower risk to the crew. To better understand the propellant loading procedures, the program and SpaceX agreed to demonstrate the loading process five times from the launch site in the final crew configuration prior to the crewed flight test. Boeing Risks Our ongoing work indicates that Boeing is mitigating several risks in order to certify its crew transportation system, including challenges related to its abort system performance, parachutes, and its launch vehicle. Boeing is addressing a risk that its abort system, which it needs for human spaceflight certification, may not meet the program's requirement to have sufficient control of the vehicle through an abort. In some abort scenarios, Boeing has found that the spacecraft may tumble, which could pose a threat to the crew's safety. To validate the effectiveness of its abort system, Boeing has conducted extensive wind tunnel testing and plans to complete a pad abort test in April 2018. Boeing is also addressing a risk that during re-entry to the Earth's atmosphere, a portion of the spacecraft's forward heat shield may recontact and damage the parachute system. NASA's independent analysis indicates that this may occur even if both parachutes that pull the forward heat shield away from the spacecraft deploy as expected. Boeing's analysis indicates the risk exists only if one of the two parachutes does not deploy as expected. If the program determines this risk is unacceptable, Boeing would need to redesign the parachute system, which the program estimates could result in at least a 6-month delay.
Finally, one of the program's top programmatic and safety concerns is that it may not have enough information from Boeing's launch vehicle provider, United Launch Alliance, to assess whether the launch vehicle prevents or controls cracking that could lead to catastrophic failures. The program and Boeing are in the process of negotiating next steps. Program Safety Risk The Commercial Crew Program has identified its own and its contractors' ability to meet a crew safety requirement as one of its top risks. NASA established the "loss of crew" metric as a way to measure the safety of a crew transportation system. The metric captures the probability of death or permanent disability to one or more crew members. Under each contract, the current loss of crew requirement is 1 in 270, meaning that the contractors' systems must carry no more than a 1 in 270 probability of incurring loss of crew. Near the end of the Space Shuttle program, the probability of loss of crew was approximately 1 in 90. As part of our ongoing work, we continue to work with NASA to understand how the loss of crew requirement was established for the Commercial Crew Program. Program officials told us that Commercial Crew is the first NASA program that the agency will evaluate against a probabilistic loss of crew requirement. They said that if the contractors cannot meet the loss of crew requirement at 1 in 270, NASA could still certify their systems by employing operational mitigations. They said this would entail potentially increased risk for the crew, or greater uncertainty about the level of that risk. Program officials told us their main focus is to work with the contractors to ensure that the spacecraft designs are robust from a safety perspective. The loss of crew metric and the associated models used to measure it are tools that help achieve that goal. For example, Boeing told us that in early 2016, it needed to identify ways to reduce the mass of its spacecraft.
As Boeing found opportunities to reduce the spacecraft mass, the program stated that it had to consider how implementing those design changes would affect its loss of crew analysis in addition to compliance with other performance and safety requirements. According to the program, it is working with both contractors to address the factors that drive loss of crew risk through design changes or additional testing to gain more information on the performance and reliability of systems. As part of our ongoing work, we will continue to assess the extent to which the contractors are meeting this requirement and what tools the program and NASA will use to determine if the contractors meet the requirement. Program office workload. In February 2017, we found that the Commercial Crew Program was using contractually defined mechanisms to gain a high level of visibility into the contractors' crew transportation systems, but also found that the program's workload was an emerging schedule risk. At that time, program officials told us that one of their greatest upcoming challenges would be to keep pace with the contractors' schedules so that the program does not delay certification. Specifically, they told us they were concerned about an upcoming "bow wave" of work because the program must complete two oversight activities—phased safety reviews and verification closure notices—concurrently in order to support the contractors' design certification reviews, uncrewed and crewed flight test missions, and final certification. The Commercial Crew Program is working to complete its three-phased safety review, which will ensure that the contractors have identified all safety-critical hazards and implemented associated controls, but it is behind schedule. Both the contractors and the program have contributed to these delays.
In phase one, Boeing and SpaceX identified risks in their designs and developed reports on potential hazards, the controls they put in place to mitigate them, and explanations for how the controls will mitigate the hazards. In phase two, which is ongoing, the program reviews and approves the contractors’ hazard reports, and develops strategies to verify and validate that the controls are effective. In phase three, the contractors plan to conduct the verification activities and incrementally close the reports. The Commercial Crew Program’s review and approval of the contractors’ hazard reports have taken longer than planned. The program originally planned to complete phase two in early 2016, but through our ongoing work, we have found that as of October 2017, neither contractor had completed this phase. At that time, Boeing had completed 90 percent and SpaceX had completed 70 percent of the Phase 2 reports. The Commercial Crew Program’s verification closure notice process, which is used to verify that the contractors have met all requirements, is one of the other key oversight activities and potential workload challenges for the program. The program is completing that process concurrently with the phased safety reviews. The verification closure process is initiated by the contractor when it provides the program with data and evidence to substantiate that it has met each requirement, and is completed when the program has reviewed and approved the contractor’s evidence to verify that each requirement has been met. The Commercial Crew Program must also approve a subset of verification closure notices before key tests or milestones can occur. For example, the ISS requirements and a portion of the Commercial Crew Program requirements must be met before Boeing and SpaceX’s uncrewed flights to the ISS, which are currently planned for the third quarter of 2018. 
The program's ability to smooth its workload is limited because the contractors generally control their development schedules. In February 2017, we found, however, that proposed changes to the Boeing and SpaceX schedules could help alleviate some of the concurrency between the program's phased safety reviews and verification closure process. We will continue to monitor these efforts as part of our ongoing work. In conclusion, Boeing and SpaceX continue to make progress developing crew transportation systems to help the United States re-establish its domestic ability to provide crew access to the ISS. But when the current phase of the Commercial Crew Program began, there was widespread acknowledgment that the contractors' development and certification schedules were aggressive, and those anticipated schedule risks have now materialized. Further, programmatic and safety risks remain, and schedules that frequently change make the final certification date uncertain. Delays and uncertain final certification dates raise questions about whether the United States will have uninterrupted access to the International Space Station beyond 2019, and may lessen NASA's return on investment with the contractors. We look forward to continuing to work with NASA and this subcommittee as we assess the contractors' and program's progress to final certification. Chairman Babin, Ranking Member Bera, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Cristina T. Chaplain, Director, Acquisition and Sourcing Management at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
GAO staff who made key contributions to this statement include Molly Traci, Assistant Director; Susan Ditto; Lisa Fisher; Laura Greifner; Juli Steinhouse; Roxanna Sun; and Kristin Van Wychen. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Since the Space Shuttle was retired in 2011, the United States has been relying on Russia to carry astronauts to and from the space station. NASA's Commercial Crew Program is facilitating private development of a domestic system to meet that need safely, reliably, and cost-effectively before the seats it has contracted for on a Russian spacecraft run out in 2019. In 2014, NASA awarded two firm-fixed-price contracts to Boeing and SpaceX worth a combined total of up to $6.8 billion to develop crew transportation systems and conduct initial missions to the space station. In February 2017, GAO found that both contractors had made progress, but their schedules were under mounting pressure. This statement provides preliminary observations on the extent to which the contractors and the program are making progress toward meeting NASA's standards for human spaceflight, a process called certification. This statement is based on ongoing work and information contained in GAO's February 2017 report on this program (GAO-17-137). To do this work, GAO analyzed contracts, schedules, and other documentation. What GAO Found Both Boeing and Space Exploration Technologies (SpaceX) are making progress toward their goal of being able to transport American astronauts to and from the International Space Station (ISS). However, both continue to experience schedule delays. Such delays could jeopardize the ability of the National Aeronautics and Space Administration's (NASA) Commercial Crew Program to certify either company's option—that is, to ensure that either option meets NASA standards for human spaceflight—before the seats the agency has contracted for on Russia's Soyuz spacecraft run out in 2019. (See figure.)
GAO's ongoing work has identified three key risks—consistent with challenges reported in February 2017—that could further delay certification of each contractor's crew transportation system: Aggressive schedules—NASA, Boeing, SpaceX, and independent review bodies have all noted that the contractors' schedule plans are aggressive. The anticipated schedule risks have since materialized. Programmatic and safety risks—SpaceX and Boeing are addressing technical risks, which is not uncommon for NASA projects as they often push the state of the art in space technology. In addition, the contractors' systems must meet a standard for crew safety. Additional work remains to determine whether the contractors will meet this requirement. Program office workload—Program officials told GAO that one of their greatest upcoming challenges would be to complete two oversight activities—conducting phased safety reviews and verifying that contractors meet requirements—concurrently. The program's ability to smooth its workload is limited, as the contractors generally control their development schedules. In February 2017, GAO found that proposed schedule changes could alleviate some overlap. Delays and uncertain final certification dates raise questions about whether the United States will have uninterrupted access to the ISS after 2019, and may lessen NASA's return on investment with the contractors. GAO will continue to assess the contractors' and program's progress. What GAO Recommends GAO is not making any new recommendations. In February 2017, GAO recommended that NASA develop a contingency plan to maintain access to the ISS beyond 2018, when its contract with Russia for seats on the Soyuz was scheduled to end. NASA agreed with this recommendation and purchased Soyuz seats through 2019.
Background VBA Disability Benefits Process VA pays monthly disability compensation to veterans with service-connected disabilities according to the severity of the disability. VA's disability compensation claims process starts when a veteran submits a claim to VA (see fig. 1). A claims processor then reviews the claim and helps the veteran gather the relevant evidence needed to evaluate the claim. Such evidence includes the veteran's military service records, medical exams, and treatment records from VHA medical facilities and private medical service providers. If necessary to provide support to substantiate a claim, VA will also provide a medical exam for the veteran, either through a provider at a VHA medical facility or through a VBA contractor. According to VBA officials, VBA monitors a VHA facility's capacity to conduct exams and in instances when the facility may not have capacity to conduct a timely exam, VBA will send an exam request to one of its contractors instead. For exams assigned to a VBA contractor, VBA sends an exam request to the contractor, who then rejects or accepts the exam request. Once the contractor accepts the exam, it assigns a contracted examiner to conduct the exam and complete an exam report designed to capture essential medical information for purposes of determining entitlement to disability benefits. The contractors send the completed report to VBA, which uses the information as part of the evidence to evaluate the claim and determine whether the veteran is eligible for benefits. According to contractor officials, if they need clarification on an exam request, they might reject the request and send it back to VBA, which, in turn, will revise the request before sending it back to the contractor. Use of Contracts to Complete Disability Compensation Exams VA has used contracted examiners—through VBA and VHA contracts—to supplement VHA-provided exams for at least two decades.
VBA began using contractors to conduct disability compensation exams at 10 VBA regional offices in the late 1990s through a pilot program authorized under federal law. In 2014, federal law authorized VBA to expand the pilot to all its regional offices starting in fiscal year 2017. Before fiscal year 2017, VHA and VBA both administered disability exam contracts. However, since fiscal year 2017, all such contracts have been administered by VBA and none have been administered by VHA. VBA awarded 12 contracts to five contractors to begin providing exams in 2016. According to VA officials, performance under 10 of these contracts was delayed until late September 2017 due, in part, to multiple contract bid protests. During this delay, VA officials told us that the agency awarded short-term contracts to allow existing contractors to perform exams until the bid protests were resolved. VBA's current contracts cover exams for veterans in five U.S. geographic districts, one district for overseas exams, and one district for servicemembers participating in special programs, such as the Benefits Delivery at Discharge and Integrated Disability Evaluation System programs (see fig. 2). VBA awarded two contracts in each of its five U.S. geographic districts and one contract each in districts 6 and 7, which include special programs and overseas exams, respectively. VBA also awarded two additional short-term contracts in December 2017 to help address workload issues in districts 1-5. With the addition of these two contracts, VBA has a total of 14 contracts currently in place. According to agency officials, because VBA wanted to update performance measures for its contractors, VA issued a Request for Proposals in May 2018 with plans to award new contracts in fall 2018 for its U.S. geographic districts. Until it awards the new contracts, VBA will continue to use the current contracts.
According to VBA officials, VA plans to continue using VBA contractors in the long term to conduct exams that exceed VHA's capacity. In recent years, VBA contractors have completed an increasing number of exams, from roughly 178,000 in fiscal year 2012 to almost 600,000 in fiscal year 2017, according to VBA-provided data. VA estimates that in fiscal year 2019, contractors will complete over 1.8 million exam reports for almost 800,000 veterans. However, VBA officials noted that future projections for contracted exams might change based on the need to supplement VHA capacity to ensure timely exams. VBA Contract Exam Office and Requirements for Contractors In 2016, VBA established an exam program office to manage and oversee contractors, monitor their performance, and ensure that they meet contract requirements. For example, the contracts require that contractors develop plans outlining how they will ensure examiners are adequately trained. Contractors are also required to provide VBA with monthly exam status reports, which include the number of canceled, rescheduled, and completed exams, among other things. VBA also has an office dedicated to completing quality reviews of contractors' exam reports, which are used to assess contractor performance. The contracts require that VBA conduct quality reviews of a sample of contractors' exam reports. According to VA documents and officials, the results of these quality reviews, and contractor timeliness scores in completing exams, are included in quarterly performance reports. The contracts require that VBA provide these performance reports to the contractors. VBA holds quarterly meetings with the contractors to discuss their quarterly performance based on these reports.
VBA Licensing and Training Requirements for Contracted Examiners VBA contracts require that contracted examiners have full, current, valid, and unrestricted licenses, and current and valid State Medical Board certifications, before conducting any exams—the same requirements that apply to VHA medical providers. According to agency officials, VBA also requires that contracted examiners complete the same training that VHA providers must take before they can conduct any disability medical exams. The required training consists of a set of online courses developed by VHA's Disability Medical Assessment Office, such as courses on VA's disability claims process and one on completing exam reports. In addition, examiners who provide some specialized exams, such as posttraumatic stress disorder exams and traumatic brain injury exams, are required to take additional courses. In addition to VHA-developed training, VBA contracts require that contractors provide examiners with a basic overview of VA programs. VBA Quarterly Contractor Performance Targets for Quality and Timeliness The contracts also outline quality and timeliness performance targets that VBA uses to assess contractor performance (see table 1). VBA can use contractors' performance in meeting these targets to determine financial incentives. VBA's performance measures are as follows: Contractor quality: VBA calculates quality scores for each contractor based on a sample of exam reports that VBA's quality office selects for review on a quarterly basis for each contract. According to VBA documents, the quality score represents the percentage of exam reports reviewed that had no errors as measured against specific criteria. Errors identified in quality reviews could range from incomplete information (e.g., an examiner's medical specialty information is not listed on exam report) to completing the wrong exam report for a given condition.
Contractor timeliness: VBA calculates timeliness scores for each contractor based on the average timeliness of all exams completed in a given quarter for each contract. VBA measures timeliness as the number of calendar days between the date the contractor accepts an exam request and the date the contractor initially sends the completed exam report to VBA. VBA Reported Contractors Missed Exam Quality Targets, and VBA Could Not Accurately Measure Performance On Timeliness Targets Contractors Missed Quality Targets in First Half of 2017; More Recent Data Are Not Yet Available for Most Districts VBA reported that almost all contractors missed VBA’s quality target of 92 percent in the first half of calendar year 2017, and more recent data are not yet available for most districts. More specifically, VBA-determined quarterly quality scores—the percentage of disability compensation exam reports with no errors as measured against VBA criteria—for the seven contracts used by VBA in calendar year 2017 showed that contractors were frequently well below the quality target. Quarterly quality scores ranged from 62 percent to 92 percent (see fig. 3). According to VBA data, only one contractor’s quality score in one quarter met VBA’s target of 92 percent while the vast majority of contractors’ scores were classified by VBA as “unsatisfactory” performance. VBA has not yet completed all of the quality reviews used to calculate contractor quality scores, particularly for exams that were completed in the second half of 2017. VBA is hiring and training additional quality review staff to complete these reviews and help manage the workload moving forward. According to VBA officials, staff will complete the remaining quality reviews and finalize the quality scores for 2017 by December 2018. 
VBA Could Not Accurately Measure Contractor Timeliness Against Targets, but Our Aggregate Analysis Shows About Half of Exams Were Completed Within 20 Days According to agency officials, VBA has not calculated contractor timeliness as it is outlined in the contracts. VBA measures timeliness as the number of days between the date the contractor accepts an exam request and the date the contractor initially sends the completed exam report to VBA. According to officials, this measure does not include any time contractors may spend correcting an exam report returned to them by VBA. Returned exam reports are few in number, VBA officials said. However, once a contractor submitted a corrected or clarified exam report, VBA officials said the exam management system did not preserve the date the exam was initially completed. At that point, the system only tracked the date VBA received the corrected or clarified report. As a result, the number of days in VBA’s system could include time contractors took to correct any issues identified by VBA after submitting the initial report. While VBA’s data does not allow it to reliably assess contractor performance against the targets in the contracts, VBA’s data can be used to measure timeliness in other ways. For example, we were able to use the data to calculate the entire amount of time it took to complete exams, which includes time contractors took to correct any issues identified by VBA. As such, the results of our analysis should not be interpreted as reflecting contractor compliance with timeliness targets under the contracts. However, to provide timeframes that are similar to VBA’s targets, we chose 20 days for districts 1-5 and 30 days for districts 6-7 as timeframes for our analysis. Moreover, we analyzed timeliness across all contractors rather than for individual contractors. 
In particular, we analyzed VBA data on 646,005 contracted exams completed from February 2017 to January 2018, which included 575,739 exams in districts 1-5 and 70,266 exams in districts 6-7. Our analysis of VBA data shows that 53 percent of exams were completed within 20 days for districts 1-5, and 56 percent were completed within 30 days for districts 6-7. However, some exams took at least twice as long to complete. For example, 12 percent of exams in districts 1-5 took more than 40 days to complete (see fig. 4). Contractor officials described a number of reasons why exams might take longer in some cases. For example, they said that scheduling delays might occur due to a veteran’s availability or severe weather, and that it can be challenging to find specialists for certain exam types in rural locations. Our analysis of timeliness focused on exams that were completed, and it did not include exams that have been requested and not yet completed by a contractor. For example, a contractor may have accepted an exam request from VBA, but not yet scheduled an appointment with the veteran. Alternatively, a contractor may have conducted an exam with the veteran, but not yet sent the exam report to VBA. As of late June 2018, VBA-calculated data showed that 87,768 requested exams had not yet been completed, including 37,077 exams that had already exceeded VBA’s timeliness targets. Tracking these exams is important because a large volume of such exams could ultimately increase the amount of time veterans have to wait for their claims to be processed. VBA officials stated that the agency closely monitors contractors’ workloads and helps expedite requested exams that have exceeded VBA’s targets for completing exams. In addition, VBA included a performance measure in its May 2018 Request for Proposals to track the percentage of requested exams that have been with a contractor for more than seven days. 
Such a measure could help VBA identify whether contractors have a backlog of exams and better assess whether veterans are receiving timely exams. Delayed Quality Reviews and Performance Reports, and Data Limitations, Hinder VBA’s Monitoring of Contractors VBA Identified Some Contractor Performance Problems but Was Delayed in Completing Quality Reviews and Performance Reports VBA Identification of Performance Problems VBA’s contract exam program office, primarily through its Contracting Officer’s Representatives (COR), has identified some contractor performance problems, such as delays in completing specific exams, through its oversight of contractor performance. This oversight includes day-to-day monitoring of contractor workloads and frequent contact with contractor officials. Through such contact and reviews of contractors’ daily and weekly exam status updates, the CORs work with contractor officials to identify ways to expedite disability compensation exams for veterans who have been waiting longer than VBA’s 20-day or 30-day targets. In addition, VBA contract quality staff who review samples of contractor exam reports hold teleconferences with the CORs and contractor officials to provide feedback and discuss issues arising from their reviews, such as specific types of errors. The VBA contract exam program office also oversees and manages contractors through supplemental guidance memos, contractor site visits, and reviews of veteran customer satisfaction surveys. For example, in November 2017, VBA sent a supplemental guidance memo to all contractors to clarify guidance on conducting and documenting hearing loss exams. Further, VBA has conducted site visits to all five contractors’ headquarters or clinic sites since September 2017. Headquarters visits include reviews of contractors’ procedures, such as those for assigning exam requests, and contractors’ information systems, such as those for tracking the status of exams. 
VBA visits to contractor clinics focus on facility issues, such as accessibility and safety. According to VBA officials, the CORs also review reports on satisfaction surveys completed by veterans after their exam appointments to identify veterans’ concerns regarding contractors and to follow up with contractors, when needed. For example, in response to one veteran’s survey comment regarding a contracted examiner who did not show up to conduct a scheduled exam, VBA officials told us they followed up with the contractor and learned that the examiner’s car broke down. According to VBA, it reimbursed the veteran for round-trip transportation costs to the clinic. Additionally, VBA’s contract quality review staff have conducted special focused reviews to investigate concerns raised by veterans and by staff in VBA regional offices and VHA medical facilities. For example, VBA conducted a review of one contracted examiner who had high rates of diagnosing severe posttraumatic stress disorder. After reviewing this examiner’s reports, VBA found their overall quality to be poor. As a result, VBA requested that the contractor no longer use this examiner. In addition to identifying and addressing problems with individual exams and examiners, VBA has identified broader challenges faced by contractors in meeting VBA’s demand for exams and providing timely reports. For example, VBA identified two contractors who were not prepared to perform all of their assigned exams because they did not have enough examiners, particularly in rural locations, which led to delays and a backlog of exam requests, according to VBA officials. VBA officials described how they worked with these contractors over several months to adjust and closely monitor the volume of exams sent to the contractors to address the backlog. 
However, according to VBA officials, by December 2017, VBA determined that one of the contractors was not able to meet the demand for exams, and the agency stopped sending new exam requests to this contractor. According to VBA, by late June 2018, it had discontinued all work with this contractor. VA officials said that to obtain additional exam capacity to make up for the two contractors’ shortages, they awarded short-term contracts in December 2017 to two other contractors who were providing exams in other VBA districts. VBA Delays in Assessing Quality and Completing Reports VBA has not completed all required quarterly quality reviews and accompanying quarterly performance reports on contractors, according to VBA officials. These reviews and reports are key components to effectively assessing contractor performance in a timely manner. Specifically, in late June 2018, VBA officials said that they had conducted almost all their quality reviews for contracted exams completed in districts 1-5 during the second half of 2017, but that they needed to finalize the quality scores. They also said that they were beginning their quality reviews for contracted exams completed in 2018. At the time of our review, VBA had released one quarterly performance report for the fourth quarter of calendar year 2017, and officials said they were drafting others. VBA officials attributed delays in completing quality reviews and quarterly performance reports primarily to a lack of VBA quality review staff. The quarterly performance reports provide contractors with information on their performance against VBA quality and timeliness targets. For example, prior reports included detailed breakouts of quality errors by type and suggestions for performance improvements. As officials of one contractor said, delays in receiving quarterly performance reports limit VBA’s ability to provide contractors with timely and valuable feedback they can use to improve the quality of their exams. 
The delay in completing the quarterly reviews and reports also has implications for VBA’s ability to allocate exam requests across contractors and administer potential financial incentives across contractors. More specifically, VBA can use performance data to help determine how to allocate exams in each district that has two contractors, as outlined in the contracts. For example, VBA can decide to allocate more exams to the contractor with higher performance results. Further, the contracts outline how VBA can use performance data to administer financial incentives linked to performance targets. For example, VA is to provide a bonus to a contractor who meets or exceeds the 92 percent quality standard for a quarter, and meets or exceeds the 20- or 30-day timeliness standard. However, because of its delays in completing quality reviews and the lack of reliable data on contractor timeliness, VA has not yet administered these incentives. VA officials told us that the agency will determine if it will administer the 2017 incentives after it completes its performance assessments of contractors. VBA officials said they are currently hiring more staff to address the lag in quality reviews and subsequent reports to contractors, as well as to provide more oversight of contractors. At the time of our review, VBA did not have its authorized level of 15 quality analysts and 2 senior quality reviewers, but VBA officials said that they expected to complete hiring to bring the quality reviewer staff up to 17 full-time positions by the end of fiscal year 2018. In addition, VBA officials acknowledged that they did not have enough CORs in VBA’s exam program office to oversee the 14 exam contracts (including the two short-term contracts). As of April 2018, VBA officials said the office had 3 CORs, but hiring was expected to bring the number up to 14 by the end of fiscal year 2018. 
VBA officials said that they determined staffing levels for VBA’s contract exam program office—including CORs and exam quality reviewers—based on an assessment of the resources needed to expand the program, among other factors. Although VBA did not provide documentation outlining how it determined its workforce needs, the agency provided us with updated organizational charts in June 2018 demonstrating increased staff levels for the exam program office. VBA’s Data Limitations Hinder Its Ability to Oversee Certain Contract Provisions, and VBA Has Not Conducted Comprehensive Performance Analysis VBA’s lack of reliable data on the status of exams, including insufficient exams—exam reports that VBA returns to contractors to be corrected or clarified—limits its ability to effectively oversee certain contract provisions. VBA officials acknowledged that they could not calculate the number of completed exams that were once marked as insufficient or how long they had remained in that status due to the data limitations of the exam management system the agency used until spring 2018. The contracts require that contractors correct insufficient exams within a certain number of days and bill VBA for these exams at half price. However, VBA’s lack of complete and reliable information on insufficient exams hinders its ability to ensure that either of these requirements is met. VBA officials also indicated that they were unable to fully assess individual contractor timeliness against VBA’s performance targets because the exam management system did not include the date the initial exam report was submitted to VBA, which is needed to calculate timeliness as outlined in the contracts. In March 2018, VBA began implementing a new exam management system designed to collect more comprehensive and accurate information on the status of exams. 
VA documentation on the new system shows that it will include detailed data on insufficient exams, which, according to VBA officials, should allow VBA to track whether contractors are properly discounting their invoices for those exams. However, in June 2018, VBA stated that three of its five contractors did not have complete functionality with VBA’s new exam management system. As a result, VBA officials said the agency still did not have complete data in the new system that would allow it to track insufficient exams. Officials said they were working to address these issues. More broadly, as described in VA system documents, the new system is designed to allow VBA to track more detailed data on exam completion dates and on other points throughout the exam process, such as dates for initial requests for clarification from contractors, and dates when appointments are scheduled. However, VBA is in the early stage of this transition, and agency officials stated that unexpected technical issues have affected communication between the new exam management system and other VBA systems. While they work to resolve the issues, VBA officials said that they have been manually moving some exam requests through the system each day. Further, VBA has not documented how it plans to ensure the additional data is accurate and use it to oversee contractor performance as outlined in the contracts, particularly for insufficient exams. Federal internal control standards state that management should use quality information to achieve key objectives. In addition, management should formulate plans to achieve those objectives. For example, agencies should assess collected data and ensure it is accurate so that it can be used to provide quality information to evaluate performance. 
In the absence of a plan for how it will capture and use data in its new exam management system to assess performance, VBA risks overpaying contractors for insufficient exams and continuing to inaccurately measure contractor timeliness. Further, according to agency officials, VBA has not conducted comprehensive analyses of performance data that would allow it to identify and address higher-level trends and program-wide challenges across contractors, geographic districts, exam types, or other relevant factors. Agency officials told us they have no plans to conduct such analyses. Federal internal control standards state that management should establish and operate monitoring activities and evaluate the results of those activities. In addition, management should evaluate deficiencies both at the individual and aggregate level. While VBA officials acknowledged that higher-level analyses could improve program oversight, they explained that analyzing performance data has been challenging due to the limitations of the exam management system. Thus, VBA has prioritized addressing contractor-specific problems and resolving long-standing pending exams over in-depth analysis of the performance data. However, with the expected improvements provided by VBA's new exam management system and increased staff to manage the program and conduct quality reviews, VBA should be better positioned to conduct analyses of performance data in the future. By conducting higher-level analyses across contractors, geographic districts, exam types, or other relevant factors, VBA could make a more informed assessment of the challenges contractors and examiners face and where additional workload capacity and training may be needed. In addition, better analyses would allow VBA to determine if the contract exam program is achieving its quality and timeliness goals in a cost-effective manner. 
Auditor Verifies Contracted Examiner Licenses, but VBA Does Not Verify Training Completion or Collect Information on Training Effectiveness VBA Uses a Third-Party Auditor to Verify Contracted Examiner Licenses VBA has a third-party auditor who verifies that all active contracted examiners have a current, valid, and unrestricted medical license in the state where they examined a veteran. The auditor provides regular reports of its audits to VBA. Specifically, the auditor verifies the license numbers of all active contracted examiners in the states where they perform VA disability compensation exams; National Provider Identifiers; and any prior or current sanctions or restrictions resulting in a revoked or suspended license at the time of a VA exam. In addition, contractors send VBA monthly reports of examiners’ medical license, specialty, and accreditation based on the contractors’ verification of this information. Every 2 months, VBA sends the auditor a consolidated report of this information covering all five contractors. The auditor verifies examiners’ information in that report before sending a final audit report to VBA, noting if the auditor was or was not able to verify examiners’ licenses. After reviewing the report, VBA contacts the contractors to gather additional information to resolve any issues, and in cases in which licensing requirements are not met, VBA stops using the examiner and offers new exams to veterans who have been seen by the examiner. VBA and auditing firm officials noted that audit results show that almost all examiners have current and valid licenses, and contractors are required to stop using those who do not meet licensing requirements. VBA and auditing firm officials said that issues identified in the audits are usually due to typos or differences in how information is captured across different licensing databases. 
However, based on an audit, VBA provided an example of an examiner with a restricted medical license who had completed exams for one contractor. In this case, VBA notified the contractor, who then stopped using the examiner and said it was taking action to prevent errors in its license verification process from occurring again. In addition, the contractor reimbursed VBA for the cost of exams conducted by the examiner and also offered new exams to veterans who had been seen by the examiner. VBA Relies on Exam Contractors to Verify Training is Completed and Does Not Review Training Records for Accuracy VBA relies on contractors to verify that their examiners complete required training, and agency and contractor officials told us that VBA does not review contractors’ self-reported training reports for accuracy or request supporting documentation, such as training certificates, from contractors. As required by the contracts, contractors must track and maintain records demonstrating each examiner has completed required training. Each of VBA’s five contractors has its own process for ensuring that required training is provided to and completed by their examiners, but generally, contractors export the courses from VA’s online training system into their own online training systems for their examiners to access. The contractors, rather than VBA, access the contractor training systems to verify that examiners have completed the required training before they are approved to conduct exams. When requested by VBA, contractors are required to send VBA reports demonstrating that their examiners have met training requirements. As stated in the latest version of the contracts, contractors must immediately stop using any examiner found to have not completed required training, notify VBA, and re-examine the involved veterans at no cost to VBA, if requested by the agency. 
Although VBA currently does not verify the accuracy of training self-reported by contractors to the agency, VBA officials said that they plan to enhance monitoring through spot checks of training records and a new training system. Specifically, in fiscal year 2019, VBA officials said they plan to start conducting spot checks of some examiners' training records for accuracy and compliance during site visits to contractor headquarters and clinics. However, VBA has not provided details or documentation on these planned checks, such as how it will determine which records to review or the steps it will take to verify the accuracy of training records. VBA officials also said they are planning to develop an online system that would allow VBA to certify that examiners have completed required training, rather than relying on contractors for this information. However, as of July 2018, VBA had yet to determine when this system would be developed and had not documented plans to do so in order to use such information for monitoring training. VBA also said it would hire staff to manage contractor training, but has yet to do so. GAO's prior work has emphasized tracking and other control mechanisms to ensure that all employees receive appropriate training. While VBA said it would enhance its monitoring of training records, documenting and implementing a plan and processes to verify training could help ensure examiners have met training requirements. Without such a plan, VBA risks using contracted examiners who are unaware of the agency's process for conducting exams and reporting the results, which could lead to delays for veterans as a result of poor-quality exams that need to be redone and insufficient exam reports that need to be corrected. 
VBA Does Not Collect Information to Determine if Training Effectively Prepares Examiners VBA does not collect information from contractors or examiners to help determine if required training effectively prepares examiners to conduct high-quality exams and complete exam reports. VBA has provided additional guidance to contractors for some specialty exams. However, VBA identified the need for this guidance only after some contractors requested it in monthly meetings, rather than through VBA efforts to proactively or regularly collect information from contractors or examiners to inform potential changes to training. VBA is considering including a component in the online training system that would collect information on the effectiveness of required training. However, VBA has not outlined additional details on collecting such information. VBA officials said that VBA did not collect such information in the past, in part, because staff were focused on program oversight. To assess progress toward achieving results and to make changes to training if needed, GAO has found that evaluation is a key component of any training program. Given that VBA officials told us that the agency plans to issue new contracts in fall 2018, the number of contracted examiners who are new to VA processes may increase. Thus, collecting and assessing regular feedback on training from contractors and examiners, such as through surveys, discussion groups, or interviews, could help VBA determine if training effectively prepares examiners to conduct exams and complete exam reports. Further, information on the effectiveness of training could supplement data on contractor performance and results from VBA's quality reviews to help assess if additional training courses are needed across contractors or for specific exam types. Conclusions As VBA increasingly relies on contractors to perform veterans' disability compensation exams, it is important that the agency ensures proper oversight of these contractors. 
VBA’s lack of accurate and up-to-date data and reports on contractor performance hampers its ability to oversee the quality and timeliness of exams provided through contractors. VBA’s new exam management system provides opportunities to improve oversight through more comprehensive and accurate data. These improvements might be limited, however, without a plan to use the data to produce the quality information needed by VBA to monitor insufficient exams, ensure it pays contractors the correct amount for those exams, and help it accurately calculate contractor timeliness. Further, the new system provides an opportunity for VBA to conduct analyses that could identify high-level trends and challenges facing the program across contractors and districts, such as delays in completing exams in specific parts of the country or contractor performance issues related to specific exam types. Despite these capabilities, VBA has not outlined plans for using improved information in this manner. Without doing so, the agency may miss opportunities to improve the program and, ultimately, its service to veterans. VBA could better prepare contracted examiners for their role by taking actions to ensure required training has been completed and by collecting information to assess and improve training. Such actions could help improve the quality of exams and exam reports, which could mitigate the need for exam rework and, ultimately, delays in determining veterans’ benefits. With VBA planning to award new contracts and potentially more new contracted examiners coming on board, verifying that required training is completed and collecting information on the effectiveness of training are critical. As VA continues to rely on contracted examiners, it is important that the agency is well positioned to carry out effective oversight of contractors to help ensure that veterans receive high-quality and timely exams. 
Recommendations for Executive Action We are making the following four recommendations regarding contracted disability compensation exams to VA. The Under Secretary for Benefits should develop and implement a plan for how VBA will use data from the new exam management system to oversee contractors, including how it will capture accurate data on the status of exams and use it to (1) assess contractor timeliness, (2) monitor time spent correcting inadequate and insufficient exams, and (3) verify proper exam invoicing. (Recommendation 1) The Under Secretary for Benefits should regularly monitor and assess aggregate performance data and trends over time to identify higher-level trends and program-wide challenges. (Recommendation 2) The Under Secretary for Benefits should document and implement a plan and processes to verify that contracted examiners have completed required training. (Recommendation 3) The Under Secretary for Benefits should collect information from contractors or examiners on training and use this information to assess training and make improvements as needed. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of our report to the Department of Veterans Affairs (VA) for its review and comment. VA provided written comments, which are reproduced in appendix II. VA concurred with all our recommendations and described the Veterans Benefits Administration’s (VBA) plans for taking action to address them. Regarding our first recommendation, VA outlined improvements in the information collected through VBA’s new exam management system, and said that VBA is currently testing a mechanism to validate exam invoices submitted by contractors. We noted these improvements to the system in our draft report sent to the agency for comment. 
We maintain that it will be important for VBA to take the next step of developing and implementing a plan for how it will use information from the new system to ensure both accurate timeliness data and proper exam invoicing. Regarding our second recommendation, VA stated that VBA will use improved data in the new exam management system to regularly monitor and assess aggregate performance data, identify error trends, and monitor contractor performance and program-wide challenges. Regarding our third and fourth recommendations, VA stated that VBA plans to develop and implement a training plan for contractors that will include a mechanism to validate that required training has been completed and to assess the effectiveness of this training through feedback from trainees, contractors, and quality review staff in VBA's contract exam program office. VA stated that VBA will use this data to improve the implementation and content of training. VA requested that GAO combine these two recommendations into one. However, we believe they are two distinct recommendations and have kept them as such. VBA could meet the intent of each recommendation with the development and implementation of one plan that covers both training verification and assessment, as outlined in its comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made significant contributions to this report are listed in Appendix III. Appendix I: Additional Information on Selected Methodologies Review of VBA Contracts To evaluate VBA monitoring of contractor performance and VBA oversight of contracted examiners’ qualifications and training, we reviewed relevant federal laws, regulations, and VA guidance on the use of contracted examiners for disability compensation exams. To identify relevant contract provisions and requirements related to contractor performance, monitoring of such performance, licensing, and training, among other areas, we reviewed selected provisions of selected versions of the 12 current VA Medical Disability Examination contracts originally awarded in 2016, of 5 short-term contracts VA awarded in early 2017, and of 2 short-term contracts VA awarded in December 2017. With regard to the 12 current contracts, we reviewed the selected provisions in the originally awarded contract from 2016 and in the most recently amended version of the contract (as provided to us by VBA officials). Based on our review of these two versions of the contract, the selected provisions appeared to remain in place, unless noted otherwise in this report. However, we did not review the various contract modifications that, according to VBA, occurred in the interim period to confirm whether the selected provisions we focused on in our review actually remained in place during the period between the original contract and the most recent amendment. With regard to the 2 short-term contracts awarded in December 2017, we reviewed the selected provisions in the original December contract. According to VBA officials, there have been no subsequent modifications to these short-term contracts. With regard to the 5 short-term contracts awarded in early 2017, we only reviewed selected provisions relating to contractor quality and timeliness performance. 
Thus, any statements in this report relating to other aspects of the contracts are not based on these short-term contracts. Further, we only reviewed such provisions in the originally awarded short-term contract, and we did not review the various contract modifications that, according to VBA, occurred subsequently, to confirm that those provisions remained in place over time. However, we found that those selected provisions were generally in place in all of the various contracts we reviewed. Analysis of VBA Data on the Timeliness of Contracted Exams To answer what is known about the timeliness of VBA contracted exams, we analyzed VBA data on disability compensation exams completed by five contractors between February 2017 and January 2018. VBA’s Office of Performance Analysis and Integrity provided exam-level data that it maintains in the agency’s Enterprise Data Warehouse, including data on the exam request date, the date the contractor accepted the request, the date the contractor completed the exam, and the VBA district where the exam was conducted, among other information. These data were created from data originally collected in VBA’s Centralized Administrative Accounting Transaction System (CAATS), which is the system that VBA used to request exams from contractors until spring 2018. According to VBA officials, the status of exam requests (e.g., pending, completed, cancelled) was not always accurate in CAATS. To create more reliable data and identify the most current information on the status of exams, the Office of Performance Analysis and Integrity identified and replaced missing or incorrect data in CAATS by running checks against other VBA systems, including the Veterans Benefits Management System, which maintains veterans’ benefits claims records. 
We assessed the reliability of the data we received from VBA by conducting electronic testing for missing data and errors, and by interviewing VBA officials about their data collection and quality control procedures. We determined that the data were sufficiently reliable for our purposes of reporting the time it took to complete exams within districts. Our analysis included 646,005 contracted exams completed between February 2017 and January 2018. We selected February 2017 as our starting point because it was the first full month of data available that covered most of VBA's current contractors. To allow for 12 full months of data, we selected January 2018 as our ending point. In addition, we limited our population to include exams that were requested on or after January 13, 2017 in districts 1-5 or on or after April 1, 2016 in districts 6-7, based on the periods of performance in the contracts for those districts. We calculated timeliness at the level of the exam request. We calculated the number of days between the date an exam request was accepted by the contractor and the date the exam report was completed by the contractor. The timeliness values we calculated may include additional time needed to request and receive contractors' corrections or clarifications on previously submitted exam reports. In our report, we refer to these exams as "insufficient exams." VBA officials acknowledged that due to data limitations that the new exam management system is intended to resolve, VBA's CAATS system did not retain data on the number of exams that were once marked as insufficient or how long they remained in that status. While VBA officials acknowledged that this data limitation affects the agency's ability to assess individual contractor timeliness against VBA's performance targets outlined in the contracts, the limitation did not prevent us from analyzing the timeliness of contracted exams overall. 
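The day-count and target-share computation described above can be sketched as follows. The record layout and sample values are illustrative stand-ins, not VBA's actual data schema; only the 20-day and 30-day targets come from the report.

```python
from datetime import date

# Illustrative exam records: (district, date the contractor accepted the
# request, date the contractor completed the exam report). This tuple
# layout is a hypothetical stand-in for the warehouse extract.
exams = [
    (3, date(2017, 5, 1), date(2017, 5, 18)),   # districts 1-5: 17 days
    (2, date(2017, 6, 10), date(2017, 7, 25)),  # districts 1-5: 45 days
    (6, date(2017, 8, 1), date(2017, 8, 29)),   # districts 6-7: 28 days
]

def days_to_complete(accepted, completed):
    """Days from contractor acceptance to completed exam report,
    the interval used as the timeliness measure."""
    return (completed - accepted).days

def share_within_target(records):
    """Percent of exams meeting the 20-day target (districts 1-5)
    or the 30-day target (districts 6-7)."""
    target = lambda district: 20 if district <= 5 else 30
    met = sum(1 for dist, acc, comp in records
              if days_to_complete(acc, comp) <= target(dist))
    return 100.0 * met / len(records)

print(round(share_within_target(exams), 1))  # 2 of the 3 sample exams meet their target
```

Because the report's percentages were calculated within district groups rather than overall, an actual analysis would partition the records by district before applying a computation of this shape.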
The overall timeliness values we calculated represent the total time taken to complete exams regardless of whether additional time was needed for corrections. To put the timeliness values we calculated in context, we calculated the percentage of exams that were completed within VBA's timeliness targets of 20 days for districts 1-5 and 30 days for districts 6-7 for the entire 12-month period of our analysis. We also calculated the percentage of exams that were completed within other timeframes (e.g., 21-40 days, more than 40 days). According to the contracts, contractors are not expected to complete all exams within the timeliness target, but rather should meet the timeliness target on average in a given quarter, so our analysis was different from one that VBA might conduct in order to determine contract compliance. Because VBA does not retain detailed data on exam completion dates necessary to assess contractor performance against VBA's timeliness targets, and because we calculated timeliness across contractors, the percentages we calculated do not represent an assessment of whether contractors met VBA's timeliness targets. GAO did not conduct a legal analysis of the various contractors' compliance with the contract requirements. Alternate Timeliness Values Given that the start of VBA's timeliness measure is the date the contractor accepts the exam request (rather than the date VBA requests the exam), we calculated alternate timeliness values to account for potential delays in accepting exam requests. VBA officials stated that VBA requests contractors accept or reject exam requests within 3 days. For all exam requests that contractors took more than 3 days to accept, we calculated alternate totals that included the additional days. For example, if a contractor took 5 days to accept the exam request and completed the exam 20 days later, we calculated an alternate total of 22 days to complete the exam. 
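The alternate-value adjustment follows the worked example above (5 days to accept plus 20 days to complete counts as 22 days). The function below is an illustrative sketch of that arithmetic, not VBA's or GAO's actual code; the function and parameter names are assumed.

```python
def alternate_days(days_to_accept, days_accept_to_complete, grace=3):
    """Alternate timeliness total: completion time plus any acceptance
    delay beyond the 3-day window VBA asks contractors to meet."""
    overage = max(0, days_to_accept - grace)
    return days_accept_to_complete + overage

# The report's example: accepted after 5 days, completed 20 days later.
print(alternate_days(5, 20))   # 22
# Accepted within the 3-day window: no adjustment is added.
print(alternate_days(2, 20))   # 20
```

Applying this adjustment only to the roughly 18 percent of requests accepted after 3 days is what lowers the adjusted within-target shares to about 50 and 53 percent, as described in the next paragraph of the appendix.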
We used these alternate values to calculate adjusted percentages for each category presented in Figure 4 of our report. For example, using the alternate timeliness values, about 50 percent of exams in districts 1-5 would have been completed within 20 days and 53 percent in districts 6-7 would have been completed within 30 days, rather than the respective 53 percent and 56 percent shown in Figure 4. Moreover, we found that about 82 percent of exam requests during our period of analysis were accepted within 3 days. Pending Exams To report more recent data on exams that were accepted but not yet completed by contractors—pending contracted exams—VBA provided aggregate data on the number of pending exams as of June 25, 2018. For example, for districts 1-5, it provided data on the number of exams that had been pending for 20 days or fewer, 21-40 days, 41-60 days, 61-100 days, and more than 100 days. We calculated percentages based on the VBA-provided totals. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Elizabeth Curda, (202) 512-7215 or [email protected]. Staff Acknowledgments In addition to the contact named above, Nyree Ryder Tee (Assistant Director); Teresa Heger (Analyst-in-Charge); Alex Galuten; Justin Gordinas; and Greg Whitney made key contributions to this report. Also contributing to this report were James Bennett, Matthew T. Crosby, Teague Lyons, Sheila R. McCoy, Jessica Orr, Claudine Pauselli, Samuel Portnow, Monica Savoy, Almeta Spencer, and April Van Cleef.
Why GAO Did This Study

In 2016, VBA awarded 12 contracts to five private firms for up to $6.8 billion lasting up to 5 years to conduct veterans' disability medical exams. Both VBA contracted medical examiners and medical providers from the Veterans Health Administration perform these exams, with a growing number of exams being completed by contractors. Starting in 2017, VBA contracted examiners conducted about half of all exams. GAO was asked to review the performance and oversight of VBA's disability medical exam contractors. This report examines (1) what is known about the quality and timeliness of VBA contracted exams; (2) the extent to which VBA monitors contractors' performance; and (3) how VBA ensures that its contractors provide qualified and well-trained examiners. GAO analyzed the most recent reliable data available on the quality and timeliness of exams (January 2017 to February 2018), reviewed VBA and selected contract documents and relevant federal laws and regulations, and interviewed agency officials, exam contractors, an audit firm that checks examiners' licenses, and selected veterans service organizations.

What GAO Found

The Veterans Benefits Administration (VBA) has limited information on whether contractors who conduct disability compensation medical exams are meeting the agency's quality and timeliness targets. VBA contracted examiners have completed a growing number of exams in recent years (see figure). VBA uses completed exam reports to help determine if a veteran should receive disability benefits. VBA reported that the vast majority of contractors' quality scores fell well below VBA's target—92 percent of exam reports with no errors—for the first half of 2017. Since then, VBA has not completed all its quality reviews, but has hired more staff to do them. VBA officials acknowledged that VBA also does not have accurate information on contractor timeliness.
VBA officials said the exam management system used until spring 2018 did not always retain the initial exam report completion date, which is used to calculate timeliness. In spring 2018, VBA implemented a new system designed to capture this information. VBA monitoring has addressed some problems with contractors, such as reassigning exams from contractors that did not have enough examiners to those that did. However, the issues GAO identified with VBA's quality and timeliness information limit VBA's ability to effectively oversee contractors. For example, VBA officials said they were unable to track the timeliness of exam reports sent back to contractors for corrections, which is needed to determine if VBA should reduce payment to a contractor. The new system implemented in spring 2018 tracks more detailed data on exam timeliness. However, VBA has not documented how it will ensure the data are accurate or how it will use the data to track the timeliness and billing of corrected exam reports. VBA also has no plans to use the new system to analyze performance data to identify trends or other program-wide issues. Without such plans, VBA may miss opportunities to improve contractor oversight and the program overall. A third-party auditor verifies that contracted examiners have valid medical licenses, but VBA does not verify if examiners have completed training nor does it collect information to assess training effectiveness in preparing examiners. While VBA plans to improve monitoring of training, it has not documented plans for tracking or collecting information to assess training. These actions could help ensure that VBA contractors provide veterans with high-quality exams and help VBA determine if additional training is needed. 
What GAO Recommends

GAO recommends VBA (1) develop a plan for using its new data system to monitor contractors' quality and timeliness performance, (2) analyze overall program performance, (3) verify that contracted examiners complete required training, and (4) collect information to assess the effectiveness of that training. The Department of Veterans Affairs agreed with GAO's recommendations.
Background

This section describes (1) electricity grid functions, operations, and planning; (2) energy storage operational characteristics, technologies, and deployment; and (3) the electricity regulatory framework.

Electricity Grid Functions, Operations, and Planning

The electricity grid involves four distinct functions: generation, electricity transmission, electricity distribution, and grid operations (see fig. 1). Electricity is generated at power plants by burning fossil fuels; through nuclear fission; or by harnessing renewable sources such as wind, solar, geothermal, or hydropower. Once electricity is generated, it is sent through the electricity grid, which consists of high-voltage, high-capacity transmission systems, to areas where it is transformed to a lower voltage and sent through the local distribution system for use by residential and other customers. Throughout this process, a grid operator, such as a local utility, must constantly balance the generation and consumption of electricity. To do so, grid operators monitor electricity consumption from a centralized location using information systems, and send minute-by-minute signals to power plants to adjust their output to match changes in the demand for electricity. As we previously reported, continuously balancing the generation and consumption of electricity can be challenging for grid operators because customers may use sharply different amounts of electricity over the course of a day and throughout the year. For example, in many areas, customer demand for electricity rises throughout the day and reaches its highest point—or peak demand—in late afternoon or early evening. Throughout the day, grid operators direct power plants to adjust their output to match changes in demand for electricity.
Grid operators typically first use electricity produced by baseload power plants that are the least expensive to operate, then progressively increase the supply of electricity generated by power plants that are more expensive to operate as needed to match increases in electricity demand. As a result, providing electricity to meet peak demand is generally more expensive than during other parts of the day, because to do so, grid operators use power plants that are more expensive to operate. Peak periods are generally short and account for only a few hours per day and, overall, a small percentage of the hours during a year, but can significantly contribute to the overall costs of serving customers. Grid operators conduct planning to assess the adequacy of existing grid infrastructure, identify capacity needs, and evaluate the cost and effectiveness of potential solutions to address these needs. As we previously reported, to ensure that grid infrastructure has sufficient capacity to meet future peak demand, grid operators typically develop forecasts of future electricity demand based on historical information about customer electricity use combined with assumptions about how customer demand will change in the future based on population growth, economic conditions, and other factors. Utilities deal with uncertainty partly by producing a range of forecasts based on demographic and economic factors, and by maintaining excess generating capacity, known as reserves. Models help utilities choose the least-cost combination of generating resources to meet demand. If demand forecasts are too high or low, a utility could end up with more or less generating capacity than it needs to serve its customers reliably, or it could end up with a mix of generating capacity that is not cost effective. These outcomes can affect electricity rates as well as the utility’s financial situation. 
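The merit-order approach described above, in which the least expensive plants are dispatched first and progressively more expensive plants are added as demand rises, can be illustrated with a short sketch. The plant names, capacities, and operating costs below are hypothetical, and real dispatch models account for many constraints this sketch omits.

```python
# Illustrative merit-order dispatch: plants are used in order of increasing
# operating cost until demand is met. Plant data below are hypothetical.
def dispatch(plants, demand_mw):
    """Dispatch plants cheapest-first until demand is met.

    plants: list of (name, capacity_mw, cost_per_mwh) tuples.
    Returns a dict mapping plant name to dispatched output in MW.
    """
    output = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(plants, key=lambda p: p[2]):
        if remaining <= 0:
            break
        output[name] = min(capacity, remaining)
        remaining -= output[name]
    return output

plants = [("peaker", 200, 150.0), ("baseload", 500, 25.0), ("mid_merit", 300, 60.0)]
# Off-peak demand is served entirely by the cheap baseload plant...
assert dispatch(plants, 400) == {"baseload": 400}
# ...while peak demand also calls on the more expensive plants.
assert dispatch(plants, 900) == {"baseload": 500, "mid_merit": 300, "peaker": 100}
```

This also illustrates why serving peak demand is more expensive: the last megawatts are supplied by the highest-cost plants in the stack.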
To meet demand for electricity, utilities can construct new plants, upgrade existing plants, purchase power from others, build new transmission and distribution lines, and provide incentives to customers to reduce and shift their demand for electricity through energy efficiency or demand-response programs. In addition, utilities may use time-based pricing—prices that vary throughout the day and year to reflect the costs of serving consumers—to encourage consumers to lower their electricity use at times of high prices or shift their use to times of the day when prices are lower, which can lower their electricity bills.

Energy Storage Operational Characteristics, Technologies, and Deployment

Energy storage includes a number of different technologies that have the ability to store energy for use at a later time. Energy storage systems can be designed with a range of technologies, such as pumped hydro, compressed air, batteries, and flywheels, according to DOE. Each technology has its own performance characteristics that make it more suitable for certain grid services than for others. Specifically, compressed air and pumped hydro are capable of discharge times—the length of time that a storage device can discharge electricity—in tens of hours and have large capacities that can reach 1,000 megawatts (MW). According to DOE and CRS, storage projects involving these types of technologies generally have unique siting requirements, including specific geographical features, or long construction times. In contrast, other storage technologies such as batteries and flywheels are smaller in terms of capacity and have shorter discharge times, ranging from a few seconds to several hours, and these technologies can generally be built without specific geographical features at the site. These energy storage systems consist of storage technologies and other system components such as inverters, wiring, temperature regulation, and other equipment.
According to DOE’s Global Energy Storage Database, about 24 gigawatts (GW) of grid-connected energy storage were in operation in the United States and about 2 GW of storage capacity was under development as of March 20, 2018. Pumped hydro comprises about 93 percent of this storage capacity in operation. Many of the operational pumped hydro systems in the United States were commissioned during the 1960s through the 1980s; the most recent became operational in 2012. See figure 2 for information about the proportion of energy storage capacity in operation or under development in the United States that comes from certain types of technology. While pumped hydro comprises the majority of energy storage in operation, batteries are driving the recent growth in energy storage. Since 2013, the capacity of utility-scale (1 MW or greater) battery deployments grew by 283 percent (from about 185 MW to about 709 MW), though such utility-scale batteries comprised about 0.07 percent of utility-scale generating capacity on the U.S. electric grid, according to data from the Energy Information Administration. See figure 3 for information on the capacity of utility-scale battery installations each year from 2003 through 2017. Figure 4 shows how grid-connected storage of all technology types was distributed nationwide as of March 20, 2018, according to DOE’s Global Energy Storage Database.

The Electricity Regulatory Framework

Responsibility for regulating the electricity industry is divided between the states and the federal government. Most electricity customers are served by electric utilities on the retail level that are regulated by the states, generally through state public utility commissions or equivalent organizations.
As the primary regulator of electricity on the retail level, state public utility commissions have a variety of responsibilities, such as approving utility investments in generation and distribution assets, the rates retail customers pay, and how those rates are set. Before electricity is sold to retail customers, it may be bought, sold, and traded in wholesale electricity markets that the federal government oversees through FERC. FERC is responsible for overseeing regional transmission organizations’ (RTO) development and operation of markets to ensure that wholesale electric rates are “just and reasonable” and not “unduly discriminatory or preferential.” To do so, FERC reviews and approves RTO market rules and monitors the competitiveness of RTO markets. Figure 5 indicates the location of major RTOs that have developed in certain regions of the United States. RTOs serve as grid operators by managing regional networks of electric transmission lines and also operate wholesale electricity markets to buy and sell services needed to maintain a reliable grid. These markets include capacity markets—auctions through which owners of power plants can be compensated for agreeing to make their plants available to provide electricity at a specified time in the future—designed to incentivize the building and retention of enough generation and other resources to meet future power demands; energy markets for scheduling which power plants will generate electricity throughout the day to maintain the balance of electricity generation and consumption, and at what prices; and ancillary services markets, which are designed to maintain electric reliability and ensure that supply and demand remain in balance from moment to moment so that grid operators can deliver electricity within technical standards, such as at the right voltage and frequency. 
RTOs are responsible for developing and implementing market rules, approved by FERC, that provide the framework for the design and operation of wholesale electricity markets. RTO market operations encompass multiple services that are needed to provide reliable and economically efficient electric service to customers. Each of these services has its own parameters and pricing. The RTOs use markets to determine the providers and prices for many of these services. In regions of the country without RTOs, electric utilities generally serve in the role of grid operator. In these regions, the local utility often integrates the delivery of electricity services—energy to maintain the balance of electricity generation and consumption, capacity to meet demand, and a range of ancillary services. Utilities in these regions may build and operate power plants to provide electricity to serve their retail customers. These utilities may also buy electricity from other power plant owners.

Energy Storage Can Be Used in Various Ways to Enhance the Reliability, Resilience, and Efficiency of Grid Operations

Energy storage can be used in various ways to enhance the reliability, resilience, and efficiency of grid operations, according to studies we reviewed and stakeholders we interviewed. Storage can be deployed throughout the electricity system and act as a generation, transmission, distribution, or customer-sited asset to provide various services, address operational challenges and needs, and potentially reduce costs. For example, storage can help grid operators address supply disruptions, relieve transmission congestion during periods of high demand, defer the need for transmission or distribution system upgrades, and provide backup power during a power outage. Figure 6 illustrates examples of potential applications across the electricity grid.
Energy storage can support the reliability of grid operations by helping grid operators respond to fluctuations in electricity supply resulting from the variability of renewable energy resources, such as solar or wind, or disruptions to the grid, such as the loss of a transmission line or a generating unit. Specifically, according to some studies we reviewed, the fast-ramping nature of some storage technologies that can change generation output quickly—within a few seconds or minutes—makes them suitable for addressing short-term changes in variable energy generation resources (referred to as variable resources), such as when the sun sets and output from solar resources quickly declines. Moreover, storage can provide ancillary services needed to maintain system reliability and support the transmission of electricity. Specifically, according to some studies we reviewed, storage can provide frequency regulation services—which entail moment-to-moment reconciliation of the difference between supply and demand—to maintain the stability of the system. The services that storage provides can be performed by traditional assets, but because certain storage technologies are fast-ramping, they can be better suited to provide certain services, according to several studies we reviewed and stakeholders we interviewed. Systems with a large portion of generating capacity from variable resources can face reliability challenges because the intermittent nature of these sources can cause fluctuations in voltage and frequency, according to some studies we reviewed. Grid operators are adopting storage to support increasing use of renewable energy and address the associated challenges. For example, in 2017, San Diego Gas & Electric deployed a 30 MW energy storage facility at its Escondido substation to help improve regional reliability and support greater amounts of renewable energy in the region’s energy supply (see fig. 7).
According to San Diego Gas & Electric, the Escondido storage facility is helping to enhance grid reliability and increase the use of renewable energy; the facility is capable of the equivalent of serving 20,000 customers over a period of 4 hours. Similarly, in 2017, according to Tucson Electric Power documents, the utility installed two 10 MW battery storage projects to support its ability to achieve long-term renewable energy goals without compromising the reliability of service. According to representatives from the utility, the projects provide frequency control and voltage support and the deployment shortened the reaction time to system disruptions and supported the utility’s compliance with reliability standards in its role as balancing authority. Storage can also provide services that support resilience by helping the grid adapt to changing conditions and potentially disruptive events and, if a disruptive event occurs, to rapidly recover, according to several studies we reviewed and stakeholders we interviewed. Specifically, in the event of an outage during which power sources or power lines become unavailable, storage can respond quickly to provide backup power or black start services—the provision of the power necessary to restore a generation plant when power from the grid is unavailable during a major outage. In addition, storage can also support microgrids—systems that can connect and disconnect from the grid depending on operating conditions—that could maintain power for a small area independent of the grid. For example, in 2015 a Vermont utility installed a 4 MW energy storage system in conjunction with a 2.5 MW solar project at a school that serves as an emergency shelter. In case of grid failure or an extended emergency, the facility can separate from the rest of the grid and operate independently. 
In addition, in 2016 in Massachusetts, the Sterling Municipal Light Department installed a storage system that can isolate from the main grid in the event of a power outage and provide emergency backup power to the Sterling police station and dispatch center, a facility providing first responder services. In the event of an outage, the 2 MW storage system could provide the police station with up to 12 days of power, according to the utility. Storage also has the potential to improve efficiency of grid operations and help reduce operating costs, according to studies we reviewed and stakeholders we spoke with. For example, storage has the potential to reduce costs by capturing energy generated during low-cost periods to be used to meet demand later during more expensive periods, according to studies we reviewed. Specifically, energy time-shift, also referred to as arbitrage, involves utilities purchasing inexpensive electric energy, available during periods when prices or system marginal costs are low, to charge the storage system so that the stored energy can be used or sold at a later time when the price or costs are high. In addition, storage can help make the capacity of variable resources more consistent by storing electricity during periods of high generation, such as a sunny afternoon, and releasing it later during periods of high demand, such as the early evening. Moreover, storage can provide similar energy time-shift by storing excess energy production from renewable sources, which could otherwise be curtailed. Storage also has the potential to reduce costs by avoiding or delaying investments in infrastructure. Specifically, storage may be used to reduce the capacity demands on existing generation, transmission, and distribution infrastructure. 
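The energy time-shift (arbitrage) idea described above comes down to simple arithmetic: buy energy when prices are low, store it, and sell the usable portion when prices are high. The following is a minimal sketch with hypothetical prices and a hypothetical round-trip efficiency; actual storage economics also depend on degradation, cycling limits, and other costs not shown here.

```python
# Sketch of energy time-shift (arbitrage): charge the storage system when
# prices are low and discharge when prices are high. All values below are
# hypothetical, for illustration only.
def arbitrage_profit(charge_mwh, off_peak_price, on_peak_price, round_trip_efficiency):
    """Profit from buying energy off-peak and selling the usable portion on-peak.

    round_trip_efficiency accounts for energy lost in charging and discharging.
    """
    cost = charge_mwh * off_peak_price
    revenue = charge_mwh * round_trip_efficiency * on_peak_price
    return revenue - cost

# Buy 10 MWh at $20/MWh off-peak, sell at $100/MWh on-peak at 75% efficiency:
# revenue = 10 * 0.75 * 100 = 750; cost = 10 * 20 = 200; profit = 550.
assert arbitrage_profit(10, 20.0, 100.0, 0.75) == 550.0
```

The round-trip efficiency term matters: the narrower the gap between off-peak and on-peak prices, the more the conversion losses eat into any arbitrage gain.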
As a result, according to many studies we reviewed and stakeholders we interviewed, utilities may be able to avoid or delay investments in generation, transmission, and distribution infrastructure that would otherwise be necessary to maintain adequate supply. For example, in 2017, a utility that serves customers in Massachusetts announced plans to install a 6 MW energy storage system with an 8-hour duration alongside a new diesel generator on Nantucket Island to provide backup power and postpone the need to construct a costly submarine transmission cable to bring electricity from the mainland to meet anticipated growth in electricity demand. In some cases, an investment in a storage system could be a more cost-effective way to manage peak demand, and in such cases, utilities could reduce the need for operation of peaking resources or investment in new peaking resources, such as a natural gas plant. Additionally, according to many studies we reviewed and stakeholders we interviewed, storage can help customers reduce demand charges. Demand charges are fees included on electricity bills in many parts of the country to cover the cost of ensuring that sufficient generation and transmission resources are available to serve customers during periods of peak demand. Energy storage provides an opportunity for potential savings by helping customers manage their peak demand. Using storage can also allow some utilities to avoid charges that they might incur when purchasing wholesale electricity to serve their customers during a system’s peak demand; this could allow them to pass savings on to their customers in the form of lower rates.
For example, although the Sterling, Massachusetts, Municipal Light Department installed its storage system to provide emergency backup power, the primary benefits of the project since its installation have resulted from using it to reduce peak demand, which has reduced the utility’s transmission charges and, in turn, has allowed it to reduce rates paid by its customers, according to utility representatives.

Various Factors Affect the Deployment of Storage for Grid Operations, Including Industry and Technology Readiness, Cost-Competitiveness, and the Regulatory Environment

Studies we reviewed and stakeholders we interviewed identified a variety of factors that affect energy storage deployment. These factors include industry and technology readiness, safety concerns and stringency of siting requirements, increasing use of renewable resources, the cost-competitiveness of storage and challenges with quantifying the value of storage, and the regulatory environment.

Industry and Technology Readiness

Grid operators and utilities have limited experience with storage and face technical challenges integrating storage into existing systems, according to studies we reviewed and stakeholders we spoke with. For example, according to some studies and stakeholders, grid operators may not have experience planning for the integration and operation of storage and may not consider it an option. The models that grid operators typically use to help make decisions about investments in generation, transmission, and distribution infrastructure are based on traditional resources with better-understood capabilities. Moreover, storage can be more challenging to integrate than other resources, such as solar, because it changes its function in the system from charging—consuming electricity—to discharging—generating electricity, according to some stakeholders.
Because storage must provide power when called upon but also must be recharged from another resource at a later time, tools that planners rely on must keep an accurate accounting of the amount of energy stored and available to supply power to meet demand. According to one stakeholder, installation of storage requires grid operators to develop operating requirements and identify control and mitigation strategies for proper coordination with larger grid operations. In addition, existing utility systems may not be designed to incorporate storage and may require customization to integrate storage, according to several stakeholders. Industry and technical challenges affecting deployment of storage include uncertainty about the performance of certain storage technologies over time and in various operating conditions. Energy storage systems generally are expected to last for a decade or more, but the actual degradation of battery storage under various conditions is still largely unknown, according to some studies we reviewed and stakeholders we spoke with. The electric utility industry has historically been slow to adopt new technologies and, unless new storage technologies prove highly reliable, utilities may be slow to deploy these assets, according to several studies we reviewed and stakeholders we spoke with.

Safety Concerns and Stringency of Siting Requirements

Although the adoption of storage has been increasing, safety codes and standards for storage are still under development, and questions have been raised about safety risks and how to mitigate those risks, according to studies we reviewed and stakeholders we interviewed. Efforts are under way to ensure that safety codes and standards address energy storage systems, but these types of standards tend to lag behind the development of storage technologies, according to some studies and stakeholders.
Until existing codes and standards are updated, or new ones are developed and adopted, entities seeking to deploy energy storage or needing to verify a storage system’s safety may face challenges with applying existing codes and standards, according to some studies we reviewed. In addition, concerns about the operational safety of large storage systems as a fire hazard can be a barrier to their deployment in urban areas or in proximity to other grid resources such as substations, and local entities such as fire departments may not allow the deployment of storage on certain sites. Moreover, local jurisdictions and emergency responders, along with storage system installers, insurers, and others, may not have a complete understanding of the hazards associated with storage and the best approaches to addressing them, such as appropriate fire protection measures, according to some studies and stakeholders. In addition, review of energy storage systems by local entities can lengthen the permitting process, given that these entities may not be familiar with storage systems and potential safety concerns, according to some studies. On the other hand, in some locations siting requirements may be less stringent for some types of energy storage projects than for other resources, such as a large power plant that must comply with more stringent environmental requirements, according to some studies and stakeholders. In some cases, according to some studies, the permitting process may be simpler for storage projects and construction timelines considerably shortened for a variety of reasons, including that energy storage systems do not need to complete modifications to comply with air quality standards because they do not produce emissions. In addition, certain storage projects require a much smaller footprint than conventional power plants, whereas building new power plants or transmission lines can involve large land requirements.
Increasing Use of Renewable Resources

As mentioned previously, storage can help address reliability issues associated with the variability of renewable energy generation resources, making it attractive to grid operators. Consequently, the increased use of solar and other renewable energy resources has in turn encouraged the installation of storage, according to some studies we reviewed and stakeholders we interviewed. According to the Energy Information Administration, utility-scale solar installations grew at an average rate of 72 percent per year between 2010 and 2016, faster than any other generating technology. Moreover, increasing use of these resources is expected to continue, which could drive the adoption of storage deployment in the future, according to some studies we reviewed and stakeholders we spoke with. The Energy Information Administration estimated in January 2018 that nearly half of the approximately 25 GW of new utility-scale electric generating capacity added to the grid in 2017 used renewable technologies, particularly wind and solar. Moreover, according to some studies and stakeholders, states with aggressive renewable portfolio standards—such as Hawaii, which aims to achieve 100 percent renewable sources by 2045—will need to adopt storage resources to meet those goals. In addition, California’s renewables portfolio standard includes targets of 33 percent by 2020 and 50 percent by 2030. According to some stakeholders and documents we reviewed, California is experiencing excess solar and wind generation and curtailment at certain times of the day and year and, as the state moves toward a target of 50 percent renewables, storage could help address these challenges. According to some stakeholders we spoke with, long-duration technologies will support greater integration of renewable energy on the grid.
As mentioned previously, pumped hydro and compressed air energy storage can provide long-duration storage, and other technologies, including flow and lithium-ion batteries, have the potential to provide long-duration storage, according to some studies we reviewed and stakeholders we spoke with.

Cost-Competitiveness and Challenges with Quantifying the Value of Storage

Grid operators’ decisions to invest in energy storage must consider both costs and benefits. While the cost of some technologies has fallen in recent years, the cost of storage systems—including all the system components, installation, and integration costs—is still high when compared to more traditional resources available to electric utilities, according to many studies we reviewed and stakeholders we spoke with. On the other hand, the adoption of storage for certain purposes, such as supporting increased use of renewable resources or providing backup power, includes potential benefits such as reducing greenhouse gas and other harmful emissions, or enhancing the resilience of the grid. While the cost of lithium-ion batteries has declined in recent years, the storage device is one component of a storage system, and estimates of the device’s share of the total cost of an energy storage system range from about 25 percent to 50 percent of the total costs, according to studies we reviewed. According to some stakeholders, the cost of the system components and other costs to integrate storage with the grid can be substantial and are not declining as quickly as the cost of storage devices. In addition to the cost of the storage device, other system component costs include power conversion electronics, software, and monitoring and control systems, among others, that are essential to maintain the health and safety of the entire system, according to some studies.
Moreover, valuing investments in energy storage must consider both the costs and benefits, but assessing the potential benefits and costs of storage can prove challenging, according to several studies we reviewed and stakeholders we spoke with. The challenges identified in these studies and by these stakeholders include the following:

Quantifying benefits. Benefits can be difficult to quantify, as they depend on the application, location, and ability to capture multiple benefits. Specifically, the compensation for services that storage can provide reflects local market conditions, and these vary across regions. In addition, the value of certain storage applications can be harder to quantify than others. For example, if a utility is considering deployment of storage in order to defer an investment in a transmission and distribution infrastructure upgrade, then determining the value of the storage asset involves analyzing the avoided cost of that investment, which is quantifiable. However, less tangible benefits of storage, such as improvements to operational flexibility and grid resilience, are not monetized and are therefore more difficult to quantify.

Life expectancy. For certain storage technologies, much is still unknown about their useful life, which depends on the number of charge and discharge cycles, among other things. Reliable estimates of the expected life of an asset are necessary for accurately estimating lifecycle costs and benefits. Given that battery technologies are evolving, the lack of data makes it more challenging for utilities to estimate expected costs and benefits to justify their investment expense.

Limited information on cost. Sufficient information on the cost of storage systems is not readily available, limiting utilities’ ability to include storage in modeling and investment decisions, according to some stakeholders.
Energy storage price and cost data vary among sources because of aggregation to protect proprietary interests, differences in the units used to present price and cost data, and limited information about how projects operate. Specifically, information on the operational conditions, specifications, and performance of energy storage systems is difficult to obtain. In addition, according to some studies we reviewed, uncertainty exists about the future cost outlook and pace of technological maturity.

Regulatory Environment

The regulatory environment can pose barriers to the deployment of energy storage. Specifically, market rules and regulations do not always clearly address whether entities may own and operate storage assets and how, if at all, the cost of investments in storage assets can be recovered, according to several studies we reviewed and stakeholders we interviewed. In addition, each RTO establishes its rules in a different way, and their implementation of reforms to accommodate storage varies, according to studies and documents we reviewed and stakeholders we spoke with. According to a FERC document, under current market rules, resource bidding parameters—the physical and operational constraints that a resource identifies when submitting offers to sell services in electricity markets—vary greatly among the RTOs. Moreover, state regulators and RTOs may be slow to change their policies and rules to address energy storage, and delays in such changes hinder deployment, according to some studies we reviewed. In RTO regions, some states do not allow utilities to own generation assets, and when storage is classified as a generation asset, an electric utility can be prevented from owning storage.
Moreover, when market rules do not clearly define what type of asset they consider storage to be, it can be difficult to determine whether storage can participate in the market or receive compensation, making storage in that market financially unviable, according to some studies and stakeholders. One RTO, the California Independent System Operator (ISO), has established participation models to accommodate resources, such as storage, that are operationally unique. In addition, uncertainty exists about the ability of storage project owners to recover costs of storage used for multiple applications, according to documents and studies we reviewed and stakeholders we spoke with. Moreover, the variation in rules and regulations across regions makes it difficult for energy storage project developers to navigate different potential markets because each has its own characteristics, stakeholders, regulations, and market designs, according to some stakeholders. Storage project developers must keep abreast of the activities of multiple regulatory agencies, and the variation by region makes potential revenue streams difficult to predict. In addition, according to one study we reviewed, the inconsistency of rules adds a level of complexity for project developers that want to deploy storage resources across multiple markets because they must conduct separate analyses to determine the regulatory outlook, market requirements, and profit potential in each region.

Various Federal and State Policies and Other Efforts Aim to Encourage the Deployment of Energy Storage and Address Market Barriers

Federal and state policymakers have used a variety of policies and other efforts to encourage the deployment of storage and address market barriers. For example, DOE has undertaken various efforts in response to several challenges to the deployment of storage, but funding to continue these efforts is uncertain.
In addition, FERC has taken steps to address market barriers to storage deployment in wholesale markets, but the final impact of these steps depends on implementation by RTOs. Moreover, the Department of the Treasury and the Internal Revenue Service (IRS) are considering changes that could clarify the eligibility of energy storage for a tax credit. Lastly, state policies and other efforts aim to encourage the deployment of energy storage or to address market barriers; these include establishing mandates and targets for storage adoption, revising interconnection rules and planning requirements, and offering financial incentives and funding.

DOE Has Undertaken Various Efforts to Address Challenges Affecting Storage Deployment, but Funding to Continue These Efforts Is Uncertain

According to documents we reviewed, DOE has undertaken various efforts in response to the challenges to deploying energy storage identified in a 2013 report, including challenges concerning the safety and reliability of such storage, its acceptance by industry, the regulatory environment, and cost-competitiveness.

Efforts to Address Safety and Reliability Challenges. In 2017, DOE developed, through its Pacific Northwest National Laboratory (PNNL) and Sandia National Laboratories, the DOE safety roadmap, which established a goal to foster confidence in the safety and reliability of energy storage systems. The roadmap built on previous efforts, including an Energy Storage Safety Forum that Sandia held in 2017 for stakeholders to share information and identify future needs. The objectives of the roadmap include research and development, codes and standards, and collaborative resources, with a focus on electrical safety, fire and smoke hazard detection and mitigation, health and environmental hazards, natural and man-made disasters, ventilation and thermal management, and system controls.
The roadmap aims to cover energy storage systems from development through decommissioning or refurbishment, including design, installation, commissioning, operation and maintenance, repair, decommissioning, and reuse. DOE has also supported efforts to develop and deploy energy storage safety codes with industry groups, according to documents we reviewed. For example, DOE established working groups focused on safety and standards, including the Energy Storage Systems Safety Working Group, which aims to facilitate the timely development and deployment of safe energy storage systems by implementing the DOE safety roadmap through collaboration with stakeholders. In addition, as part of these efforts, a DOE working group on codes, standards, and regulations monitors the development of standards and model codes and provides input to those activities. Additionally, DOE coordinates with industry-led and international code-setting agencies such as the National Fire Protection Association and the International Code Council, as well as companies that conduct testing. In addition, PNNL published several resources, including an inventory of codes and standards, an overview of the development and deployment of codes and standards, and a compliance guide. The compliance guide, prepared by PNNL and Sandia and covering safety codes and standards, aims to facilitate the timely deployment of storage systems and assist with documenting and verifying compliance with those codes and standards.

Efforts to Support Industry Acceptance. DOE has provided technical assistance and funded demonstration projects to help utilities and other entities install, procure, and evaluate storage projects, according to documents we reviewed.
For example, DOE provided funding and technical support for the deployment of a storage project at an emergency shelter in Vermont that can separate from the rest of the grid and operate independently in case of an emergency. DOE also supported the development of documentation and tools to assist utilities in the design, deployment, and operation of energy storage systems, including valuation models, procurement guidelines, commissioning procedures, and data acquisition guidelines. In addition, Sandia published guidance for municipalities on the elements that should be included in a solicitation for procurement and installation of an energy storage project, as well as a handbook providing information and tools to guide investors' evaluations of energy storage opportunities. DOE has a proposal under way for a study to gather pricing information for energy storage technologies that will be used as part of future updates to the handbook. DOE also held a financial summit in June 2017 to provide information to the financial community on solicitations and contracts. In addition, to evaluate storage projects, DOE and the Washington Department of Commerce established a memorandum of understanding to have PNNL characterize and analyze the technical and economic attributes of storage projects. DOE also supports new deployments through funding, including the Grid Modernization Laboratory Consortium awards aimed at integrating conventional and renewable sources with energy storage.

Efforts to Provide Technical Assistance to Regulators. According to documents we reviewed, DOE has hosted workshops and provided technical assistance for several state public utility commissions and other entities aimed at providing them with information on storage technology development, project procurement, and valuation.
In addition, in 2012 Sandia developed guidance for state regulatory authorities and planning personnel to provide information about opportunities for energy storage to play a greater role in the electricity grid.

Research and Development to Improve Cost-Competitiveness. DOE's Energy Storage Program's research and development activities focus on improving materials and system factors that affect the cost, efficiency, and capacity of certain energy storage technologies, including flow batteries. DOE's fiscal year 2018 budget request includes a performance goal to improve the cost-benefit ratio of storage to compete with current peak generation resources and, by 2020, increase to 5 percent the commercial use of grid-scale storage to buffer renewables. A DOE advisory committee in 2016 conducted an assessment of DOE's energy storage-related research, development, and deployment programs that produced 15 recommendations. The recommendations included, among others, improving the visibility of DOE's efforts; addressing the need for storage models and studies of market impediments; and providing additional funding and resources for energy storage research, development, and deployment programs. While DOE has undertaken a range of efforts over the past several years to address challenges to deployment, future funding of these efforts is uncertain. In 2017, DOE allocated $31 million to work on energy storage within its Office of Electricity Delivery and Energy Reliability.
DOE's fiscal year 2018 budget request proposed reducing this funding by about 74 percent, to $8 million, and proposed eliminating, among other efforts, work related to engagements with states, utilities, and storage providers for conducting tests and trials; engagements with state and federal regulatory officials on efforts to understand regional market barriers to deployment; validation of system performance and analysis of regional use cases; support to states and regional entities for the procurement, commissioning, and analysis of deployed systems; the development of enhanced tools and data for sharing with industry for the development and use of grid-scale batteries; and participation in both industry-led and international codes and standards development. Because fiscal year 2018 funding through March was provided under continuing resolutions, energy storage funding remained on par with fiscal year 2017 levels for the first half of the fiscal year, and the Consolidated Appropriations Act, 2018, subsequently increased funding for energy storage to $41 million. However, DOE's fiscal year 2019 budget request again proposes reducing the funding for energy storage work to $8 million. According to the fiscal year 2019 budget request, DOE plans to focus on accelerating the development of new materials and technologies that can lead to improvements in the cost and performance of utility-scale energy storage systems and accelerate the adoption of energy storage systems into the grid infrastructure.

FERC Has Taken Steps to Address Market Barriers to Storage Deployment, but the Final Impact of These Efforts Depends on Implementation by RTOs

FERC has taken several steps to address market barriers to energy storage deployment, but the impact of these efforts will depend on implementation by RTOs. In March 2018, FERC published a final rule that aims to address barriers to integrating storage into organized wholesale markets.
The rule requires that RTOs establish participation models consisting of market rules that recognize the physical and operational characteristics of electric storage resources to facilitate their participation in the RTO markets. In prior years, FERC issued several orders that also aimed to address barriers to storage participation in organized wholesale electric markets. For example, FERC Order 792—issued in 2013—revised the definition of a small generating facility in the pro forma Small Generator Interconnection Agreement—which establishes the terms and conditions for interconnection of resources no larger than 20 MW—to specifically include energy storage devices. In addition, FERC Order 755—issued in 2011—required RTOs to compensate frequency regulation resources in a manner that acknowledges the performance of faster-ramping resources, such as batteries and flywheels. Additionally, in May 2018, FERC published a final rule that revised the definition of a generating facility in the pro forma Large Generator Interconnection Procedures and pro forma Large Generator Interconnection Agreement—which establishes the terms and conditions for interconnection of resources larger than 20 MW—to explicitly include electric storage resources. FERC also published guidance in February 2017 on the ability of electric storage resources to provide transmission or grid support services at cost-based rates, while providing other electric storage services, such as power sales, at market-based rates. According to some studies we reviewed and stakeholders we spoke with, FERC orders have helped alleviate some of the barriers to storage participation in wholesale markets, but the impact of these orders depends on RTO implementation. Moreover, RTO implementation of FERC’s requirement to establish participation models to accommodate storage may not occur until the end of 2019 or later. 
Figure 8 shows the timeline of key FERC efforts that aim to address market barriers to the deployment of storage and time frames for implementation from November 2016 through 2019. According to FERC's final rule, RTO implementation of the requirement to establish participation models could take 21 months from the publication of the final rule. RTOs will need to develop the participation models, obtain input through their stakeholder review processes, and may need to update modeling and dispatch software.

IRS May Revise Regulations to Clarify the Eligibility of Storage for a Tax Credit

Treasury and the IRS are considering changes that could clarify the eligibility of energy storage for a business tax credit under section 48 of the Internal Revenue Code, according to IRS documents. Currently, customers who install storage systems may be eligible for this tax credit when they use the storage system to store energy from renewable energy systems more than 75 percent of the time; however, at this time there is no federal tax incentive for stand-alone storage. Since 2011, the IRS has issued some written determinations that the storage portion of a renewable energy system would be eligible for the credit. However, only the specific taxpayer addressed by a determination can rely on it as precedent. In October 2015, Treasury and IRS solicited comments from the public on how to define certain types of property that qualify for this tax credit, including whether property such as storage devices may also be considered energy property. According to IRS documents, comments filed in response requested revisions to the tax credit that include, among other things, providing a technology-neutral definition of energy storage property, providing a specific list of types of energy storage property that qualify for the credit, and determining that storage is eligible for the credit on a stand-alone basis.
According to some stakeholders we interviewed, the requirement for storage to be paired with renewable energy to obtain the tax credit is limiting because there are other potential applications and benefits storage can provide to the grid that are unrelated to renewable energy integration. Additionally, one stakeholder we spoke with said that regions with relatively small renewable energy resource capacities are unable to receive federal support for energy storage, even though it may benefit their grid.

State Policies and Other Efforts Include Mandates and Targets, Revision of Rules and Planning Requirements, and Financial Incentives and Funding

Through interviews with stakeholders and our review of documents, we identified examples of state policies and other efforts that have encouraged the deployment of energy storage or aim to address market barriers. Appendix I includes a detailed list of the state policies and other efforts encouraging deployment of energy storage that we identified. In summary, these policies and other efforts include the following:

Mandates and Targets. Several states have established or proposed mandates or targets that require or encourage electric utilities to procure a specific amount of energy storage capacity. States have taken a range of approaches to implementing these mandates and targets. For example:

The California Public Utilities Commission requires investor-owned utilities to collectively procure 1.3 GW of energy storage by 2020.

Oregon is in the process of implementing a requirement for certain utilities serving more than 25,000 retail customers to procure energy storage systems with at least 5 megawatt hours of energy storage capacity by January 1, 2020.

The Massachusetts Department of Energy Resources adopted a 200 megawatt-hour energy storage target for electric distribution companies to collectively meet by January 2020.
In November 2017, New York State enacted legislation requiring the state public service commission to adopt an energy storage target. In January 2018, the Governor of New York announced an energy storage goal of 1.5 GW by 2025. A number of other states are also considering the adoption of targets for storage capacity.

Mandates and targets that require or encourage utilities to procure energy storage can help create certainty in the market for energy storage by assuring that there is a demand for storage, according to some stakeholders we interviewed. Additionally, according to one document we reviewed, mandates and targets may affect deployment by encouraging the development of model regulatory frameworks that serve as examples to other states. States with storage mandates and targets may also serve as case studies to demonstrate the impact of energy storage deployment on a large scale and provide the industry with operational experience, examples of how to best integrate storage, and opportunities to evaluate storage.

Changes to Interconnection Rules. Some states have changed or are considering changes to interconnection rules to account for energy storage. States are taking a number of approaches to revising interconnection rules. For example:

In 2015, Hawaii's Public Utilities Commission made changes to interconnection standards and energy policies to provide for the interconnection of energy storage to the grid.

The Arizona Corporation Commission is developing statewide interconnection rules for distributed generation. Draft rules include interconnection requirements for energy storage systems, and Commission officials told us that stakeholders are debating the scope and nature of those requirements.

Planning.
Some states allow for the inclusion of energy storage in integrated resource and transmission planning processes; grid operators and utilities undertake these planning processes to ensure that the grid infrastructure has sufficient capacity and that grid operators are able to meet future power demands. For example:

The New Mexico Public Utility Commission's integrated resource planning rules require investor-owned utilities to evaluate all feasible energy resources as part of their resource planning process. When the Commission's integrated resource planning rules were originally implemented, energy storage was not commercially feasible; however, the state commission recently amended these rules to include energy storage as a resource in planning.

The Oregon Public Utility Commission directed Portland General Electric to address energy storage in its future integrated resource plans.

Washington's Utilities and Transportation Commission directs utilities to demonstrate that, when considering a new resource acquisition, their analysis includes an evaluation of the costs and benefits of a storage alternative. The Commission also directs utilities procuring resources to issue requests for proposals that are technology neutral, allowing energy storage to bid.

Several states are also incorporating storage into broader energy planning efforts, including conducting research to identify the benefits of and opportunities for storage in the state. For example:

North Carolina passed legislation directing the North Carolina Policy Collaboratory, at the University of North Carolina, to conduct a study on energy storage to address how and whether storage may benefit consumers, the feasibility of storage in the state, and policy recommendations.
Massachusetts has also undertaken a number of efforts, including launching the Energy Storage Initiative, an initiative administered by the Massachusetts Department of Energy Resources and the Massachusetts Clean Energy Center to facilitate the deployment of storage and provide environmental and ratepayer benefits. As part of this initiative, the 2016 State of Charge report was released; among other things, it identified barriers to energy storage adoption in the state and made recommendations to increase deployment of storage, setting a target of 600 MW of energy storage capacity by 2025.

Financial Incentives and Funding. Several states offer financial incentives, including tax credits, tax exemptions, and rebate programs, that encourage the deployment of residential, commercial, and industrial energy storage systems by offsetting costs. For example:

California's Self-Generation Incentive Program—designed to help reduce emissions, demand, and customer electricity costs—provides rebates to support existing, new, and emerging distributed energy resources installed on the customer's side of the utility meter. This program is open to many different technologies, but according to the California Public Utilities Commission, the largest share of funding is allotted for energy storage projects.

In 2017, Maryland established a state tax credit for a percentage of certain installed costs of energy storage systems on residential and commercial property.

Legislation has also been proposed in New York that would create a state tax credit for residential energy storage systems equal to 25 percent of costs, up to $7,000.

A number of states offer funding for energy storage pilot and demonstration projects. For example:

Massachusetts launched a $20 million grant program to pilot energy storage use cases to increase deployment of storage.
The Washington Clean Energy Fund supports demonstration projects, including projects at utilities working with the Pacific Northwest National Laboratory to better understand approaches for integrating and optimizing storage control systems and to develop a framework for evaluating the technical and financial benefits of storage.

In addition to the efforts described above, we found that several states have proposed or undertaken a range of other efforts that may encourage the deployment of energy storage or address market barriers. For example, the Arizona Corporation Commission required two electric utilities to develop residential battery storage programs in order to lower customers' energy use during peak demand. In addition, Maryland's Public Service Commission initiated a grid modernization rulemaking that, among other things, will define residential energy storage, determine a classification for storage in the Commission's rules, and create criteria to evaluate storage investments. Similarly, state legislation directs Oregon's Public Utility Commission to create a framework for utilities to use when conducting storage evaluations. Moreover, the California Public Utilities Commission has approved rules that increase the ways for energy storage systems to obtain revenue for multiple uses, or grid services, such as frequency regulation, capacity, or other services.

Agency Comments

We provided a draft of this report to DOE, FERC, and IRS for review and comment. In its comments, reproduced in appendix II, FERC generally agreed with our findings. DOE and FERC provided technical comments, which we incorporated as appropriate. IRS did not provide written or technical comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Chairman of FERC, the Commissioner of IRS, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III.

Appendix I: State Policies and Other Efforts Encouraging Deployment of Energy Storage

Through interviews with stakeholders and our review of documents, we identified examples of policies and other efforts that have encouraged the deployment of energy storage or aim to address market barriers, including the establishment of mandates and targets for storage adoption, the revision of interconnection rules and planning requirements, financial incentives, and funding. Table 1 describes examples of a range of state policies and other efforts that may encourage the deployment of energy storage.

Appendix II: Comments from the Federal Energy Regulatory Commission

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Frank Rusco, (202) 512-3841 or [email protected].

Staff Acknowledgments

In addition to the contact named above, Karla Springer (Assistant Director), Antoinette Capaccio, Janice Ceperich (Analyst-in-Charge), Philip Farah, Kristen Farole, Paul Kazemersky, and Daniel Kojetin made key contributions to this report. Also contributing to this report were Tara Congdon, R. Scott Fletcher, Cindy Gilbert, and Dan C. Royer.
Why GAO Did This Study

Power plants' electricity output must be matched continuously with demand, which varies depending on the time of day and year. To maintain a reliable supply of electricity, operators of the electricity grid—a complex network of power plants and power lines managed by utility companies and other operators—take steps to ensure power plants are available to generate electricity when needed. Increasingly, renewable sources of energy, such as solar and wind, are being integrated into the grid. Energy storage allows for electricity to be stored and used later when it is needed and could change the operating capabilities of the electricity grid. Batteries and other energy storage technologies can store energy in one form—such as chemical, mechanical, or thermal energy—and transform that energy to generate electrical power at a later time. GAO was asked to provide information on the role of energy storage in grid operations. This report describes (1) how energy storage can be used to enhance grid operations and performance; (2) factors that affect the deployment of energy storage for grid operations; and (3) federal and state policies and other efforts that address the deployment of energy storage. GAO reviewed studies published from 2012 through 2017 and interviewed 41 stakeholders, including officials from government agencies and representatives of industry and other groups, selected based on their knowledge of energy storage and grid operations.

What GAO Found

Energy storage can be used in various ways to enhance the reliability, resilience, and efficiency of grid operations, according to studies GAO reviewed and stakeholders GAO interviewed. Such storage can be deployed throughout the electricity system and act as a generation, transmission, distribution, or customer-sited asset to provide various services, address operational challenges and needs, and potentially reduce costs, as shown in the figure below.
For example, storage can help grid operators address supply disruptions and the variability of renewable energy resources, such as solar and wind; relieve transmission congestion; defer the need for transmission or distribution system upgrades; and provide backup power during a power outage.

Examples of Potential Storage Applications on the Electricity Grid

Various factors affect energy storage deployment. These include industry and technology readiness, safety concerns and stringency of siting requirements, increasing use of renewable resources, cost-competitiveness of storage and challenges with quantifying the value of storage, and the regulatory environment, according to studies GAO reviewed and stakeholders GAO interviewed. For example, industry and technical challenges include uncertainty about the performance of certain technologies over time and in various operating conditions. Federal and state policymakers have used various policies and other efforts to encourage the deployment of storage and address market barriers. For example, the Department of Energy has undertaken various efforts, including research and development focused on improving factors that affect the cost and capacity of certain storage technologies. In addition, the Federal Energy Regulatory Commission has issued proposed and final rules to address market barriers to storage deployment in wholesale markets. Lastly, state policies and other efforts that aim to encourage the deployment of storage or to address market barriers include establishing mandates and targets for storage adoption, revising planning requirements, and offering financial incentives and funding.
gao_GAO-17-799
Background

Since DHS's creation in 2003, significant internal control and financial management system deficiencies have hampered its ability to reasonably assure effective financial management and to manage operations. These deficiencies contributed to our decision to designate DHS's management functions, including financial management, as high risk. To help address these deficiencies, DHS initiated a decentralized approach to upgrading or replacing legacy financial management systems and has been evaluating various options for modernizing them, including the use of SSPs. DHS initiated three projects for modernizing the systems of selected DHS components, including its TRIO modernization project. The TRIO project has focused on migrating the financial management systems of Coast Guard, DNDO, and TSA to a modernized solution provided by IBC. DHS's efforts to effectively assess and manage risks associated with this project are essential to realizing its modernization goals. In 2013, OMB issued a memorandum directing agencies to consider federal SSPs as part of their AAs. Also, in May 2014, Treasury and OMB designated IBC as one of four federal SSPs for financial management to provide core accounting and other services to federal agencies. This designation was based on Treasury and OMB's evaluation of the four service providers' ability to assist federal agencies in meeting their accounting and financial management needs, including their experience with implementing financial management systems and providing other financial management services to customers, cost of services provided, compliance with financial management and internal control requirements, commitment to shared services, capacity, and long-term growth strategy. FIT's responsibilities related to the governance and oversight of federal SSPs were transferred to USSM after USSM was established in October 2015.
TRIO Modernization Project Because of concerns that its Core Accounting System (CAS) Suite was outdated, inefficient, and did not reliably meet requirements, Coast Guard completed an AA in January 2012 to assist in developing a path forward for modernizing its financial management system. In August 2012, Coast Guard established its CAS Replacement project team to further evaluate two of the alternatives considered in its AA and develop a recommended course of action. In addition, Coast Guard determined that hosting, owning, operating, and managing a financial management system were not among its core competencies. Because TSA and DNDO also relied on CAS as their primary accounting system, they also conducted AAs to identify the best alternative for transitioning to a modernized financial management system solution. The AAs conducted by the TRIO components during 2012 and 2013 considered the use of federal and commercial SSPs and other options. In addition, Coast Guard completed additional market research including further analysis of commercial SSPs in June 2013. In July 2013, the TRIO components determined that migrating to a federal SSP was the best course of action and subsequently conducted discovery phase efforts with IBC from November 2013 through May 2014 to further explore the functional requirements for procurement, asset, and financial management services. Based on these efforts, in July 2014, the TRIO components recommended that they proceed with implementation of the IBC shared services solution. In August 2014, FIT and OMB concurred with this recommendation, and DHS entered into an interagency agreement (IAA) with IBC for implementation. Figure 1 shows a timeline of these key events. The IAA for implementation and related performance work statement included a description of the services that IBC is to provide and the roles and responsibilities of DHS, the TRIO components, and IBC. 
The IAA also required IBC to prepare a detailed project management plan describing how the requirements would be managed and updated and an integrated master schedule (IMS) for identifying tasks to be completed, duration, percentage completed, dependencies, critical path, and milestones. According to the February 2015 project management plan, DNDO, TSA, and Coast Guard were expected to go live on the IBC solution in the first quarter of fiscal years 2016, 2017, and 2018, respectively. However, in May 2016, DHS and IBC determined that TSA’s and Coast Guard’s planned implementation dates were not viable because of various challenges impacting the TRIO project and recommended a 1-year delay for their respective implementation dates. Figure 2 summarizes planned and completed key implementation events for the TRIO project as of May 2016. Best Practices for Conducting Analysis of Alternatives and Managing Risks GAO, SEI, and other entities have developed and identified best practices to help guide organizations in effectively planning and managing various activities, including acquisitions of major information technology systems. These include GAO’s identified best practices for the AOA process and best practices identified by SEI for risk management. GAO-identified best practices for AOA process. GAO identified 22 best practices for a reliable, high-quality AOA process that can be applied to a wide range of activities in which an alternative must be selected from a set of possible options, as well as to a broad range of capability areas, projects, and programs. These practices can provide a framework to help ensure that entities consistently and reliably select the project alternative that best meets mission needs. Not conforming to these best practices may lead to an unreliable process, and the entity will lack assurance that the preferred alternative best meets the mission needs.
Appendix II provides additional details on GAO’s identified AOA process best practices and how they can be applied to a wide range of activities in which an alternative must be selected from a set of possible options, as well as to a broad range of capability areas, projects, and programs. SEI’s risk management practices. SEI’s practices for the risk management process area call for the identification of potential problems before they occur so that risk-handling activities can be planned throughout the life of a project to mitigate adverse impacts on achieving objectives. These practices are determining risk sources and categories; defining parameters used to analyze and categorize risks and to control the risk management effort; establishing and maintaining the strategy to be used for risk management; identifying and documenting risks; evaluating and categorizing each identified risk using defined risk categories and parameters and determining its relative priority; developing a risk mitigation plan in accordance with the risk management strategy; and monitoring the status of each risk periodically and implementing the risk mitigation plan as appropriate. DHS Did Not Always Follow Best Practices for Analyzing Alternatives for TRIO Components’ Choice of Modernized Financial Management System Although the TRIO components conducted AAs to identify the preferred alternative for modernizing their financial management systems, their efforts did not always follow best practices. For example, Coast Guard’s and TSA’s AAs supporting their selection of migrating to a federal SSP for modernizing their financial management systems did not fully or substantially meet all four characteristics of a reliable, high-quality AOA process. In addition, we found that DHS guidance did not fully or substantially incorporate five of GAO’s identified best practices for conducting an AOA process.
The TRIO components’ AAs included descriptions of the key factors, such as scores for each alternative against the selection criteria used to assess it. Based on these AAs, DHS and the TRIO components selected the federal SSP alternative as their preferred choice and subsequently selected IBC as their federal SSP. However, because Coast Guard’s and TSA’s AAs did not fully or substantially meet all four characteristics of a reliable, high-quality AOA process, they are at increased risk regarding their decision on the solution that represents the best alternative for meeting their mission needs. DNDO Substantially, and Coast Guard and TSA Partially, Met Best Practices for Conducting AOAs Based on the extent to which the DHS TRIO components followed the GAO-identified 22 best practices for conducting an AOA process, we found that DNDO’s AA process substantially met the four characteristics of a reliable, high-quality AOA process while the Coast Guard and TSA AA processes both substantially met one and partially met three of these four characteristics. For example, we found that TSA’s AA partially met the “well-documented” characteristic, in part, because risk mitigation strategies, assumptions, and constraints associated with each alternative were not discussed in its AA. In addition, we found that Coast Guard’s AA partially met the “credible” characteristic, in part, because there was no indication that it contained sensitivity analyses, an evaluation of the impact of changing assumptions on its overall costs or benefits analyses. Our overall assessment is summarized in table 1. Appendix III provides additional details on our assessment of the TRIO components’ AAs for each of the GAO-identified 22 AOA best practices. 
Further, in comparing DHS AOA and AA guidance to the GAO-identified 22 AOA process best practices, we found that although DHS’s guidance for conducting both AOAs and AAs fully or substantially incorporated 17 of the identified best practices, the guidance did not fully or substantially incorporate 5 of these practices. For example, although the guidance addressed risk management in general terms, it did not detail the need to document risk mitigation strategies for each alternative. Not documenting the risks and related mitigation strategies for each alternative prevents decision makers from performing a meaningful trade-off analysis necessary to choose a recommended alternative. In addition, while DHS guidance describes the need for an AA or AOA review, it describes reviews conducted within the organizational chain of command and does not address the need for an independent review—one of the most reliable means to validate an AOA process. Further, although the guidance noted that weights for selection criteria may become more subjective when they cannot be derived analytically, additional guidance on weighting selection criteria was limited. Our overall assessment is summarized in table 2. Because of these limitations in guidance, and because Coast Guard and TSA did not fully adhere to the GAO-identified best practices, Coast Guard’s and TSA’s AAs did not fully or substantially reflect all four characteristics of a reliable, high-quality AOA process. As a result, Coast Guard and TSA increased their risk of selecting a solution that may not represent the best alternative for meeting their mission needs. 
TRIO Components Used Key Factors, Metrics, and Processes to Analyze Alternatives and Related Results Documentation supporting TRIO components’ AA efforts included descriptions of the key factors, metrics, and processes involved in conducting their analyses, including the (1) alternatives considered, (2) market research conducted, (3) three alternatives evaluated, (4) selection criteria used by each and how the criteria were weighted, (5) scores for each alternative against the selection criteria, and (6) alternatives that scored the best under the AOA evaluation. The TRIO components conducted market research to develop reasonable alternative solutions for consideration. For example, through its market research, TSA identified OMB-designated federal SSPs and commercial entities as potential alternatives for hosting and implementing a modernized and integrated financial management system. According to its AA, TSA was able to gain an understanding of the offerings, capabilities, and related costs associated with these alternatives through reviews of documentation and interviews. After developing a diverse range of financial system modernization alternatives for consideration, each of the TRIO components assessed them for viability using various factors—such as measures of effectiveness, cost, risk, and value—and identified the three top-rated alternatives for further evaluation. For example, Coast Guard identified nine alternatives for consideration and analyzed, scored, and ranked them to determine its top three alternatives for further analysis: incrementally improve the current CAS Suite and remove certain outdated components, host the financial management system internally using software and tools already owned, and use an SSP to host the financial management system. Each component identified its three alternatives for further evaluation and used defined selection criteria to rate them. 
For example, DNDO’s selection criteria included four categories of operational effectiveness that were weighted according to their level of importance. Based on their evaluations, each component identified the best alternative for its respective financial management system needs. According to Coast Guard’s November 2012 decision memorandum, Coast Guard further narrowed the alternatives it focused on to (1) using an SSP to host its financial management system and (2) hosting the system internally using already-owned software and tools, and it also gathered rough order of magnitude cost estimates for both alternatives. Based on its evaluation, Coast Guard determined that the two alternatives were comparable. According to this memorandum, Coast Guard further determined that owning, hosting, operating, and managing a financial management system were not among its core competencies. Based on this determination, OMB direction to agencies to use (with limited exceptions) shared services, and other factors, Coast Guard decided that migrating to an SSP was the best alternative. TSA found in its February 2013 analysis that the differences between federal and commercial SSP alternatives were not significant and, as a result, recommended that a competitive procurement be conducted to better evaluate each alternative. However, DHS officials told us that TSA subsequently determined that a competitive procurement was not warranted and chose to migrate to a federal SSP. This determination was based on additional OMB guidance issued in March 2013 directing agencies to consider federal SSPs as part of their AAs and stating that commercial SSPs are an appropriate solution and would be funded by OMB only in instances in which the agency’s business case demonstrates that a commercial SSP can provide a better value for the federal government. In addition, DNDO determined that migrating to a federal SSP was its best alternative in May 2013. 
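The weighted selection-criteria evaluations described above can be illustrated with a minimal sketch. The criteria names, weights, raw scores, and alternative labels below are hypothetical assumptions for illustration, not the TRIO components' actual values.

```python
# Hypothetical sketch of weighted selection-criteria scoring in an analysis
# of alternatives; all criteria, weights, and scores are invented.

def weighted_score(weights, scores):
    """Sum of (criterion weight * raw score) across all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

# Weights sum to 1.0, reflecting each criterion's relative importance.
weights = {"operational effectiveness": 0.4, "cost": 0.3, "risk": 0.2, "value": 0.1}

# Raw scores (1 = worst, 5 = best) assigned to each alternative per criterion.
alternatives = {
    "improve current system":    {"operational effectiveness": 3, "cost": 4, "risk": 3, "value": 2},
    "host internally":           {"operational effectiveness": 3, "cost": 3, "risk": 3, "value": 3},
    "migrate to shared service": {"operational effectiveness": 4, "cost": 3, "risk": 3, "value": 4},
}

# Rank alternatives from highest to lowest weighted score.
ranked = sorted(alternatives, key=lambda a: weighted_score(weights, alternatives[a]), reverse=True)
for alt in ranked:
    print(f"{alt}: {weighted_score(weights, alternatives[alt]):.2f}")
```

With these invented inputs, "migrate to shared service" ranks first at 3.50; the point of the sketch is only that the ranking is fully determined by the chosen weights and scores, which is why GAO's best practices emphasize how weights are derived and documented.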
Because its preliminary research focused primarily on the federal SSP marketplace, Coast Guard conducted additional market research to include a more robust analysis of commercial SSPs. Coast Guard’s June 2013 market research report described the results of this effort, including its evaluation of responses from 11 commercial SSPs. Coast Guard reported that none of the commercial SSPs that responded could meet all 44 specific financial management system requirements and the extent to which they could meet them varied significantly. Based on these results, Coast Guard determined that there was a lack of maturity in the commercial SSP market for federal financial management. According to the report, this overall assessment was based on various considerations of information provided by commercial SSP respondents, including the wide variety of proposed configurations, solutions, prices, and implementation schedules, the lack of federal experience and service for agency-wide capabilities, and insufficient length of service to establish positive trends in audit performance; the lack of similar offerings that implied a lack of strong competition between comparable products that would exert downward pressure on cost; and the lack of like product offerings, which increases the likelihood of higher switching costs in the case of poor performance because of increased difficulty in moving from one “turnkey” service to another. In July 2013, the TRIO components and DHS selected the federal SSP alternative as their preferred choice and subsequently selected IBC as their federal SSP. DHS officials told us that IBC was selected based on (1) DHS’s reliance on OMB and Treasury’s designation of IBC as a federal SSP, (2) OMB guidance to consider the use of federal SSPs, and (3) a review of the availability of the four federal SSPs indicating that IBC was the only one available to meet the requirements and implementation schedule at that time. 
In August 2013, DHS notified OMB that the TRIO components had performed extensive market research and finalized their respective AAs and independently concluded that migrating to a federal SSP was in the best interests of the government. Also, in August 2013, FIT notified OMB regarding the TRIO components’ AA efforts and that the TRIO components would proceed to the discovery phase with IBC. According to FIT’s notification memorandum to OMB, the TRIO components’ AAs demonstrated that migrating to a federal SSP was the best value to the federal government and that the components identified IBC as a suitable partner based on the results of their market research into federal SSPs. DHS Met Three and Partially Met Four Best Practices for Managing the Risks of Using IBC for the TRIO Project Risk management best practices call for the identification of potential problems before they occur so that risk-handling activities can be planned throughout the life of the project to mitigate adverse impacts on achieving objectives. These best practices involve (1) preparing for risk management, (2) identifying and analyzing risks, and (3) mitigating identified risks. Preparing for risk management involves determining risk sources and categories and developing risk mitigation techniques. Identifying and analyzing risks includes determining those that are associated with cost, schedule, and performance and evaluating identified risks using defined risk parameters. Mitigating risks includes determining the levels and thresholds at which a risk becomes unacceptable and triggers the execution of a risk mitigation plan or contingency plan; determining the costs and benefits of implementing the risk mitigation plan for each risk; monitoring risk status; and providing a method for tracking open risk-handling action items to closure. 
Based on our evaluation, we found that DHS processes generally reflected three of seven specific risk management best practices and partially reflected the remaining four practices. Table 3 summarizes the extent to which DHS followed these seven best practices for managing TRIO project risks. Additional details on DHS and TRIO component efforts to address these practices are summarized following this table. Prepare for risk management. Key aspects of processes established by DHS and TRIO components related to the three best practices associated with preparing for risk management: Determine risk sources and categories. This practice calls for a basis for systematically examining circumstances that affect the ability of the project to meet its objective and a mechanism for collecting and organizing risks. DHS and the TRIO components established processes that met this best practice. For example, DHS reviewed the integrated master schedule that IBC prepared to identify sources of risk and defined risk categories in TRIO project policies. Define risk parameters. Risk parameters are used to provide common and consistent criteria for comparing risks to be managed. The best practice includes defining criteria for evaluating and quantifying risk likelihood and severity levels and defining thresholds for each risk category to determine whether risk is acceptable or unacceptable and to trigger management action. DHS partially met this best practice. DHS’s risk management program defined rating scales to provide consistent criteria for evaluating and quantifying risk likelihood and severity levels. However, DHS’s Risk Management Planning Handbook and related template for developing risk management plans for projects did not address the need for thresholds relevant to each category of risk to facilitate review of performance metrics in order to determine when risks become unacceptable or to invoke selected risk-handling options when monitored risks exceed defined thresholds. 
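The "define risk parameters" practice above (likelihood and severity scales combined into an exposure rating, with a threshold that triggers management action) can be sketched as follows. The rating scales and the threshold value are illustrative assumptions, not DHS's actual criteria.

```python
# Illustrative risk parameters: likelihood and severity scales combine into
# an exposure rating, and a defined threshold triggers the mitigation or
# contingency plan. Scale values and the threshold are assumptions.

LIKELIHOOD = {"remote": 1, "unlikely": 2, "likely": 3, "near certain": 4}
SEVERITY = {"minimal": 1, "moderate": 2, "serious": 3, "critical": 4}

# Exposure at or above this value is treated as unacceptable and invokes
# the selected risk-handling option (hypothetical cutoff).
ACTION_THRESHOLD = 9

def exposure(likelihood, severity):
    """Exposure rating = likelihood rating * severity rating."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def requires_action(likelihood, severity):
    """True when exposure meets or exceeds the defined threshold."""
    return exposure(likelihood, severity) >= ACTION_THRESHOLD

print(exposure("likely", "serious"))            # 3 * 3 = 9
print(requires_action("likely", "serious"))     # True: at threshold
print(requires_action("unlikely", "critical"))  # 2 * 4 = 8, below threshold
```

The defined threshold is what DHS's handbook lacked: without it, a monitored risk can climb in exposure without any rule specifying when management action must be triggered.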
Establish a risk management strategy. A risk management strategy addresses specific actions and the management approach used to apply and control the risk management program, including identifying sources of risk, the scheme used to categorize risks, and parameters used to evaluate and control risks for effective handling. DHS met this best practice. DHS and IBC established risk management policies and plans for the TRIO project based on DHS acquisition guidance, which provided a framework for a risk management program. Collectively, these policies and plans constitute a risk management strategy. DHS and IBC have periodically updated these documents to maintain the scope of the risk management effort; the methods and tools to be used for risk identification, risk analysis, risk mitigation, risk monitoring, and communication; the prioritization of risks; and the allocation of resources for risk mitigation. Identify and analyze risks. Key aspects of processes established by DHS and the TRIO components related to the two best practices associated with identifying and analyzing risks: Identify risks. Risk identification should be an organized, thorough process to seek out probable or realistic risks to achieving objectives. This practice recognizes that risks should be identified and described understandably before they can be analyzed and managed properly. Using categories and parameters developed in the risk management strategy and identified sources of risk guides the identification of risks associated with cost, schedule, and performance. To identify risks, best practice elements include reviewing the work breakdown structure (WBS) and project plan to help ensure that all aspects of the work have been considered. Best practices for documenting risks include documenting the context, conditions, and potential consequences of each risk and identifying the relevant stakeholders associated with each risk. DHS partially met this best practice. 
DHS’s July 2016 risk register contained a wide range of risks associated with defined risk categories. It also reflected DHS’s review of the TRIO project’s integrated master schedule that IBC prepared based on the WBS and work plans that IBC also developed. The risk register documented the context, conditions, potential consequences, and relevant stakeholders associated with each risk. However, DHS’s documented risk management processes did not identify all significant risks or reflect its efforts to revisit risks that had previously been closed. For example, DHS officials told us that IBC was unable to provide sufficient, reliable cost and schedule information for project monitoring; however, a risk reflecting these concerns was not included on its July 2016 risk register. Further, the risk register included certain closed risks related to the need for a governance structure and strategy for ensuring that IBC met performance, cost, and schedule objectives. Although DHS had ongoing concerns about its ability to ensure that IBC met these objectives, the risk register did not reflect efforts to revisit these risks to determine whether their status needed revision or if other risks should be included on the risk register to address its accountability concerns. In addition, DHS did not always take timely action to document its consideration of risks identified by its independent verification and validation (IV&V) contractor for potential inclusion on its risk register. For example, the IV&V contractor identified a risk related to inefficiencies in DHS’s document review process in June 2015 that was not included on DHS’s risk register until February 2016. DHS officials indicated that a crosswalk between the DHS risk register and IV&V contractor risk management observations was performed weekly; however, results of these weekly reviews were not documented. Evaluate, categorize, and prioritize risks. 
Risk assessment uses defined categories and parameters to determine the priority of each risk to assist in determining when appropriate management attention is required. Best practices for analyzing risks include categorizing risks according to defined risk categories, evaluating identified risks using defined risk parameters, and prioritizing risks for mitigation. DHS’s processes met this practice. For example, the documented risk management program included application of defined risk categories and parameters for all identified risks, providing a means for reviewing risks and determining the likelihood and severity of risks being realized. The TRIO project’s Joint Risk Management Integrated Project Team provided consistency to the application of parameters by reviewing risk assessments when risks were first identified. By determining exposure ratings for each identified risk, DHS prioritized risks for monitoring and allocation of resources for risk mitigation. Mitigate risks. Key aspects of processes established by DHS and the TRIO components related to the two best practices associated with mitigating risks: Develop risk mitigation plans. Risk mitigation plans are developed in accordance with the risk management strategy and include a recommended course of action for each critical risk. The risk mitigation plan for a given risk includes techniques and methods used to avoid, reduce, and control the probability of risk occurrence; the extent of damage incurred should the risk occur; or both. 
Elements of this practice include determining the levels and thresholds that define when a risk becomes unacceptable and triggers the execution of a risk mitigation plan or contingency plan, identifying the person or group responsible for addressing each risk, determining the costs and benefits of implementing the risk mitigation plan for each risk, developing an overall risk mitigation plan for the work to orchestrate the implementation of individual risk mitigation plans, and developing contingency plans for selected critical risks in the event impacts associated with the risks are realized. DHS partially met this best practice. DHS’s risk management program documentation reflected the development of risk response plans for most risks, including all those determined to be of medium and high exposure level. DHS identified those responsible for addressing each risk. However, DHS and IBC did not always develop sufficiently detailed risk mitigation plans that included specific risk-handling action items, determine the costs and benefits of implementing the risk mitigation plan for each risk, or develop contingency plans for selected critical risks in the event that their impacts are realized. For example, a risk associated with IBC’s capacity and experience for migrating large agencies the size of Coast Guard and TSA was identified in July 2014. Although DHS developed plans to help mitigate this risk, a contingency plan was not developed before the adverse impact of the risk was realized, that is, before it became clear that Coast Guard and TSA would not be implemented on IBC’s modernized solution as planned. Rather, a contingency plan working group (CPWG) to address this and other concerns was established in January 2017, over 2 years after the risk was initially identified. Further, thresholds were not used within the risk management program to define when a risk becomes unacceptable, triggering the execution of a risk mitigation plan or contingency plan. Implement risk mitigation plans.
Risk mitigation plans are implemented to facilitate a proactive program to regularly monitor risks and the status and results of risk-handling actions to effectively control and manage risks during the work effort. Best practice elements include revisiting and reevaluating risk status at regular intervals to support the discovery of new risks or new risk-handling options that can require reassessment of risks and re-planning of risk mitigation efforts. Elements also include providing a method for tracking open risk-handling action items to closure, establishing a schedule or period of performance for each risk-handling activity, invoking selected risk-handling options when monitored risks exceed defined thresholds, and providing a continued commitment of resources for each risk mitigation plan. DHS partially met this best practice. Risk monitoring of the TRIO project consisted of reviews performed by DHS and TRIO component officials responsible for risk management and oversight functions. These reviews considered significant risks, risks approaching realization events, and the effect of management intervention on the resolution of risks. These reviews also relied, in part, on data contained in DHS’s risk register, which represents the official repository of TRIO project risks and information on the status of risks and related risk mitigation efforts. However, other aspects of DHS’s efforts to implement risk mitigation plans did not fully adhere to certain elements associated with this best practice. 
For example, we identified certain issues that raised questions concerning the accuracy of data contained in the risk register, such as (1) the lack of clear markings indicating when the accuracy of data on each risk was last confirmed, including risk records that had not been modified in the previous 3 months, and (2) certain risks for which the estimated risk impact date had already occurred but whose status according to DHS’s risk register did not reflect that the risk had been realized and become an issue. In addition, DHS officials stated that IBC did not provide sufficiently detailed, reliable cost and schedule information that could have been used to monitor TRIO project risks more effectively. DHS’s ability to monitor cost, schedule, and other performance metrics was also limited because of the lack of thresholds for management involvement, as noted above. DHS’s implementation of risk mitigation plans was further limited by other issues, including that (1) a period of performance for each risk-handling activity, which includes a start date and anticipated completion date to control and monitor risk mitigation efforts, was not always established and (2) open risk-handling action items could not be fully tracked to closure because of the lack of sufficient detail on specific risk-handling activities in the DHS risk register. According to DHS officials, DHS relied heavily on IBC to manage risks associated with the TRIO project and, in particular, those for which IBC was assigned as the risk owner. They also acknowledged DHS’s responsibility for overseeing IBC’s TRIO project risk management efforts and described various actions taken to address growing concerns regarding IBC’s efforts. For example, DHS created the Joint Risk Management Integrated Project Team, in part, to provide a forum in which IBC could obtain assistance in developing risk responses and discuss DHS’s risk mitigation concerns.
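The risk-register accuracy checks at issue above (records not recently confirmed, and risks whose estimated impact date has passed without a status update) can be sketched as a simple audit routine. The field names, record values, and the 90-day staleness window are hypothetical assumptions for illustration.

```python
# Hypothetical audit of a risk register for two data-quality problems noted
# above: stale records and overdue impact dates. Fields and values invented.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # roughly the 3-month window discussed above

def register_issues(risks, today):
    """Return (risk id, problem) pairs for records needing attention."""
    issues = []
    for r in risks:
        # Flag records whose data has not been confirmed recently.
        if today - r["last_confirmed"] > STALE_AFTER:
            issues.append((r["id"], "data not confirmed in over 90 days"))
        # Flag risks whose estimated impact date has passed while still open:
        # these may have been realized and become issues.
        if r["impact_date"] <= today and r["status"] == "open":
            issues.append((r["id"], "impact date passed; status not updated"))
    return issues

risks = [
    {"id": "R-01", "status": "open",
     "last_confirmed": date(2016, 1, 15), "impact_date": date(2016, 10, 1)},
    {"id": "R-02", "status": "open",
     "last_confirmed": date(2016, 7, 1), "impact_date": date(2016, 3, 1)},
]
print(register_issues(risks, today=date(2016, 7, 15)))
```

Running such checks on a defined interval, and recording when each record was last confirmed, is one way a risk register can support rather than undermine the monitoring reviews described above.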
Further, to help reduce exposure of underlying risks, DHS offered assistance to IBC’s project management functions, such as developing the integrated master schedule and performing quality control checks on project deliverables. Despite these efforts, DHS officials stated that challenges associated with the IAA structure and terms of the performance work statement with IBC on the TRIO project limited DHS’s visibility into IBC’s overall cost, schedule, and performance controls and ability to oversee IBC’s risk management efforts. For example, they stated that the performance work statement did not specify the level of reporting to be provided by IBC on cost, schedule, and performance in sufficient detail to effectively monitor progress on achieving key project objectives. Further, the limitations to managing risks related to the best practices we assessed as partially met were largely attributable to limitations in DHS and TRIO project guidance and policies. For example, DHS’s Risk Management Planning Handbook and related template for developing risk management plans for projects do not address the need to define thresholds to facilitate review of performance metrics to determine when risks become unacceptable. Also, TRIO project policies did not address the need to periodically revisit consideration of risk sources other than IMS-related milestones, specify periods of performance for specific risk-handling activities, or define an interval for updating and certifying risk statuses. In addition, DHS guidance and TRIO project policies did not describe the need to consider and document risks specifically related to the lack of sufficient, reliable cost and schedule information to properly manage and oversee the project or for timely disposition of risks that its IV&V contractor identified.
Further, TRIO project risk management policies and management tools used to implement them address best practice elements such as determination of the costs and benefits of implementing risk mitigation plans, developing contingency plans, and developing specific risk-handling action items. However, these policies do not require, and the risk register was not designed to specifically capture, these elements in documented risk mitigation plans. By not adopting important elements of risk management best practices into project guidance, DHS and the TRIO components increase the risk that potential problems would not be identified before they occur and that activities to mitigate adverse impacts would not be effectively planned and initiated. Key Factors and Challenges Impacting the TRIO Project and DHS’s Path Forward Although DHS has taken various actions to manage the risks of using IBC for the TRIO project, including some that were consistent with best practices, the TRIO project has experienced challenges raising concerns regarding the extent to which its objectives will be achieved. In connection with these challenges, the TRIO components notified DHS during April 2016 through January 2017 that certain baseline cost and schedule objectives had not been, or were projected to not be, achieved as planned. According to these notifications and DHS officials we interviewed, several key factors and challenges significantly impacted DHS’s and IBC’s ability to achieve TRIO project objectives as intended. In addition, IBC, FIT, and USSM officials identified similar issues impacting the TRIO project. In connection with these challenges, DHS and IBC began contingency planning efforts in January 2017 to identify and assess viable options for improving program performance and addressing key TRIO project priorities. 
Plans for DHS’s path forward on the TRIO project, as of May 2017, involve significant changes, such as transitioning away from using IBC and a 2-year delay in completing Coast Guard and TSA’s migration to a modernized solution. Key Factors and Challenges Impacting the TRIO Project We grouped the key factors and challenges impacting the TRIO project that DHS, IBC, FIT, and USSM officials and OMB staff identified into five broad categories: (1) project resources, (2) project schedule, (3) complex requirements, (4) project costs, and (5) project management and communications. The key factors and challenges related to each category are summarized below. Project resources: Concerns about IBC’s experience and its capacity to handle a modernization project involving agencies the size of Coast Guard and TSA were identified as significant risks in July 2014, resulting from discovery phase efforts completed prior to DHS and IBC’s entering the implementation phase in August 2014. According to DHS officials, status reports, and other documentation, key TRIO project challenges related to resources included concerns that (1) IBC encountered federal employee hiring challenges and was unable to ramp up and deploy the resources necessary to meet required deliverables, and (2) IBC experienced significant turnover of key stakeholders which adversely impacted its ability to achieve TRIO project objectives. In connection with DHS’s decision to use IBC for the TRIO project, DHS officials told us that DHS relied heavily on OMB and Treasury’s designation of IBC as a federal SSP and their related assessment of IBC’s capacity and experience. 
DHS officials also told us that DHS relied on FIT's federal agency migration evaluation model during discovery phase efforts that focused on assessing the functionality of the software rather than assessing IBC's (1) capacity, experience, and capability; (2) ability to address more complex software configurations and interfaces associated with large agencies; and (3) cost, schedule, and performance metrics. DHS officials stated that issues related to IBC's capacity and experience represented the most significant challenge impacting the TRIO project. IBC officials acknowledged that IBC was unable to ramp up its resources until after the project had begun and that the IBC project team experienced significant turnover in key leadership and TRIO project positions over the course of the project. IBC officials also acknowledged that during its early efforts on the TRIO project, assigned IBC staff lacked the experience and expertise necessary for managing large-scale projects and, as a result, many of the risks initially identified were not effectively addressed. FIT and USSM officials and OMB staff also acknowledged that resource challenges significantly impacted the TRIO project. A FIT official acknowledged that assessing software functionality, rather than implementation, was emphasized during the discovery process. Although DHS relied on OMB and Treasury's designation of IBC as a federal SSP, this FIT official also told us that because agencies' specific needs can vary significantly, agencies are responsible for conducting sufficient due diligence to assess a federal SSP's ability to meet their requirements.

Project schedule: DHS, IBC, FIT, and USSM officials acknowledged that migrating the TRIO components to IBC within original time frames was a significant challenge given the overall magnitude and complexity of the TRIO project.
According to DHS officials and TRIO project documentation, DHS identified delays in completing various tasks and milestones, including providing design phase technical documentation and processing proposed change requests; meeting proposed baseline schedules for implementing Coast Guard and TSA on the modernized IBC solution; and achieving initial operating capability requirements and stabilizing the production environment after DNDO's migration to IBC because of various issues related to reporting, invoice payment processing, contract management processes, and resolving help desk tickets in a timely manner. DHS officials also stated that IBC did not consistently update the IMS to ensure that it accurately reflected all required tasks, their completion status, and the resources required to complete them. Concerns related to meeting milestones and updating the IMS were discussed during periodic status update meetings that included DHS, IBC, OMB, FIT, and USSM officials. IBC and DHS officials acknowledged that processes for communicating and resolving issues were not always efficient and contributed to schedule delays. In addition, in November 2016, USSM noted several concerns based on its review of a draft IMS supporting TSA's re-planning efforts to go live in October 2017. USSM's concerns included

- an incomplete project scope and schedule and the need for additional discovery to determine cost and level of effort;
- an extremely aggressive schedule with very limited contingencies, given the lack of interim checkpoints or oversight on tasks exceeding 30 days;
- the need for a resource-loaded IMS that incorporates an appropriate level of detail; and
- the need for an expedited program governance strategy and escalation path that DHS and IBC leadership could use to make program decisions within the time allotted on the schedule.
Complex requirements: DHS, IBC, FIT, and USSM officials acknowledged the overall complexity of the TRIO project and that the lack of a detailed understanding of the components' requirements earlier in the project impacted IBC's and DHS's ability to satisfy the requirements as planned. For example, USSM and FIT officials told us that under the shared services model, the approach for onboarding new customers usually involves migrating to a proven configuration of a solution that is already being used by the provider's existing customers. However, rather than taking this approach, DHS and IBC agreed to implement a more recent version of Oracle Federal Financial software (version 12.2) with integrated contract life cycle and project modules. Under this approach, IBC's plans included migrating other existing customers to this upgraded environment. USSM officials told us that migrating TRIO components to a new solution that required configuring new software and related applications and developing related interfaces introduced additional complexities that contributed to issues on the TRIO project. According to a FIT official, the functionality of this more recent version of the software is very different from that of the version IBC's existing customers used. This official stated that IBC did not have the needed government personnel with knowledge and experience associated with this new software, a condition that likely contributed to the challenges experienced on the TRIO project. IBC officials acknowledged that IBC's lack of familiarity with Oracle 12.2 increased the complexity of the TRIO project. In addition, DHS and IBC perspectives on the need for changes differed because of the lack of clarity regarding TRIO project requirements. DHS officials told us that many change requests on the TRIO project reflected the need for required functionality based on previously stated requirements.
They also told us that they did not consider DNDO-related requirements to be overly complex when compared to those associated with IBC's similarly sized customers. However, DHS officials stated that as of June 2017, IBC had not yet met DNDO's needs to deliver a functioning travel system interface and other requirements. According to IBC officials, TRIO project change requests to address components' requirements were extensive and included significant customizations to meet unique requirements that were not aligned with the federal shared service model. IBC officials noted additional challenges in addressing TRIO project requirements related to DHS's efforts to address certain organizational change management and business process reengineering responsibilities. According to IBC officials, in some instances, the TRIO components provided conflicting requirements related to the same process that would have been more consistent had DHS completed more of its business process reengineering efforts prior to providing them to IBC.

Project costs: According to the July 2014 discovery report, proposed implementation costs for the TRIO project totaled $89.9 million. However, according to DHS officials and TRIO project documentation, estimated costs significantly increased because of schedule delays, unanticipated complexities, and other challenges. In January 2017, DHS prepared a summary of estimated TRIO project implementation costs associated with its IAA with IBC. According to this summary, estimated IBC-related TRIO project implementation costs through fiscal year 2017 increased by approximately $42.8 million (54 percent) from the $79.2 million provided in the original August 2014 IAA with IBC as a result of modifications required, in part, to address challenges impacting the project. DHS officials also expressed concerns regarding increases in estimated operations and maintenance costs for the IBC solution.
For example, according to a December 2016 memorandum to DHS on action items associated with failing to meet the baseline schedule date for initial operational capability, DNDO stated that IBC's updated projected costs of operations and maintenance of its system were unaffordable. In connection with these costs, DHS officials also stated that IBC determined that separate, rather than shared, help desk resources were required to support the TRIO project because it was significantly different from the solution that IBC's existing customers used. As a result, the officials indicated that these costs were more than originally expected. However, IBC officials told us that a portion of the increase in help desk-related costs was also due to DNDO employees not using the system properly because they were not sufficiently trained on it before it was implemented. In addition, challenges impacting the TRIO project have contributed to significant changes in the path forward on the project; as a result, the extent to which overall TRIO project modernization costs will be impacted going forward has not yet been determined.

Project management and communication: According to DHS officials, various program management-related challenges impacted the TRIO project. For example, they expressed concerns regarding the effectiveness of IBC's project management efforts, including cost, schedule, and change management, as well as IBC's allocation of resources and slow decision-making process. They also stated that DHS provided significant time and resources to make up for fundamental project management activities that were under IBC's control and not performed. In addition, DHS officials identified limitations associated with (1) poorly defined service level agreements and program performance metrics, (2) a poor quality control plan, and (3) the lack of mechanisms for measuring delivery and addressing concerns regarding IBC's performance.
DHS officials told us that although various mechanisms can be used to hold commercial vendors accountable—such as cure notices, quality assurance surveillance plans, and incentives or disincentives to monitor performance—few mechanisms are available to hold federal agency service providers accountable for performance concerns. DHS officials also acknowledged challenges in their project management and communication efforts and identified lessons learned to help improve future efforts, including the need to

- establish a performance-based contract to determine objective and enforceable activity level metrics;
- be more prepared for organizational changes;
- improve vendor, project, and schedule management efforts;
- better understand SSP resource plans and monitor SSP efforts to help ensure that sufficient resources are secured in a timely manner; and
- centralize program management for financial system modernization functions, rather than continuing with the structure used on the TRIO project, in which program management offices at the component level performed cost, schedule, and technical monitoring activities while DHS headquarters' involvement focused on governance and oversight, resulting in duplicate efforts across components.

IBC officials acknowledged challenges concerning IBC's lack of sufficient resources and turnover, as described above. However, they told us that DHS's approach to project management often resulted in duplicative meetings and a lengthy decision-making process involving several officials and multiple review and approval processes. According to USSM officials, the TRIO project team focused an unbalanced portion of its efforts on the delivery of technology at the expense of organizational change management, communication management, and other project management areas.
For example, the failure to incorporate lessons learned from DNDO's deployment adversely affected subsequent TRIO project implementation efforts, as change management activities did not address previously encountered risks. An OMB staff member concurred with the lessons learned that DHS identified, including those indicating the need for stronger project management. While the project is ongoing, the OMB staff member noted the importance of DHS having well-defined requirements for the project and better coordination to achieve the desired outcomes.

Significant TRIO Project Changes Resulting from Challenges and Steps Implemented for the Path Forward

In connection with TRIO project challenges, DHS officials told us that IBC notified DHS in April 2016 that it would not be able to meet the planned October 2016 implementation date for TSA. In response, DHS and IBC established the TSA Replan Tiger Team to perform a detailed assessment of potential courses of action. According to DHS officials, DHS and IBC subsequently took various actions to help address these and other challenges impacting the TRIO project, as summarized below.

May 2016: IBC requested additional funding for fiscal year 2016 for 14 additional IBC and contractor personnel to strengthen program coordination and management support. According to DHS officials, DHS provided this requested funding along with additional funding to establish a business integration office to help strengthen cross-organizational communication. DHS determined that plans for migrating TSA and Coast Guard to IBC during the first quarter of fiscal years 2017 and 2018, respectively, were not viable. As a result, their planned migrations were each extended an additional year.

June 2016: DHS and IBC developed a comprehensive remediation plan to track progress on efforts to resolve numerous issues associated with DNDO's production environment that continued to hamper its stability since going live in November 2015.
According to DHS officials, these issues related to invoice payment and interest accruals, contract life cycle management, reporting, and other activities and have required numerous work-arounds to execute business processes.

August to October 2016: DHS, Coast Guard, and IBC determined that a similar replanning effort was needed for Coast Guard's successful migration to IBC. According to DHS officials, IBC indicated that it was unable to simultaneously provide DNDO production and TSA implementation support while also addressing the complexities related to Coast Guard. DHS officials told us that another Tiger Team established to address Coast Guard issues failed to complete the scope of its charter; as a result, Coast Guard was forced to assume a minimum of a 2-year delay (rather than the 1-year delay previously determined in May 2016), and this significantly increased program costs. They further stated that some of the team's deliverables have not been initiated or remain outstanding as of June 2017.

December 2016: IBC communicated to DHS that it cannot support the discovery phase for DHS's CUBE modernization project. In addition, DHS approved the establishment of a Joint Program Management Office to serve as the overarching program management office for DHS financial systems modernization projects. According to DHS officials, using a department-wide approach will enable DHS to more effectively leverage the resources and expertise across all modernization projects.

January 2017: IBC communicated to DHS that it cannot support Coast Guard implementation in October 2018, and DHS and IBC established a joint CPWG to assess viable options for improving program performance and addressing stakeholder concerns and key TRIO project priorities.

February 2017: DHS and IBC issued a joint memorandum to provide an update on contingency planning discussions.
DHS's and IBC's shared commitments and determinations included (1) stabilizing the DNDO production environment and executing TSA implementation activities, (2) delivering the best value for the government and ensuring mutual success to the greatest extent possible, (3) preserving and protecting the current investment, and (4) making TSA implementation the first priority. In addition, DHS and IBC presented two options as representing the best opportunities for success in improving program performance and addressing stakeholder concerns: (1) continue with the status quo plan for Coast Guard implementation in October 2019, with significant improvements to program management and overall support capability and capacity, or (2) platform replacement. Platform replacement was presented as the preferred path toward meeting the needs of both DHS and IBC. Under this option, DHS and IBC would proceed with TSA implementation and work toward an orderly transition of TRIO components to an alternate service provider, hosting location, or both.

March 2017: According to DHS officials, DHS, IBC, and USSM officials met to review certain critical success criteria for TSA's implementation. Based on these discussions, it was determined that TSA would not go live with IBC in fiscal year 2018, given the high-risk schedule and critical criteria involved, and that the Coast Guard implementation would also be delayed accordingly. Further, TSA release 3.0 would be delivered in October 2017 or as soon as possible thereafter. In addition, the CPWG would continue working to identify an alternative path forward, and DHS and IBC would identify and evaluate critical transition activities and timelines.

April 2017: The CPWG recommended moving away from IBC to a commercial service provider leveraging the cloud as the best course of action to complete TRIO project implementation and as the most fiscally responsible approach from a long-term sustainment and cost perspective.
The CPWG’s recommendation was based on its analysis of six options and proposed a transition timeline, including key activities, as shown in figure 3. May 2017: During its May 3, 2017 briefing of the Financial Systems Modernization Executive Steering Committee, DHS indicated that two of the options that the CPWG considered were no longer viable, including the CPWG’s recommendation to transition to a commercial cloud service provider because the software was not yet cloud-ready. DHS ranked the remaining four options using 13 OMB risk factors as selection criteria and determined that migrating the solution to a DHS data center represented the best option going forward. In addition, DHS decided to move forward with discovery efforts related to this option. According to its briefing presentation and DHS officials, the notional timeline of planned key events for the TRIO project included various items, as shown in figure 4. DHS officials indicated that DHS expects to present the findings and recommendations resulting from discovery efforts associated with this new path forward to USSM and OMB for concurrence. As of August 2017, results of this effort were under review by DHS leadership. Conclusions The TRIO project represents a key element of DHS’s efforts to address long-standing deficiencies in its financial management systems and further improve financial management. Following best practices to manage risks effectively can help provide increased assurance that large, complex projects—such as the TRIO project—will achieve planned objectives. DNDO’s AA process substantially met the four characteristics of a reliable, high-quality AOA process. However, Coast Guard’s and TSA’s AAs substantially met one and partially met three of these four characteristics. Further, DHS did not always follow best practices for managing the risks of using IBC for the TRIO project. 
As a result, TRIO components faced an increased risk that the solution they chose would not represent the best alternative for meeting their mission needs and that the risks impacting the TRIO project would not be effectively managed to mitigate adverse impacts. In addition, significant challenges have impacted the TRIO project, raising concerns about the extent to which objectives will be achieved as planned. Plans for DHS's path forward on the TRIO project, as of May 2017, involve significant changes, such as a transition away from IBC and a 2-year delay in completing Coast Guard's and TSA's migration to a modernized solution. Without greater adherence to best practices for analyzing alternatives and managing project risks, DHS continues to face increased risk that its financial management system modernization project will not provide reasonable assurance of achieving its mission objectives.

Recommendations for Executive Action

We are making the following two recommendations to DHS:

The DHS Under Secretary for Management should develop and implement effective processes and improve guidance to reasonably assure that future AAs fully follow AOA process best practices and reflect the four characteristics of a reliable, high-quality AOA process.
(Recommendation 1)

The DHS Under Secretary for Management should improve the Risk Management Planning Handbook and other relevant guidance for managing risks associated with financial management system modernization projects to fully incorporate risk management best practices, including

- defining thresholds to facilitate review of performance metrics to determine when risks become unacceptable;
- identifying and analyzing risks, to include periodically reconsidering risk sources, documenting risks specifically related to the lack of sufficient, reliable cost and schedule information needed to help properly manage and oversee the project, and timely disposition of IV&V contractor-identified risks;
- developing risk mitigation plans with specific risk-handling activities, the costs and benefits of implementing them, and contingency plans for selected critical risks; and
- implementing risk mitigation plans, to include establishing periods of performance for risk-handling activities and defining time intervals for updating and certifying the accuracy and completeness of information on risks in DHS's risk register.

(Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this product to DHS and the Department of the Interior for comment. In its comments, reprinted in appendix IV, DHS concurred with our recommendations and provided details on its implementation of the recommendations as discussed below. In addition, DHS provided technical comments, which we incorporated as appropriate. The Department of the Interior only provided technical comments, which we incorporated as appropriate. DHS stated that it remains committed to its financial system modernization program.
Specifically, regarding our first recommendation to develop and implement effective processes and improve guidance to reasonably assure that future AAs fully follow AOA process best practices and reflect the four characteristics of a reliable, high-quality AOA process, DHS stated that it agrees that effective processes and guidance are necessary to assure best practices. DHS also stated that it is important to note that the GAO-identified best practices were published more than 2 years after the TRIO components' AAs were completed. While this is the case, as discussed in our report, these best practices are based on long-standing, fundamental tenets of sound decision making and economic analysis and were identified by compiling and reviewing commonly mentioned AOA policies and guidance that are known to and have been used by government and private sector entities. DHS also stated that it has already implemented this recommendation through its issuance of guidance and instructions in 2016 and that a copy of this additional guidance and instructions was provided to GAO. However, the documentation provided by DHS does not fully address our recommendation. As part of our recommendation follow-up process, we will coordinate with DHS to obtain additional information on its efforts to address our recommendation. With regard to our second recommendation to improve the Risk Management Planning Handbook and other relevant guidance, DHS stated that it concurred and agreed that the Risk Management Planning Handbook required updating to fully incorporate risk management best practices. In addition, DHS described actions it will take, and has taken, to revise and publish an updated handbook. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, the DHS Under Secretary for Management, the Acting DHS Chief Financial Officer, the Secretary of the Interior, and the Director of the Interior Business Center.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

Appendix I: Scope and Methodology

To determine the extent to which the Department of Homeland Security (DHS) followed best practices in analyzing the alternatives used in choosing the preferred alternative for modernizing TRIO components' financial management systems, we reviewed information that the TRIO components provided as part of their alternatives analysis (AA) process, referred to as the AA body of work, which includes the AA and other supporting documentation that is not specifically included in the AA. In addition, we discussed the DHS AA process with the TRIO components and DHS officials. We evaluated each TRIO component's AA body of work and assessed this information against the 22 GAO-identified analysis of alternatives (AOA) process best practices. We then scored each AA against those best practices. In appendix II, these GAO-identified best practices are described in detail. Our evaluation comprised the following steps: (1) two GAO analysts separately examined the AA information received for each component, providing a score for each of 18 best practices; (2) a third GAO analyst resolved any differences between the two analysts' initial scoring; and (3) a GAO specialist on AOA best practices, independent of the audit team, reviewed the team's AA documentation, scores, and analyses for consistency. The GAO specialist also assessed the four best practices related to cost estimating.
We used the average scores for each best practice to determine an overall score for four summary characteristics—well-documented, comprehensive, unbiased, and credible—of a reliable, high-quality AOA process at each TRIO component. Next, we shared our preliminary analysis with the TRIO components and DHS, and requested their technical comments and any additional information for our further consideration. For those characteristics of the AA process that received a score of partially met or below, we met with TRIO component and DHS officials to discuss potential reasons that an AA did not always conform to best practices. Finally, using the same methodology and scoring process explained above, we performed a final assessment based on our preliminary analysis and the comments and additional information received. The best practices were not used to determine whether DHS made the correct decision in selecting Department of the Interior’s Interior Business Center (IBC) to implement the financial management systems modernization solution or whether the TRIO project would have arrived at a different conclusion had it more fully conformed to these best practices. We also reviewed DHS guidance for conducting AOAs and AAs against the GAO-identified 22 AOA process best practices using the same methodology described above for reviewing the TRIO components’ AAs. In the course of applying these best practices to a TRIO component’s AA and to DHS guidance for the AA process, we assessed the reasonableness of the information we collected. We determined that the information from the DHS AA process was sufficiently reliable to use in assessing the TRIO components’ AAs and DHS guidance against these 22 best practices. 
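The roll-up from individual best-practice scores to the four summary characteristics can be illustrated with a short sketch. The five-point scale and the practice-to-characteristic groupings below are simplified assumptions for illustration, not the exact scale or weights used in the assessment.

```python
# Illustrative roll-up: best-practice scores are averaged within each of
# the four summary characteristics, and the average is mapped back to a
# rating label. Scale values and groupings are hypothetical.
SCALE = {1: "Does Not Meet", 2: "Minimally Meets", 3: "Partially Meets",
         4: "Substantially Meets", 5: "Fully Meets"}

# Hypothetical per-practice scores grouped by characteristic.
practice_scores = {
    "well-documented": [4, 5, 4, 4],
    "comprehensive":   [3, 4, 3],
    "unbiased":        [5, 4],
    "credible":        [3, 3, 4],
}

def characteristic_rating(scores):
    """Average the practice scores and map to the nearest rating label."""
    avg = sum(scores) / len(scores)
    return SCALE[round(avg)], avg

for name, scores in practice_scores.items():
    label, avg = characteristic_rating(scores)
    print(f"{name:15s} avg={avg:.2f} -> {label}")
```

For example, averaging hypothetical scores of 4, 5, 4, and 4 yields 4.25, which rounds to a "Substantially Meets" rating for that characteristic.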
To determine the key factors, metrics, and processes used by the TRIO components in developing and evaluating DHS’s alternative solutions and final choice for financial system modernization, we reviewed each component’s AA, including a description of (1) the alternatives considered, (2) the market research conducted, (3) the three alternatives evaluated, (4) the selection criteria used and how the criteria were weighted, (5) how each alternative scored against the selection criteria, and (6) the alternative that scored the best according to the component’s evaluation. To determine the extent to which DHS managed the risks of using IBC consistent with risk management best practices, we reviewed DHS’s and TRIO components’ risk management guidance and other documentation supporting their risk management efforts, including risk registers, mitigation plans, status reports, and risk management meeting minutes. We also met with officials to gain an understanding of the key processes and documents used for managing and reporting on TRIO project risks. We assessed the processes against best practices that the Software Engineering Institute (SEI) identified. The practices we selected are fundamental to effective risk management activities. These practices are identified in SEI’s Capability Maturity Model® Integration (CMMI®) for Acquisition, Version 1.3. In particular, the key best practices for preparing for risk management are determine risk sources and categories, define risk parameters, and establish a risk management strategy. The key best practices for identifying and analyzing risks are evaluate, categorize, and prioritize risks. The key best practices for mitigating identified risks are develop risk mitigation plans and implement risk mitigation plans. We applied the criteria from the CMMI risk management process area to determine the extent to which the expected practices were implemented, or future activities were planned for, by the program office. 
The rating system we used is as follows: (1) meets, or generally satisfies all elements of the specific practice; (2) partially meets, or generally satisfies a portion of specific practice elements; and (3) does not meet, or does not satisfy specific practice elements. In the context of the best practices methodology, we assessed the reliability of TRIO project risk data contained in DHS's risk register. We interviewed officials on how the risk register was developed and maintained, including key control activities used to provide reasonable assurance of the accuracy of the information reported in the register. We reviewed DHS's July 2016 risk register and minutes from risk management committee meetings (one meeting per quarter, randomly selected). Of the 120 TRIO project risks on the July 2016 risk register, we found 13 risks with missing data. Of the 47 active risks identified, 28 risk records had not been modified in the previous 3 months, and the register did not indicate when their accuracy was last confirmed; 35 risks were beyond their indicated impact dates but had not been marked as issues. We concluded that the pervasiveness of these data reliability problems decreased the usefulness of the risk register in connection with managing TRIO project risks. To determine the key factors or challenges that have impacted the TRIO project and DHS's plans for completing remaining key priorities, we met with DHS, IBC, Office of Financial Innovation and Transformation, and Unified Shared Services Management office officials and Office of Management and Budget staff to obtain their perspectives. In addition, we reviewed documentation provided by these officials, including TRIO project status reports and memorandums, leadership briefings, and other presentations. We conducted this performance audit from March 2016 to September 2017 in accordance with generally accepted government auditing standards.
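The three register-quality conditions examined in this methodology (missing data, active records not modified within 3 months, and risks past their impact dates but not marked as issues) are straightforward to automate. The sketch below assumes a register with hypothetical field names and sample records; it is illustrative, not DHS's actual register schema.

```python
# Illustrative automation of three risk-register quality checks:
# missing fields, stale active records, and overdue risks never
# escalated to issues. Field names and sample data are assumptions.
from datetime import date, timedelta

TODAY = date(2016, 7, 31)          # review date used in this example
STALE_AFTER = timedelta(days=90)   # "not modified in the previous 3 months"

register = [
    {"id": "R-001", "status": "Active", "owner": "PMO",
     "last_modified": date(2016, 7, 1), "impact_date": date(2016, 10, 1)},
    {"id": "R-002", "status": "Active", "owner": None,   # missing data
     "last_modified": date(2016, 2, 1), "impact_date": date(2016, 5, 1)},
]

def check_register(risks, today=TODAY):
    """Return the IDs of records failing each quality check."""
    findings = {"missing_data": [], "stale": [], "overdue_not_issue": []}
    for r in risks:
        if any(v is None for v in r.values()):
            findings["missing_data"].append(r["id"])
        if r["status"] == "Active" and today - r["last_modified"] > STALE_AFTER:
            findings["stale"].append(r["id"])
        if r["impact_date"] < today and r["status"] != "Issue":
            findings["overdue_not_issue"].append(r["id"])
    return findings

print(check_register(register))
```

Running such checks on each register snapshot would surface reliability problems of the kind described above before they accumulate.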
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Best Practices for the Analysis of Alternatives Process Many guides describe an approach to an analysis of alternatives (AOA); however, there is no single set of practices for the AOA process that has been broadly recognized by both government and private sector entities. GAO has previously identified 22 best practices for an AOA process by (1) compiling and reviewing commonly mentioned AOA policies and guidance used by different government and private sector entities and (2) incorporating experts’ comments on a draft set of practices to develop a final set of practices. These practices are based on longstanding, fundamental tenets of sound decision making and economic analysis. In addition, these practices can be applied to a wide range of activities in which an alternative must be selected from a set of possible options, as well as to a broad range of capability areas, projects, and programs. These practices can provide a framework to help ensure that entities consistently and reliably select the project alternative that best meets mission needs. The guidance below is intended as an overview of the key principles that lead to a successful AOA process, not as a “how to” guide with detailed instructions for each best practice identified. The 22 best practices that GAO identified are grouped into the following five phases: 1. Initialize the AOA process: Includes best practices that are applied before starting the process of identifying, analyzing, and selecting alternatives.
This includes determining the mission need and functional requirements, developing the study time frame, creating a study plan, and determining who conducts the analysis. 2. Identify alternatives: Includes best practices that help ensure that the alternatives to be analyzed are sufficient, diverse, and viable. 3. Analyze alternatives: Includes best practices that compare the alternatives to be analyzed. The best practices in this category help ensure that the team conducting the analysis uses a standard, quantitative process to assess the alternatives. 4. Document and review the AOA process: Includes best practices that would be applied throughout the AOA process, such as documenting all steps taken to initialize, identify, and analyze alternatives and to select a preferred alternative in a single document. 5. Select a preferred alternative: Includes a best practice that is applied by the decision maker to compare alternatives and to select a preferred alternative. The five phases address different themes of analysis necessary to complete the AOA process and span it from the beginning (defining mission needs and functional requirements) through the final step (selecting a preferred alternative). We also identified four characteristics that relate to a reliable, high-quality AOA process—that the AOA process is well-documented, comprehensive, unbiased, and credible. Table 4 shows the four characteristics and their relevant AOA best practices. Conforming to the 22 best practices helps ensure that the preferred alternative selected is the one that best meets the agency’s mission needs. Not conforming to the best practices may lead to an unreliable AOA process, and the agency will not have assurance that the preferred alternative best meets mission needs. Appendix III: GAO Assessment of TRIO Components’ Alternatives Analyses The Department of Homeland Security’s TRIO components—the U.S.
Coast Guard (Coast Guard), Transportation Security Administration (TSA), and Domestic Nuclear Detection Office (DNDO)—conducted alternatives analyses (AA) during 2012 and 2013 to determine the best alternative for transitioning to a modernized financial management system solution. We evaluated the TRIO components’ AA processes against analysis of alternatives (AOA) best practices GAO identified as necessary characteristics of a reliable, high-quality AOA process (described in app. II). GAO’s assessment of the extent to which Coast Guard’s, TSA’s, and DNDO’s AAs met each of the 22 best practices is detailed in tables 5, 6, and 7. Appendix IV: Comments from the Department of Homeland Security Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, James Kernen (Assistant Director), William Brown, Courtney Cox, Eric Essig, Valerie Freeman, Matthew Gardner, Jason Lee, Jennifer Leotta, and Madhav Panwar made key contributions to this report.
Why GAO Did This Study To help address long-standing financial management system deficiencies, DHS initiated its TRIO project, which has focused on migrating three of its components to a modernized financial management system provided by IBC, an OMB-designated, federal SSP. House Report Number 3128 included a provision for GAO to assess the risks of DHS using IBC in connection with its modernization efforts. This report examines (1) the extent to which DHS and the TRIO components followed best practices in analyzing alternatives, and the key factors, metrics, and processes used in their choice of a modernized financial management system; (2) the extent to which DHS managed the risks of using IBC for its TRIO project consistent with risk management best practices; and (3) the key factors and challenges that have impacted the TRIO project and DHS's plans for completing remaining key priorities. GAO interviewed key officials, reviewed relevant documents, and determined whether DHS followed best practices identified by GAO as necessary characteristics of a reliable, high-quality AOA process and other risk management best practices. What GAO Found The Department of Homeland Security's (DHS) TRIO project represents a key effort to address long-standing financial management system deficiencies. During 2012 and 2013, the TRIO components—U.S. Coast Guard (Coast Guard), Transportation Security Administration (TSA), and Domestic Nuclear Detection Office (DNDO)—each completed an alternatives analysis (AA) to determine a preferred alternative for modernizing its financial management system. GAO found that DNDO's AA substantially met the four characteristics—well-documented, comprehensive, unbiased, and credible—that GAO previously identified for a reliable, high-quality analysis of alternatives (AOA) process. 
However, Coast Guard's and TSA's AAs did not fully or substantially meet three of these characteristics, and DHS guidance for conducting AAs did not substantially incorporate certain best practices, such as identifying significant risks and mitigation strategies and performing an independent review to help validate the AOA process. Based on these analyses and other factors, the TRIO components determined that migrating to a federal shared service provider (SSP) represented the best alternative, and in 2014, DHS selected the Department of the Interior's Interior Business Center (IBC) as the federal SSP for the project. However, because Coast Guard's and TSA's AAs did not fully or substantially reflect all of the characteristics noted above, these components are at increased risk that the alternatives selected may not achieve mission needs. DHS also did not fully follow best practices for managing project risks related to its use of IBC on the TRIO project. Specifically, DHS followed three of seven risk management best practices, such as determining risk sources and categories and establishing a risk management strategy. However, it did not fully follow four best practices for defining risk parameters, identifying risks, developing risk mitigation plans, and implementing these plans largely because its guidance did not sufficiently address these best practices. For example, although DHS created joint teams with IBC and provided additional resources to IBC to help address risk mitigation concerns, it did not always develop sufficiently detailed risk mitigation plans that also included contingency plans for selected critical risks. As a result, although IBC's capacity and experience for migrating large agencies the size of Coast Guard and TSA was identified as a risk in July 2014, a contingency plan working group to address this concern was not established until January 2017.
By not fully following risk management best practices, DHS is at increased risk that potential problems may not be identified or properly mitigated. DHS, IBC, Office of Management and Budget (OMB), and other federal oversight agencies identified various challenges that have impacted the TRIO project and contributed to a 2-year delay in the implementation of Coast Guard's and TSA's modernized solutions. These challenges include the lack of sufficient resources, aggressive schedule, complex requirements, increased costs, and project management and communication concerns. To help address these challenges, DHS and IBC established review teams and have taken other steps to assess potential mitigating steps. In May 2017, DHS determined that migrating the solution from IBC to a DHS data center represented the best option and initiated discovery efforts to further assess this as its path forward for the TRIO project. What GAO Recommends GAO recommends that DHS more fully follow best practices for conducting an AOA process and managing risks. DHS concurred with GAO's recommendations and described actions it will take, or has taken, in response.
Background States operate and administer several types of private school choice programs. This report focuses exclusively on vouchers and ESAs. Vouchers: These programs generally provide interested parents with funds for tuition at a participating private school. The first voucher program began in 1990. ESAs: These programs are typically designed to fund a broader set of educational expenses, such as online learning programs, private tutoring, or education therapies. The first ESA program began in 2011. The size of voucher and ESA programs varies widely (see fig. 1 and appendix II for more details). In school year 2016-17, student participation in individual programs ranged from fewer than 10 to more than 34,000, for a total of 181,624 students across all programs. Design of Private School Choice Programs States establish the eligibility criteria for students to participate in choice programs as well as any accountability requirements for participating private schools. As noted in our prior work, these requirements can vary considerably across states. Eligibility criteria: Almost all private school choice programs use a student’s disability status or family income as eligibility criteria (see table 1). Accountability mechanisms: For purposes of this report, we define accountability mechanisms as requirements that private school choice programs place on private schools as a condition for participation. These mechanisms act as minimum participation requirements for private schools (see fig. 2). See appendix II for more details on accountability mechanisms by program. Individuals with Disabilities Education Act IDEA Part B requires each state to ensure that a free appropriate public education (FAPE) is made available to all eligible children with disabilities. An eligible child with a disability in a public school setting, or placed in a private school by a public agency as a means of providing special education and related services, is entitled to FAPE. 
FAPE means special education and related services that (1) have been provided at public expense, under public supervision, and without charge; (2) meet the standards of the state educational agency, including the requirements of IDEA; (3) include an appropriate preschool, elementary school, or secondary school education in the state involved; and (4) are provided in conformity with an individualized education program (IEP). When a parent of a child with a disability chooses to enroll their child in a private elementary or secondary school, whether or not through a private school choice program, that child is considered a “parentally placed” private school child under IDEA. A school district’s obligations to parentally placed private school children with disabilities are not as extensive as those for children enrolled in public schools or for children with disabilities placed in a private school by a public agency, according to Education documents. Under IDEA, a child with a disability who is parentally placed in a private school does not have a right to FAPE, or an individual right to receive some or all of the special education and related services that the child would be entitled to receive if enrolled in a public school. However, parentally placed children must be included in the population whose needs are considered for services under IDEA’s “equitable services” provisions. See table 2 for a summary of key differences in rights under IDEA for children with disabilities in public school and private school. The Role of the U.S. Department of Education Education has two offices that can address questions and provide information related to parentally placed private school children with disabilities, including those in private school choice programs. Education’s Office of Special Education and Rehabilitative Services (OSERS) administers IDEA in all of its aspects. 
It also supports programs that help educate children and youth with disabilities, including developing and distributing evidence-based products, publications, and resources to help states, local school district personnel, and families improve results for children with disabilities. Education’s Office of Non-Public Education (ONPE) fosters maximum participation of nonpublic school students and teachers in federal education programs and initiatives. ONPE’s activities include providing parents with information regarding education options for their children, and providing technical assistance, workshops, and publications to states, school districts, private schools, and other education stakeholders. State Private School Choice Programs Emphasize Different Accountability Mechanisms and Approaches to Monitoring Most Programs Have Academic and Administrative Accountability Mechanisms; Fewer Have Financial Accountability Mechanisms Academic Accountability Mechanisms Most private school choice programs have academic accountability mechanisms, which can include requirements for participating private schools to administer tests, report testing results, obtain accreditation, and teach core subjects, according to our analysis of information from program documents and officials. (See fig. 3.) We found that testing is the most common academic accountability mechanism in private choice programs, and that programs design this requirement in different ways. Academic testing and reporting requirements can help the public compare the academic achievement of private school choice students with students in public schools. Two-thirds of private choice programs (18 of 27)—which represented 78 percent of all students participating in voucher and ESA programs in school year 2016-17—require private schools to test voucher or ESA students. 
Of the 18 programs that require testing, nine programs require participating schools to administer their state’s standardized test and six require schools to administer some type of norm-referenced test. See appendix II for more information on testing requirements by program. Private schools appeared to have mixed experiences implementing the testing requirements in private school choice programs. For example, officials from four of the six programs we examined in depth noted that most private schools in their programs did not experience challenges administering the testing requirements, and said that many private schools had testing practices in place before joining the programs. However, officials in two programs also said that some private schools were unfamiliar with or unequipped to administer standardized tests. Officials from several state and national private school choice organizations also told us that smaller private schools sometimes lack the staff and budgets to administer standardized tests. One-third (9 of 27) of programs require that schools publicly report test results, including three of the four largest voucher programs—Wisconsin’s Milwaukee Parental Choice Program, Indiana’s Choice Scholarship Program, and Ohio’s EdChoice Scholarship Program—which publicly report test results via online systems. However, in our interviews, officials from two voucher programs noted some private schools experienced challenges administering standardized tests or providing the program offices with data. For example, according to officials in one program, most private schools did not have systems for administering the state’s standardized tests electronically. Officials in another program also noted that protecting student privacy in small private schools can be challenging. Few of the 15 choice programs that are designed specifically for students with disabilities have accountability mechanisms related to special education and related services. 
For example, Arkansas’s Succeed Scholarship Program requires schools to meet accreditation requirements for providing services to severely disabled individuals. Mississippi’s Dyslexia Therapy Scholarship for Students with Dyslexia Program requires schools to provide a specific learning environment for dyslexia therapy; and Louisiana’s School Choice Program for Certain Students with Exceptionalities requires schools to provide special education services for at least 2 years prior to joining the program. Administrative Accountability Mechanisms Most private school choice programs have some administrative accountability mechanisms, and these varied across programs, according to our analysis of information from program documents and officials. Administrative accountability mechanisms include requirements that participating private schools employ teachers, paraprofessionals, and/or specialists who have minimum qualifications, conduct background checks on employees, comply with state and local health and safety standards, and comply with site visits by program officials. (See fig. 4.) Most programs (25 of 27) require participating private schools to comply with state and local health and safety standards. Eight of the 25 programs rely on other state agencies to oversee the safety of school facilities rather than impose separate health and safety requirements on participating schools. In addition, about half of all voucher and ESA programs (17 of 27)—including three of the largest programs, which represented 73 percent of all students participating in voucher and ESA programs in school year 2016-17—require participating private schools to conduct background checks on all employees, or all employees with direct and unsupervised contact with children. About two-thirds (19 of 27) of programs require participating private schools to employ teachers and other staff with specific qualifications or credentials. 
For example, 13 programs require teachers to have a degree and/or state teaching license. Other programs, such as Florida’s John M. McKay Scholarships for Students with Disabilities Program, require private schools to employ teachers with either a bachelor’s degree, three years of experience, or specific credentials or special skills, knowledge, or expertise to provide instruction in certain subjects. Similarly, about half (14 of 27) of programs require schools to hire paraprofessionals and/or specialists with specific qualifications or credentials. More than half (15 of 27) of programs require site visits to participating private schools, and program officials we interviewed described various ways of implementing this requirement. For example, officials in three programs told us they conduct site visits to verify information submitted by participating private schools. Officials in one program noted that site visits are routine for entities that receive state funds; officials coordinate with the school beforehand, meet with the principal and staff, and perform spot checks on student files. Some program officials told us they also monitor participating schools using risk-based school reviews, requesting graduation rates, or by requiring schools to meet an attendance rate benchmark. Financial Accountability Mechanisms Although financial accountability mechanisms are the least common mechanisms used by private choice programs, more than half of programs had at least one such requirement. (See fig. 5.) Just over half (15 of 27) of programs require private schools to provide proof of fiscal soundness in order to participate. Most of these programs give private schools two options: schools must either submit proof they have been in operation for a specified length of time (ranging from 1 to 5 years) or provide a surety bond to the state to insure against any losses. For example, in Florida’s John M. 
McKay Scholarships for Students with Disabilities Program, schools must have been operating for at least 3 years or provide the Florida Department of Education with a surety bond or letter of credit equal to the amount of voucher funds the private school receives quarterly. Less than a third (8 of 27) of programs—which represented fewer than a quarter of all students participating in voucher and ESA programs in school year 2016-17—require participating schools to provide annual audits. Officials in two programs we examined in depth described concerns about the limited financial accountability provisions in their programs’ statutes. In one program with no financial accountability mechanisms, program officials said they would prefer to have the authority to remove private schools with financial issues from the program. Similarly, officials in the other program stated that they had some concerns about the financial stability of some of their participating schools but do not have authority to deny participation in the program based on financial criteria. In addition, all ESA programs, which generally provide funds directly to eligible individuals, have financial accountability mechanisms for parents. For example, Florida’s Gardiner Scholarship Program is administered by two organizations that review parents’ expenditures for compliance with program requirements and reimburse parents accordingly. Certain categories of purchases are pre-approved, but generally approvals are made on a case-by-case basis. In contrast, Arizona’s ESA program provides parents with a debit card for educational purchases. Parents are expected to use the debit card appropriately and retroactively submit itemized expense reports to the program each quarter. If program staff reviewing expenditures find any that do not meet statutory requirements, families are directed to reimburse the program. 
Private Choice Programs Described Various Approaches to Monitoring Participating Schools The six private choice programs we examined in depth took various approaches to monitoring participating schools’ compliance with their programs’ academic, administrative, and financial accountability requirements. Officials from several of these programs also described coordinating with accrediting agencies, other state departments, and independent auditors to help monitor private schools and ensure quality and safety. For example, officials in one program told us they received a number of complaints about a lack of adult supervision at a participating private school and asked local Child Protective Services to intervene. Program officials in two states said they use their state’s private school accreditation process to help enforce program accountability requirements because private schools must be accredited to participate. Programs often require participating schools to attest to meeting accountability requirements, although some program officials said they have limited resources to independently verify this information. For example, program officials in one state said they have limited resources to independently verify the information submitted by schools in their annual applications because processing voucher payments takes priority. Program officials in another state said financial constraints prevented them from visiting all of the schools that were flagged for not complying with program requirements last year. Finally, some program officials we spoke to told us that their states provide programs with limited authority to intervene with participating private schools when there are concerns. For example, officials in one program described being concerned that a particular school’s buildings were unsafe. 
However, they said that the choice program’s statute does not contain requirements related to the safety of participating schools, and the city must issue a safety notice before program staff could remove the school from the program. Private School Choice Programs and Participating Schools Provide a Range of Information to the Public and Prospective Families Most Private School Choice Programs Provide Directories of Participating Schools, Which Include Varying Information Almost all private school choice programs provide a directory of participating private schools for the public and prospective families, although the information included—and the way it is provided—varies. For example, 21 of 27 programs provide contact information, and 20 programs provide information on grades served. Far fewer programs provide information on school accreditation status (6 programs), student race and ethnicity data (5 programs), and graduation rates (4 programs). (See fig. 6.) Parents we interviewed had mixed responses about the information provided by private school choice programs and reported using other sources of information as well. Some parents mentioned using private choice program websites as key sources of information to identify and narrow their school choice options, while other parents said they wished that the programs would provide more information to help them consider potential schools. Parents also reported consulting family, friends, and other trusted community members or advisers and conducting internet searches when making school decisions. Along with directories, some private school choice programs provide additional guidance for parents on their websites. Just over one-third (10 of 27) of private school choice programs—serving 65 percent of students in choice programs—provide guidance to parents on how to choose a school. 
For example, the Ohio Department of Education Scholarship program office and Indiana Department of Education provide a checklist of questions parents might ask potential schools. These suggested questions include admission requirements, tuition and other costs, and discipline policies. We also found that one state—Florida—provides a link on its website to a federally-created decision tool on choosing schools. This tool, developed by Education, is designed to help families navigate the process of choosing a school, and includes questions that parents may want to ask as well as a discussion of school choice options. However, the document was last updated in 2007 and does not reference the Elementary and Secondary Education Act of 1965, as amended by the Every Student Succeeds Act. It also has few questions tailored to parents of students with disabilities or about special education/disability services and accommodations in educational settings. During the course of our review, Education officials said they were in the process of reviewing and determining whether to update existing guidance, including this document. As part of this review, Education officials said they plan to issue an updated version of the document in 2018 and may consider including additional questions for parents of students with disabilities. We found that only 3 of the 15 programs designed for students with disabilities provide guidance on their websites on making informed school choice decisions that is specifically tailored to these families. For example, Georgia’s guidance recommends families ask how a school will accommodate their child’s needs and Tennessee’s guidance advises parents to consider whether the school provides inclusive educational settings. 
Private Schools Participating in Voucher Programs Provide Varying Information on Their Websites; Most Websites Do Not Have Information Related to Special Education Services Much like the private school choice programs in which they participate, private schools vary in the information they provide to the public and prospective families on their websites. In our review of a nationally representative sample of 344 websites of private schools participating in the 23 voucher programs operating as of January 2017, we found notable differences in the type and amount of information on the sites (see fig. 7). For example, we estimate that 85 percent of participating private schools describe their curriculum or teaching philosophy on their websites; 68 percent indicate how long the school has been in operation; 27 percent provide information on the number of students attending the school; and 13 percent provide information on student performance on standardized tests. In addition, we estimate that no more than 21 percent of private schools participating in a voucher program specifically designed for students with disabilities provide certain types of special education/disability-related information on their websites that might be of interest to prospective families choosing a school for their student with a disability, such as the types of disabilities the school serves and the disability-related services it offers. (See fig. 8.) Parents we interviewed described attempts to enroll their student with disabilities in multiple schools before finding one that would admit their child or that was the right fit for their child's needs. Several parents described the process of finding the right school for their children as trial and error.
Lack of information can result in parents discovering key information about a school only after enrolling their child. For example, a family who has a student currently enrolled in a private school choice program told us they wished they had known that they would be charged for some of the special education services the private school was providing to their child. One family told us they were surprised to learn that teachers providing special education services to their child were not trained to provide those services, and another parent described changing schools because they learned only after enrolling their child that aspects of the child's disability could not be accommodated. Private Choice Programs Do Not Provide Consistent Information about Changes in IDEA Rights When a Parent Moves a Child from Public to Private School, and IDEA Does Not Require That Parents Be Notified Parents of Students with Disabilities May Not Be Consistently or Correctly Notified about IDEA Rights upon Enrolling in Choice Programs When a parent moves a child with a disability from public school to a private school, the child's rights under IDEA change. Specifically, when a child with a disability is enrolled in a private school by his or her parents or guardians (i.e., a parentally placed private school student), regardless of participation in a private school choice program, the child is no longer entitled to FAPE and other key rights and protections under IDEA. There is no requirement under IDEA or in Education's regulations that parents be told about this change in rights to services when enrolling their children in private schools. Private school choice programs are not consistently providing information on changes in rights under IDEA when a child with a disability moves from public to private school, and some programs are providing incorrect information.
Specifically, in our review of information provided by all 27 private school choice programs in operation as of January 2017, we found that 9 of the 27 programs did not provide any information about these changes in rights. Moreover, among the 15 programs specifically for students with disabilities, we found that 4 programs provided no information about changes in rights under IDEA when a child with a disability moves from public to private school. As shown in figure 9, these 4 programs enrolled the majority of students participating in disability choice programs in school year 2016-17 (73 percent). Another 5 of these programs—which enrolled 10 percent of students participating in disability choice programs in school year 2016-17—provided information that included inaccurate statements about rights under IDEA, as confirmed by Education officials. Some of these inaccuracies were related to IDEA’s “equitable services” provisions, under which parentally placed private school students with disabilities may be eligible to receive federally funded equitable services. Education officials reiterated that IDEA does not require states to provide notification about changes in disability rights when a parent moves a child from a public school to a private school. However, federal internal control standards state that agencies should provide quality information to external stakeholders. In addition, Education officials stated that, in the past, when the agency has been aware of cases where states are providing inconsistent or inaccurate information, the agency has worked with states to correct the information in order to avoid further dissemination of inaccurate information. 
Education Recommends but Does Not Require That Parents Be Notified of Changes in Rights

Education does not require states or districts to notify parents of key changes in disability rights when a parent moves their child from public to private school, but the agency has recommended that states and districts notify parents of these changes. Specifically, in 2001, Education issued a document—which Education refers to as a policy letter—stating that "in order to avoid parental misunderstanding, the Department strongly recommends that the state and local educational agency notify parents who choose private school placement under [a private school choice program] that the students may retain certain rights under Section 504 and Title II of the ADA, although the student will not be entitled to a free appropriate public education under IDEA, while enrolled in the private school." In addition, while Education has issued guidance documents explaining the obligations of states and school districts under IDEA to ensure the equitable participation of parentally placed private school children with disabilities, Education has not developed guidance or other documents that could serve as specific notification to parents of changes in IDEA rights when a parent moves a child with a disability from public to private school. When we asked Education officials about this issue, they reiterated that IDEA does not require such notification, and referred us to two publications by ONPE and OSERS regarding the equitable participation requirements in IDEA that apply to parentally placed private school children. The first is a 2011 ONPE document, titled The Individuals with Disabilities Education Act: Provisions Related to Children with Disabilities Enrolled by Their Parents in Private Schools. The second is a 2011 OSERS document, titled Questions and Answers on Serving Children with Disabilities Placed by Their Parents in Private Schools (revised April 2011).
While these documents explain how children's rights under IDEA are affected when parents place their child in a private school, they do not specifically address key IDEA rights and protections—such as discipline procedures and the least restrictive environment requirements—that do not apply when a student with a disability is moved from a public school to a private school by their parent. Further, these documents do not include the agency's prior recommendation on parental notification, or provide sample language that stakeholders could use to notify parents of these changes in rights. Education also noted that under IDEA and its regulations, a notice of IDEA procedural safeguards must be provided to parents at least once a year and at other specified times; however, this notice is not required to inform parents that a child parentally placed in a private school is not entitled to FAPE and that these key rights and protections no longer apply. A wide variety of stakeholders, including officials from national school choice and disability organizations, private school choice programs, and Education told us that parents in private choice programs do not always understand that they will not have all of the same IDEA rights and protections when moving their children from public to private school. For example, some stakeholders said that confusion arises because parents are under the impression that since school choice programs are operated and funded by the state, and are often designed for students with disabilities, their children will have similar protections to those ensured to public school children under IDEA. Other stakeholders told us that because private schools sometimes request a copy of a student's IEP, parents can mistakenly assume that the private school will provide the services and accommodations outlined in the document.
Among the 17 families we interviewed, their views ranged from not being concerned about possible changes in rights—because they felt their students were not being served well in public schools—to echoing the stakeholder concerns described above. These 17 families also had differing understandings of the change in disability rights when enrolling their students in private school choice programs. For example, some families we interviewed said they were not aware that some of the disability services and therapies provided at private schools came at additional costs, because these services at public schools were provided free of charge. Parents of children with physical disabilities said they were surprised that some private schools, including schools for students with disabilities, were not accessible for children with physical disabilities. Education officials told us that IDEA does not provide the department with statutory authority to require states and school districts to give parents notice that IDEA rights and protections—such as discipline procedures and least restrictive environment requirements—do not apply when a student with a disability is moved from public to private school by a parent. Absent a requirement that states notify parents about changes in key federal special education rights when a child is moved from public to private school by their parents, states may inconsistently provide information, contributing to confusion about the change in key federal disability rights and protections.

Conclusions

In the past decade, school choice options, including private school choice programs, have expanded across the country, providing more education alternatives for students and families, and this trend is expected to continue. School choice places more responsibility and decision making in the hands of parents, increasing the importance of high quality information to help parents make informed decisions.
As more than half of the current private school choice programs are designed specifically for students with disabilities, it is critical that parents have access to quality information about changes in special education rights when they are considering moving their child from public to private school. Although Education has strongly recommended that states and districts notify parents that IDEA rights change when they move their parentally placed child from public to private school, in 2016-17, more than 80 percent of students in private choice programs designed for students with disabilities were enrolled in a program that either provided no information about changes in IDEA rights or provided some inaccurate information about these changes. Absent a requirement in IDEA that states notify parents of such changes, states are unlikely to begin providing parents with consistent and accurate information about changes that affect some of our nation's most vulnerable children.

Matter for Congressional Consideration

Congress should consider requiring that states notify parents/guardians of changes in students' federal special education rights when a student with a disability is moved from public to private school by their parent.

Recommendation for Executive Action

The Assistant Secretary for Special Education and Rehabilitative Services should review information provided by states related to changes in federal special education rights when a parent places a student with a disability in a private school and work with states to correct inaccurate information.

Agency Comments and Our Evaluation

We provided a draft of this report to Education for review and comment. Education's comments are reproduced in appendix III. Education also provided technical comments, which we incorporated as appropriate.
Education generally agreed with our recommendation to correct inaccurate information provided by states related to changes in federal special education rights when a parent places a student with a disability in a private school. During the course of our review, Education confirmed that five private school choice programs provided information that included inaccurate statements about rights under IDEA. However, Education stated that the department believes it is necessary to review the full documents containing information provided by states, so that it can determine the context in which the information was presented. We will coordinate with Education as appropriate to facilitate such a review. Reviewing and evaluating the information provided by states are important first steps. However, we continue to believe that it is critical that Education take the next step to work with states to correct any inaccurate information about the rights of students with disabilities under IDEA being provided by private school choice programs. Our draft report also included a recommendation for Education to require states to notify parents/guardians of changes in students’ federal special education rights, including that key IDEA rights and protections do not apply when a student with a disability is moved from public to private school by their parent. In response, Education stated that IDEA does not include statutory authority to require such notice, and suggested that the department instead encourage states to notify parents. However, as noted in our draft report, Education already strongly encourages states and school districts to provide such notice. Despite these efforts, we found that in 2016-17, more than 80 percent of students nationwide who are enrolled in private choice programs designed for students with disabilities were enrolled in a program that either provided no information about changes in IDEA rights, or provided some inaccurate information about these changes. 
We therefore continue to believe that states should be required, not merely encouraged, to notify parents/guardians about key changes in federal special education rights when a parent moves a child with a disability from public to private school. To this end, we have converted our recommendation into a Matter for Congressional Consideration to require such notice. In its comments, Education stated that the draft report title could be improved. Because, in the final report, we issued the Matter for Congressional Consideration discussed above, we have revised the title to reflect that federal actions are needed to ensure parents are notified about key changes in rights for students with disabilities. Education also inaccurately asserted that statements in the draft report about the availability of information for parents are based on limited reviews and small samples. As stated in the draft report, our findings about the information for parents are derived from two sources: private school choice programs and private schools participating in these programs. Our findings about information provided by private school choice programs are based on a comprehensive review of all 27 voucher and educational savings account programs operating in the United States during the 2016-17 school year. In addition, as noted in the draft report, we verified these findings with officials from each of these programs. Our findings about the information that participating private schools make available are based on a nationally representative, generalizable sample of websites from 344 private schools participating in voucher programs during the 2016-17 school year. Finally, contrary to Education's assessment that we based findings on a small sample of 17 families, as stated in the draft report, our discussion groups and interviews with these families provided illustrative examples of the types of information families used when making private school choice decisions.
These illustrative examples were not the basis of any findings. We have clarified the language in the final report as appropriate. Further, Education commented that the draft report did not address factors that often lead parents to enroll their children in private schools in state choice programs. These factors were not addressed because they are beyond the scope of our objectives for this report. Finally, Education noted that parents may believe that educational benefits or services provided by private schools to their children with disabilities outweigh any rights conferred by IDEA or services provided by public schools. This is an important point, and this perspective was included in the draft Education reviewed. For example, in the draft Education reviewed, we stated that some families with whom we spoke were not concerned about any changes in rights because they felt their students were not being well served in public schools. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and to the Department of Education. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Appendix I: Objectives, Scope, and Methodology

This report examines (1) the academic, administrative, and financial accountability mechanisms in private school choice programs; (2) the information available to the public and prospective families on private school choice programs and participating private schools; and (3) how families of students with disabilities are informed about any changes in their rights under federal law when enrolling in private school choice programs, and how the U.S. Department of Education provides information to families about these rights.

Overall Methodology

To obtain information for all three objectives, we reviewed relevant federal laws, regulations, and guidance. To determine key program characteristics, including the accountability mechanisms these programs had in place and the type of information they provided publicly, we reviewed publicly available documents from all 23 voucher programs and four education savings account (ESA) programs, referred to in this report as private school choice programs, operating in the United States as of January 2017 to obtain information about program design and requirements. We confirmed this information with each program. In addition, we reviewed documents and conducted interviews with program officials in six private school choice programs in five states (Arizona, Florida, Indiana, Ohio, and Wisconsin). We also interviewed officials from the U.S. Department of Education (Education) as well as national stakeholder groups and private school choice researchers, which we selected to obtain a range of perspectives on private school choice initiatives. To describe information that participating private schools make available to the public, we conducted a review of a nationally representative stratified random sample of 344 private schools participating in one of the voucher programs to identify information provided on school websites to parents and the public.
To obtain information on how parents are informed about changes in their child's rights under federal law, we reviewed Education guidance and policy documents on the Individuals with Disabilities Education Act (IDEA) and parentally placed private school students. To provide examples of how individual schools and programs make information available to the public and families, we also visited and interviewed officials at two private school choice programs, three private schools, and one school district in Florida. Additionally, we spoke with 17 families who had recently interacted with private school choice programs. We conducted this performance audit from August 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Analysis of Private School Choice Programs' Accountability Mechanisms

We defined "accountability mechanisms" as requirements that private school choice programs place on participating private schools. These requirements are intended to set minimum standards that private schools must meet to participate in the choice program. We compiled our list of mechanisms based on research conducted by national school choice organizations and other organizations, interviews with private school choice researchers, and our previous audit work. The list includes mechanisms likely to be used by multiple programs and is not meant to be exhaustive.
We confirmed the appropriateness of our list of selected mechanisms during subsequent interviews with private school choice researchers and national stakeholder groups who confirmed that the mechanisms were common elements in program statutes, and/or standard mechanisms for establishing accountability in education. To identify the presence of each of the mechanisms in a choice program, we reviewed publicly available documents on the program's website. We also reviewed documents linked to the program's website. To confirm our assessment of each school choice program, we sent our analysis to the program's administrators for verification. All programs responded and any changes are reflected in the report. We did not independently verify these requirements in state laws or regulations.

Interviews in Selected Private School Choice Programs

To obtain a richer understanding of accountability and transparency decisions, and challenges facing private school choice programs, we selected a non-generalizable sample of six private school choice programs in five states (Arizona, Florida, Indiana, Ohio, and Wisconsin) for a more in-depth review. These selected programs collectively served the majority of voucher and ESA students in school year 2016-17. In total, these programs represented about two-thirds of all participating students. For the selected programs, we reviewed program documents, and conducted interviews with program officials and school choice organizations. In addition, we conducted a site visit to Florida in March 2017. Florida has the second largest school voucher program (the John M. McKay Scholarships for Students with Disabilities Program), and the largest ESA program (the Gardiner Scholarship Program). Collectively the two programs served approximately one-fifth (22 percent) of voucher and ESA students nationwide in school year 2016-17.
To gather information on all three objectives, we interviewed officials from program administration offices for both programs. To obtain schools' perspective on all three objectives, we interviewed officials at three private schools that participate in both school choice programs, and officials at a public school district. To obtain information on how families of students with disabilities are informed about any changes in their rights under federal law when enrolling in private school choice programs and families' understanding of these changes, we conducted a series of interviews with families of students with disabilities.

Private Schools' Websites Review

To determine the extent to which participating private schools provided information to prospective families and the public, we reviewed websites from a nationally representative sample of 344 private schools eligible to participate in one of the 23 voucher programs in operation as of January 2017. We limited our review to voucher school programs because we were unable to determine the universe of schools participating in all of the four ESA programs operating at the time of our review. Our sampling frame consisted of all schools eligible for participation in a private school choice voucher program. To create the frame, we downloaded the most currently available list of eligible schools as of April 3, 2017, from each program's website. We identified 4,011 schools eligible to participate in at least one of the private school choice voucher programs covered by this review. Ohio's Jon Peterson Special Needs Scholarship Program and Autism Scholarship Program allow multiple types of providers to receive voucher funds. As such, the lists for these programs included public schools, private companies, individual specialists, chartered private schools, and unchartered private schools. A chartered private school is a private school that has been approved by Ohio's State Board of Education, according to program officials.
As program officials told us, chartering is Ohio’s version of state accreditation. Because chartered schools were the only readily identifiable type of provider included in the downloaded lists from the program’s website, we decided to limit our list to chartered private schools and drop other providers from our schools list. Because web addresses were not always included in programs’ lists of schools, we used information provided in the lists to conduct internet searches to locate school websites. This enabled us to produce an estimate on the number of participating schools without a website. In order to review comparable information across the sampled schools’ websites, we developed a standardized web-based data collection instrument which we used to examine each website for academic, administrative, and financial information and information related to students with disabilities. We used a combination of information from our audit work on identifying accountability mechanisms, Education guidance on choosing a school, and our interviews to develop the questions included in the data collection instrument. We reviewed all websites from April 19 through 27, 2017. An analyst recorded information in the data collection instrument. The information was then checked for completeness by another analyst. We then analyzed the information across schools. We stratified the population using two design variables—one for whether or not the school participated in programs with eligibility limited to students with disabilities, and one for whether or not the school participated in one of the largest four voucher programs. This resulted in four sampling strata. The resulting sample of 344 schools allowed us to make national estimates about the availability of school information by program type. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. 
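The stratified sampling and proportion estimation described above can be illustrated with a short sketch. This is not GAO's actual estimation code: the stratum shares, the proportional allocation rule, and the example count of 175 schools are hypothetical assumptions, and GAO's estimator would also reflect the stratified design and finite-population correction; only the frame size (4,011 eligible schools) and the sample size (344) come from this report.

```python
import math
import random

random.seed(0)  # deterministic for illustration

# Hypothetical frame of 4,011 eligible schools with the two yes/no
# design variables described above (the 30%/50% shares are made up).
frame = [
    {"id": i,
     "disability_program": random.random() < 0.3,
     "large_program": random.random() < 0.5}
    for i in range(4011)
]

# Partition the frame into the four sampling strata.
strata = {}
for school in frame:
    key = (school["disability_program"], school["large_program"])
    strata.setdefault(key, []).append(school)

# Hypothetical proportional allocation of the 344-school sample,
# with a simple random sample drawn within each stratum.
sample = []
for key, schools in strata.items():
    n_h = round(344 * len(schools) / len(frame))
    sample.extend(random.sample(schools, min(n_h, len(schools))))

# 95 percent confidence interval for an estimated proportion using
# the normal approximation (1.96 standard errors on each side).
def ci_95(successes, n):
    p = successes / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# e.g., if 175 of 344 sampled sites had some feature (hypothetical):
low, high = ci_95(175, 344)
```

With a sample of 344, the half-width of the normal-approximation interval for a proportion near 50 percent is roughly 5 percentage points, in line with the magnitude of the confidence intervals reported here.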
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 6 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. All website review results presented in the body of this report are generalizable to the estimated population except where otherwise noted.

Parent Interviews and Questionnaires

To obtain information from parents on both our second and third objectives, we conducted interviews with 17 families who had recent experiences with private school choice programs. We also created a short questionnaire that included questions on the type of information families want and use when making private school choice decisions for their children. The questionnaire also included questions on how families of students with disabilities are informed about any changes in their rights under federal law when enrolling in private school choice programs and their understanding of those changes. We worked with private school choice organizations and national stakeholder groups that directly communicate with parents to contact parents on our behalf to answer the questionnaire and be interviewed. The questionnaire was given to each parent we interviewed or who participated in each of the discussion groups conducted during our Florida site visit. Parents completing the questionnaire had at least one child with a disability and either participated or considered participating in a private school choice program designed for students with disabilities.

Appendix II: Key Information about Private School Choice Programs, School Year 2016-17

Appendix III: Comments from the U.S. Department of Education

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Nagla'a El-Hodiri (Assistant Director), Alison Grantham (Analyst-in-Charge), Kelsey Burdick, Cheryl Jones, and Alex Squitieri made key contributions to this report. Also contributing to this report were Susan Aschoff, Carl Barden, James Bennett, Deborah Bland, Sarah Cornetto, Lawrence Malenich, Shelia McCoy, Tom Moscovitch, Kelly Rubin, Andrew Stavisky and Barbara Steel-Lowney.
Why GAO Did This Study

Growth of voucher and ESA programs has drawn attention to the ways states ensure accountability and transparency to the public and prospective parents. With over half of voucher and ESA programs specifically designed for students with disabilities, there is interest in the information parents receive about special education services and rights when enrolling in a choice program. GAO was asked to examine these topics in more depth. This report examines (1) academic, administrative, and financial accountability mechanisms in private choice programs; (2) information available to the public and families on private choice programs and participating schools; and (3) how parents of students with disabilities are informed about changes in rights when enrolling in private choice programs. GAO analyzed information from all voucher and ESA programs operating in January 2017 and interviewed officials from Education, national groups, and six of the largest private choice programs. GAO reviewed websites of a nationally representative sample of private voucher schools, and worked with private choice groups and national organizations to contact families that recently interacted with a choice program. GAO interviewed all 17 families that responded.

What GAO Found

States include different academic, administrative, and financial accountability mechanisms in their voucher and education savings account (ESA) programs—programs that use public funds for private school educational expenses (see figure). Of the 27 programs operating in January 2017, most had academic and administrative accountability mechanisms for participating schools, such as academic testing requirements (18 of 27) or health and safety requirements (25 of 27). In addition, 15 of 27 programs required schools to demonstrate financial soundness and 8 of 27 programs required annual financial audits.
Almost all of the 27 private school choice program websites provide a directory of participating schools and some provide guidance on selecting schools. However, GAO estimates that no more than half of all schools participating in any type of voucher program mention students with disabilities anywhere on their websites, according to GAO's review of a nationally generalizable sample of websites of private schools in voucher programs. Further, GAO estimates that no more than 53 percent of private schools in voucher programs designed for students with disabilities provide disability-related information on their websites. GAO found private school choice programs inconsistently provide information on changes in rights and protections under the Individuals with Disabilities Education Act (IDEA) when parents move a child with a disability from public to private school. In 2001, the U.S. Department of Education (Education) strongly encouraged states and school districts to notify parents of these changes, but according to Education, IDEA does not provide it with statutory authority to require this notification. According to GAO's review of information provided by private school choice programs, and as confirmed by program officials, in school year 2016-17, 83 percent of students enrolled in a program designed specifically for students with disabilities were in a program that provided either no information about changes in IDEA rights or provided information that Education confirmed contained inaccuracies about these changes. Officials from national stakeholder groups, private choice programs, and Education told GAO that some parents do not understand that certain key IDEA rights and protections—such as discipline procedures and least restrictive environment requirements—change when parents move their child from public to private school. 
Ensuring that quality information is communicated consistently and accurately to parents can help address potential misunderstanding about changes in federal special education rights.

What GAO Recommends

Congress should consider requiring states to notify parents/guardians about changes in federal special education rights when a parent moves a child from public to private school. In addition, GAO recommends Education review and correct inaccurate IDEA-related information provided by states. Education generally agreed with GAO's recommendation.
gao_GAO-18-25
Background

The DEA, within the Department of Justice, is responsible for ensuring the availability of controlled substances for legitimate uses while preventing their diversion through its administration and enforcement of the Controlled Substances Act and its implementing regulations. Under the Controlled Substances Act, all persons or entities that manufacture, distribute, or dispense controlled substances are required to register with DEA, unless specifically exempted. DEA regulates these entities to limit diversion and prevent abuse. For example, DEA regulates pharmaceutical companies that manufacture controlled substances, health care providers who prescribe them to patients, and pharmacies that dispense them. In October 2010, the Disposal Act amended the Controlled Substances Act to allow the public to deliver unused controlled substances to an entity authorized by DEA to dispose of the substances. DEA was given responsibility for promulgating the implementing regulations, and the Disposal Act stipulated that the regulations should prevent diversion of controlled substances while also taking into consideration public health and safety, ease and cost of implementation, and participation by various communities. In addition to disposal bins, DEA's regulations describe two other options for the public to transfer controlled substances for the purpose of disposal: mail-back programs and take-back events. Law enforcement agencies may use all three methods of drug disposal without the need for authorization by DEA. The Disposal Act stipulates that the regulations cannot require an entity to participate in or establish any of the disposal options.
Requirements for Authorized Collectors of Unused Prescription Drugs To participate as authorized collectors of unused prescription drugs, eligible entities—retail pharmacies, hospitals/clinics with an on-site pharmacy, narcotic treatment programs, reverse distributors, distributors, and drug manufacturers that are already authorized by DEA to handle controlled substances—must modify their DEA registration. According to DEA officials, such modification is free and simple to do. Eligible retail pharmacies or hospitals/clinics that become authorized collectors are able to install and maintain disposal bins in long-term care facilities in addition to their own location. DEA’s website contains a public search feature to identify authorized collectors located near a specific zip code or address. Authorized collectors must install, manage, and maintain the disposal bins following DEA regulations. For example, under DEA’s regulations for maintaining the disposal bins:

- the disposal bin must be securely fastened to a permanent structure, securely locked, substantially constructed with a permanent outer container and removable inner liner, and have a small opening that allows contents to be added but not removed;
- the bin must also prominently display a sign indicating which types of substances are acceptable;
- users must dispose of the unused prescriptions into the collection receptacle themselves without handing them to staff at the pharmacy;
- the disposal bin must typically be located in an area where an employee is present and near where controlled substances are stored, and the bin must be made inaccessible to the public when an employee is not present;
- the inner liner of the disposal bin must meet certain requirements, including being waterproof, tamper-evident, tear-resistant, opaque, and having the size and identification number clearly labeled; and
- the installation and removal of inner liners must be performed under the supervision of at least two employees of the authorized collector.

DEA regulations also require that all controlled substances collected in the disposal bin’s inner liners be destroyed in compliance with applicable federal, state, and local laws and rendered non-retrievable. According to DEA regulations, non-retrievable means that the physical and chemical conditions of the controlled substance must be permanently altered, thereby rendering the controlled substance unavailable and unusable for all practical purposes. Authorized collectors are permitted to destroy the inner liner on their premises if they have the capacity to do so. If not, the inner liners can be transported to a separate location to be destroyed. Typically, in this case, an authorized collector contracts with a reverse distributor to periodically remove, transport, and destroy the inner liners. DEA regulations require that two reverse distributor employees transport the inner liners directly to the disposal location without any unnecessary stops or stops of an extended duration. Authorized collectors must document certain information, including inner liner identification numbers and the dates that each liner is installed, removed, and transferred for destruction. The authorized collectors must maintain these records for 2 years. Figure 1 summarizes the steps involved in the collection of unused prescription drugs. About 3 Percent of Eligible Pharmacies and Other Entities Voluntarily Participate as DEA-Authorized Collectors of Unused Prescription Drugs About 3 percent of pharmacies and other eligible entities have voluntarily chosen to become DEA-authorized collectors of unused prescription drugs, according to DEA data. As of April 2017, 2,233 of the 89,550 (2.49 percent) eligible entities—which are already authorized by DEA to handle controlled substances—had registered to use disposal bins to collect unused prescription drugs. Most of the authorized collectors—about 81 percent—were pharmacies, followed by hospitals or clinics.
(See table 1). Narcotic treatment programs, reverse distributors, and distributors made up approximately 1 percent of the authorized collectors. We also found that participation rates varied by state, though in most states relatively few of the eligible entities had registered with DEA to become authorized collectors of unused prescription drugs. In 44 states, less than 5 percent of the eligible entities had registered. (See figure 2 and appendix I for more information on the participation rates of authorized collectors in each state). As of April 2017, Connecticut, Missouri, and Maine had the lowest participation rates, with 0.11, 0.22, and 0.70 percent, respectively. In contrast, North Dakota had the highest participation rate, with 32.0 percent of its pharmacies and other eligible entities registered to be authorized collectors. The state with the next highest participation rate was Alaska, with 8.96 percent. In North Dakota, the state’s Board of Pharmacy provides funding for authorized collectors to purchase and maintain the disposal bins. According to a board official, the board decided to fund these activities to increase participation rates and plans to continue its funding indefinitely using revenue generated from prescription drug licensing fees it collects. In addition, our analysis shows that about 82 percent of all authorized collectors were located in urban areas as of April 2017. However, when comparing the entities registered to be authorized collectors with the total number of eligible entities, we found that a larger percentage of the eligible entities in rural areas became authorized collectors compared with those in urban areas (see table 2). The data we obtained on the number of eligible and participating authorized collectors and their locations are the only available DEA data on the use of disposal bins to collect unused prescription drugs.
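The participation-rate figures above follow from simple arithmetic on the DEA registration counts. A minimal sketch using the national totals reported for April 2017 (state-level rates would be computed the same way from each state's counts):

```python
def participation_rate(collectors: int, eligible: int) -> float:
    """Percentage of eligible entities registered as authorized collectors."""
    return 100 * collectors / eligible

# National totals from the report: 2,233 authorized collectors
# among 89,550 eligible entities as of April 2017.
national = participation_rate(2233, 89550)
print(f"{national:.2f}%")  # about 2.49 percent
```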
According to DEA officials, the agency does not collect any other information on the use of disposal bins, such as the extent to which the bins are used, or the amount and types of prescription drugs deposited into the bins. For example, to minimize the risk of diversion, DEA regulations do not allow authorized collectors to open and inspect the inner liners of the disposal bins, so information on their contents cannot be collected. According to DEA officials, the agency is not responsible for collecting information on the amount and types of prescription drugs destroyed through the disposal bins. DEA officials told us that the agency views its responsibility solely as giving pharmacies and other eligible entities the opportunity to become authorized collectors. Though we do not have information on the extent to which individuals use DEA’s prescription drug disposal bins, we were able to estimate that as of April 2017, about half of the country’s population lived less than 5 miles away from a pharmacy or other DEA-authorized entity offering a prescription disposal bin. In 21 states, at least 50 percent of the state’s population lived within 5 miles of a prescription disposal bin. (See figure 3). While close to half of the nation’s population lived less than 5 miles from a disposal bin as of April 2017, the availability of nearby disposal bins varied significantly for people depending on whether they lived in an urban or a rural area. Specifically, about 52 percent of the population in urban areas lived less than 5 miles away from a disposal bin, compared to about 13 percent of the population in rural areas. Furthermore, about 44 percent of the population in rural areas lived even further away—more than 30 miles away from a disposal bin. An exception to this is North Dakota, where about 86 percent of its urban population and about 64 percent of its rural population lived within 5 miles of a disposal bin. 
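The report does not spell out the distance methodology behind these population estimates. The following is a minimal sketch of how a nearest-bin distance calculation might work, assuming the population is represented by point locations (for example, census-block centroids) and bins by their coordinates; all names and coordinates below are illustrative, not GAO's actual method or data.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def share_within(points, bins, miles=5.0):
    """Fraction of population points whose nearest disposal bin is within `miles`."""
    near = sum(
        1 for p in points
        if min(haversine_miles(p[0], p[1], b[0], b[1]) for b in bins) < miles
    )
    return near / len(points)

# Hypothetical coordinates for illustration only.
population = [(46.87, -96.79), (46.90, -96.80), (48.15, -103.62)]
bins = [(46.88, -96.78)]
print(f"{share_within(population, bins):.0%} live within 5 miles")  # 67%
```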
Stakeholders Cited Cost and Other Factors as Affecting Decision to Become DEA-Authorized Collectors of Unused Prescription Drugs According to officials from the 11 stakeholder organizations we interviewed—which represent authorized collectors and long-term care facilities—several factors may explain why relatively few pharmacies and other eligible entities have chosen to become authorized collectors of unused prescription drugs. These factors include the associated costs of participating, uncertainty over proper implementation, and participation in other, similar efforts for disposing of unused prescription drugs. Costs: Stakeholders said that the costs associated with purchasing, installing, and managing the disposal bins are a factor that explains the relatively low rate of participation. One stakeholder told us that many eligible entities may decide that the benefit of participating does not outweigh the costs associated with doing so. Specifically, stakeholders told us that the major costs associated with participating include the one-time cost of purchasing and installing a disposal bin; the ongoing costs to train personnel to manage the bins; and the cost of contracting with a reverse distributor to periodically dispose of the bin’s inner liner and contents. Stakeholders gave varying examples of the specific costs associated with these investments. For example, one stakeholder estimated the yearly costs of maintaining a disposal bin ranged from $500 to $600 per location; another stakeholder said that the cost is thousands of dollars per location per year, but did not provide a specific estimate. These stakeholders added that costs can increase if the disposal bins fill more quickly and need to be emptied more often than expected.
For their part, officials from the reverse distributor stakeholders we interviewed cited incinerating hazardous waste, the availability of incinerators, and the cost of personnel as factors that increase the cost of their services for authorized collectors. One reverse distributor stakeholder told us that there are not many incinerators available, requiring them to travel long distances to incinerate collected waste. The other reverse distributor stakeholder added that DEA’s requirement that a second employee be present during the transportation and disposal increases the cost of their services. While some stakeholders speculated that costs are a reason for low participation, a few stakeholders told us that the benefits are worth the costs. In fact, two stakeholders we spoke with told us that the benefit to the communities was so important that they decided to provide funding to retail pharmacies, alleviating an individual pharmacy’s concern about the cost of installing and maintaining the disposal bins. We found that as of April 2017, over a quarter of the 2,233 authorized collectors using disposal bins received external funding to pay for the costs associated with installing and maintaining the disposal bins. In addition, stakeholders told us that some localities have enacted laws known as extended producer responsibility ordinances, which require that pharmaceutical manufacturers pay for certain costs associated with drug disposal. When asked about the costs associated with operating disposal bins, DEA officials told us that addressing cost issues with eligible participants falls outside of their responsibilities. Uncertainty: Stakeholders also told us that uncertainty regarding how to comply with aspects of DEA’s regulations for prescription drug disposal bins affected their decisions to participate. 
One stakeholder added that many eligible entities decide not to participate because uncertainties over participation requirements could result in inadvertent non-compliance with DEA’s regulations. As an example of their uncertainty over some of the requirements governing the disposal bins, officials from both of the reverse distributor stakeholders we interviewed cited DEA’s non-retrievable standard for destruction of the inner liners of the bins. DEA requires that the method of destruction be sufficient to render all controlled substances non-retrievable, meaning that the physical and chemical conditions of the controlled substances must be permanently altered and unusable in order to prevent diversion for illicit purposes. Both reverse distributor stakeholders told us that they are uncertain about whether certain disposal methods meet this standard, and they said that the agency has not provided further guidance on how reverse distributors can meet this requirement. DEA officials told us that the agency responds to questions about whether a specific method of destruction meets the non-retrievable standard by telling the registrant to test the remnants after destruction, to see if any components of the controlled substance are still present. In its summary of the regulations implementing the Disposal Act, DEA stated that in order to allow for the development of various methods of destruction, the agency did not require a specific method of destruction as long as the desired result is achieved. However, DEA officials stated that to their knowledge, incineration is the only method known to meet the non-retrievable standard to date, but the officials hoped other methods will be developed in the future.
When asked about the guidance they provide to authorized collectors of unused prescription drugs or those eligible to become authorized collectors, DEA officials told us that they post frequently-asked questions on their website, routinely answer questions from participants and others, and give training presentations at conferences that include information on the disposal bins. In our prior work, we found problems with DEA’s communication and guidance to stakeholders. In 2015, we recommended that DEA identify and implement cost-effective means for communicating regularly with pharmacies and other entities authorized to handle controlled substances. DEA agreed with the recommendation, and officials told us that, starting in August 2017, these entities can subscribe to DEA’s website to receive notifications when it is updated with new guidance. Stakeholders also noted that some DEA requirements related to disposal bins may conflict with other state and federal requirements governing the transportation and disposal of hazardous waste, which includes some controlled substances. For example, the two reverse distributor stakeholders told us that some incinerator permits issued by states require that hazardous waste be examined before incineration; however, DEA requirements do not allow the contents of the liners to be examined, even at the time of incineration. To address the incinerator permit requirements, one reverse distributor told us that they use the Environmental Protection Agency’s hazardous waste household exemption, which treats the liners as household waste and thereby allows incinerator facilities to destroy the liners without examining the contents or violating their state permit. In addition, some stakeholders raised concerns that DEA’s regulations may conflict with other federal regulations. 
For instance, one stakeholder noted that they recently learned that transporting the disposal bin’s inner liners could violate Department of Transportation regulations. DEA officials told us that they were aware of this, explaining that the conflict was between DEA’s requirement that controlled substances be transported in liners and the Department of Transportation’s requirement that this type of waste be transported in sturdy containers. According to DEA officials, this conflict has been resolved by the Department of Transportation allowing reverse distributors to place the liners inside sturdy containers kept on trucks. Participation in or Availability of Similar Efforts: Stakeholders said that some pharmacies and other eligible entities were already participating in other, similar efforts that allow for the safe disposal of controlled substances, and therefore they did not want to invest additional resources into participating as authorized collectors using disposal bins. For example, the Centers for Medicare & Medicaid Services has an established process that long-term care facilities use to dispose of their unused controlled substances. As a result, all of the long-term care stakeholders told us that long-term care facilities may choose not to partner with pharmacies interested in placing disposal bins within their facilities because it adds significant cost and effort without any additional benefit. Furthermore, pharmacy stakeholders noted that because of the availability of other prescription drug collection efforts in their communities, they did not think that maintaining a disposal bin at their locations was needed. For example, two of the stakeholders explained that local law enforcement precincts already had a similar type of disposal bin in place to collect unused prescription drugs. 
DEA officials told us that they were aware of other options for the public and entities such as long-term care facilities that are not registered as authorized collectors to dispose of controlled substances. The officials also indicated that the availability of disposal options at law enforcement agencies contributes to the low participation rates among pharmacies as authorized collectors of unused prescription drugs. Agency Comments We provided a draft of this report to the Department of Justice for comment. DEA, part of the Department of Justice, provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General of the United States and the Administrator of DEA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Entities Eligible to Register with DEA to Become Authorized Collectors and Participating Collectors, by State, April 2017 [State-by-state table omitted from this extract.] Appendix II: GAO Contact and Staff Acknowledgments In addition to the contact named above, Elizabeth H. Curda (Director), Will Simerl (Assistant Director), Kathryn Richter (Analyst-In-Charge), Nick Bartine, Giselle Hicks, Jessica Lin, and Emily Wilson made key contributions to this report. Also contributing were Muriel Brown and Krister Friday.
Why GAO Did This Study In 2015, 3.8 million Americans reported misusing prescription drugs within the last month, and deaths from prescription opioids have more than quadrupled since 1999. About half of the people who reported misusing prescription drugs in 2015 received them from a friend or relative. One way to help prevent this kind of diversion and potential misuse is by providing secure and convenient ways to dispose of unused, unneeded, or expired prescription medications. The Secure and Responsible Drug Disposal Act of 2010 authorizes pharmacies and other entities already authorized by DEA to handle controlled substances to also collect unused prescription drugs for disposal. In 2014, DEA finalized regulations for the implementation of the Act, establishing a voluntary process for eligible entities to become authorized collectors of unused prescription drugs using disposal bins. GAO was asked to review participation among authorized collectors that maintain disposal bins. In this report GAO describes (1) participation rates among entities eligible to collect unused prescription drugs and (2) factors that affect participation. GAO analyzed the most currently available DEA data from April 2017 on entities eligible to participate and those participating as authorized collectors. GAO also conducted interviews with DEA officials and a nongeneralizable sample of 11 stakeholder organizations selected to illustrate different types of authorized collectors and long-term care facilities. GAO is not making any recommendations. DEA provided technical comments, which GAO incorporated as appropriate. What GAO Found GAO found that about 3 percent of pharmacies and other entities eligible to collect unused prescription drugs for disposal have volunteered to do so. The Drug Enforcement Administration (DEA) authorizes these entities to dispose of unused drugs to help reduce their potential misuse. 
Analysis of DEA data shows that as of April 2017, 2,233 of the 89,550 (2.49 percent) eligible entities—that is, certain entities already authorized by DEA to handle controlled substances—had registered with DEA to use disposal bins to collect unused prescription drugs. Most—about 81 percent—of the authorized collectors were pharmacies, followed by hospitals or clinics. GAO also found that participation rates varied by state, though in 44 states less than 5 percent of the state's pharmacies and other eligible entities had registered to become authorized collectors. Stakeholders cited several factors that may explain why relatively few pharmacies and other eligible entities have registered with DEA as authorized collectors of unused drugs. Most notably, stakeholders representing authorized collectors told GAO that because participation is voluntary, the cost associated with maintaining a disposal bin—which includes purchasing and installing the bin according to DEA requirements and paying for the destruction of its contents—is an important factor to weigh against potential benefits. DEA noted that availability of disposal by law enforcement agencies also contributes to low participation.
Background Oversight of 2014 Nuclear Enterprise Reviews’ Recommendations In November 2014, the Secretary of Defense directed DOD to address the 2014 nuclear enterprise reviews’ recommendations and directed CAPE to track and assess these implementation efforts. The Joint Staff, Navy, Air Force, offices within the Office of the Secretary of Defense, and U.S. Strategic Command are supporting CAPE’s efforts. The Secretary also established the Nuclear Deterrent Enterprise Review Group (NDERG), a group of senior officials chaired by the Deputy Secretary of Defense and including the Vice Chairman of the Joint Chiefs of Staff, to oversee and make decisions regarding implementation of the nuclear enterprise reviews’ recommendations. The NDERG is supported by a Nuclear Deterrent Working Group, which meets biweekly and reviews the status of recommendations, and a Nuclear Deterrent Senior Oversight Group, which meets quarterly and reviews any recommendations that the Working Group believes are ready for the NDERG to close. The Deputy Secretary of Defense updates the Secretary of Defense on NDERG progress as requested. CAPE compiled the recommendations from the two 2014 nuclear enterprise reviews and a memorandum from the Commander of U.S. Strategic Command that identified several additional recommendations. In total, CAPE identified 175 distinct recommendations from the three documents. CAPE then identified 247 sub-recommendations from recommendations directed to multiple services (or other DOD components)—for example, if a recommendation was directed to the Air Force and the Navy, then one sub-recommendation was made to the Air Force and one sub-recommendation was made to the Navy. CAPE then worked with the services to identify offices of primary responsibility for implementing actions to address the recommendations, any offices of coordinating responsibility, and any resources necessary to implement each recommendation. 
CAPE has developed a tracking tool to collect information on progress in meeting milestones and metrics. This tracking tool identifies offices of responsibility, implementation actions, milestones, and metrics to measure the effectiveness of the actions taken toward implementing each of the recommendations. The tracking tool currently contains hundreds of unique milestones and metrics, and according to CAPE officials, additional milestones and metrics are included as they are identified. The Air Force and the Navy also developed their own methods of tracking their service-specific recommendations. We reviewed DOD’s processes for implementing the 2014 nuclear enterprise reviews’ recommendations and issued a report on July 14, 2016. We found that the process DOD had developed for implementing and tracking the 2014 nuclear enterprise reviews’ recommendations generally appeared consistent with relevant criteria from the Standards for Internal Control in the Federal Government—including using and effectively communicating quality information and performing monitoring activities. As we reported in July 2016, CAPE officials stated that it would take about 3 years to see measurable improvements in the health of the nuclear enterprise and 15 years to implement the great majority of the recommendations and measure whether they have had their intended effects. CAPE and service officials have noted that it would take years for some of the recommended cultural changes to manifest. NC3 Systems NC3 is a large and complex system comprised of numerous land-, air-, and space-based components used to assure connectivity between the President and nuclear forces. NC3 is managed by the military departments, nuclear force commanders, and the defense agencies and provides the President with the means to authorize the use of nuclear weapons in a crisis. 
NC3 systems support five important functions:

- Force management: assignment, training, deployment, maintenance, and logistics support of nuclear forces before, during, and after any crisis.
- Planning: development and modification of plans for the employment of nuclear weapons and other options.
- Situation monitoring: collection, maintenance, assessment, and dissemination of information on friendly forces, adversary forces and possible targets, emerging nuclear powers, and worldwide events of interest.
- Decision making: assessment, review, and consultation that occur when the employment or movement of nuclear weapons is considered.
- Force direction: implementation of decisions regarding the execution, termination, destruction, and disablement of nuclear weapons.

Oversight of 2015 NC3 Report Recommendations As recommended in the 2015 NC3 report, the Council on Oversight of the National Leadership Command, Control, and Communications System (the Oversight Council) has taken a lead role in providing oversight and making the final determination on the implementation status of that report’s 13 recommendations. The Oversight Council is co-chaired by the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Vice Chairman of the Joint Chiefs of Staff and its members are the Under Secretary of Defense for Policy; the Commander, U.S. Strategic Command; the Commander, North American Aerospace Defense Command/U.S. Northern Command; the Director, National Security Agency; and the DOD Chief Information Officer. Additional organizations, such as CAPE, may participate in the Oversight Council’s meetings to provide subject matter expertise. The Oversight Council is supported by the Executive Management Board—a functional governance committee chaired by the DOD Chief Information Officer. DOD CIO tracks the implementation of the 2015 NC3 report’s recommendations, among other activities.
Nuclear Personnel Reliability DOD and the military services set standards to ensure that personnel who work with nuclear weapons and nuclear weapons systems, NC3 systems and equipment, and special nuclear material are reliable, trustworthy, and capable of performing their assigned nuclear weapons-related mission. Nuclear surety generally refers to DOD’s efforts to ensure that nuclear weapons and materials are safe, secure, reliable, and controlled. DOD and the military services use personnel reliability assurance programs— the Personnel Reliability Program and the Air Force’s Arming and Use of Force program for Air Force security forces—to implement these nuclear surety requirements for personnel. When personnel are assigned to a nuclear unit, relevant unit commanders certify that those personnel meet the personnel reliability assurance program standards. Commanders can also suspend or decertify personnel from working with nuclear weapons if they fail to meet these standards during their service. Factors that may lead to suspension or decertification include medical issues; personal conduct; emotional, mental and personality disorders; financial problems such as an inability or unwillingness to satisfy debts or the presence of unexplained wealth; criminal conduct; sexual harassment or assault; misuse of drugs or alcohol; and security violations. According to DOD data, as of December 31, 2016, there were 10,603 DOD personnel certified under the Personnel Reliability Program and 36,464 security forces personnel certified under the Air Force’s Arming and Use of Force program. Together, there were a total of 47,067 personnel that met the personnel nuclear surety requirements of a personnel reliability assurance program (see table 1). 
Progress Made in Implementing Recommendations, but Identifying Additional Performance Measures, Milestones, and Risks Can Aid in Tracking and Evaluating Efforts DOD and the military services have made progress in implementing recommendations to improve the defense nuclear enterprise but could improve their efforts by identifying additional performance measures, milestones, and associated risks. CAPE and DOD CIO have separate processes for tracking and evaluating DOD’s progress in implementing the recommendations from the 2014 nuclear enterprise reviews and the 2015 NC3 report, respectively. DOD Continues to Implement the Recommendations from the 2014 Nuclear Enterprise Reviews The NDERG has closed 77 of the 247 sub-recommendations from the 2014 nuclear enterprise reviews following CAPE’s assessment of implementation actions that had been taken by the military services and other DOD components (see fig. 1). For example, with regard to Nuclear Weapons Technical Inspections, the independent 2014 nuclear enterprise review recommended that inspection teams not focus on auditing records but instead examine the processes in place to inform commanders of Personnel Reliability Program issues. In response, DOD, the Air Force, and the Navy have made changes to their inspection processes and the Joint Chiefs of Staff have updated the Nuclear Weapons Technical Inspections guidance to de-emphasize records reviews in favor of knowledge checks and scenario-based discussion during the Personnel Reliability Program portion of these inspections. After reviewing these actions, the NDERG closed this recommendation in December 2016. The 77 closed sub-recommendations make up 62 of the initial 175 recommendations from the 2014 nuclear enterprise reviews. 
DOD Has Made Progress in Implementing Recommendations from the 2015 NC3 Report According to DOD CIO officials, as of March 2017, the Oversight Council has closed two of the 13 recommendations from the 2015 NC3 report, and DOD is making progress in implementing the remaining 11 recommendations (see fig. 2). The two closed recommendations are to (1) make the Oversight Council the synchronizing body to evaluate, track, and resolve the findings and recommendations made in that report and (2) broaden Air Force Global Strike Command’s responsibilities to include serving as the lead command for all of the Air Force-owned portions of the NC3 systems. DOD has made progress in implementing the remaining 11 recommendations. For example, the 2015 NC3 report recommended that U.S. Strategic Command review and validate the availability requirements of one of the NC3 systems, which the command has now completed. Additional detail about DOD’s progress is omitted because the information is classified. DOD’s Processes for Tracking and Evaluating Its Progress Can Be Improved by Identifying Additional Performance Measures, Milestones, and Risks DOD’s processes for tracking and evaluating its progress in implementing the 2014 nuclear enterprise reviews’ recommendations do not consistently identify and document risks, and its processes for tracking and evaluating its progress in implementing the 2015 NC3 report’s recommendations do not identify performance measures, milestones, or risks. Identifying performance measures, milestones, and associated risks can help an agency to track and evaluate its progress toward completing tasks over time and can help to inform decision makers of potential issues that need to be addressed. We have previously reported that by tracking and developing a performance baseline for all performance measures, agencies can better evaluate whether they are making progress and their goals are being achieved. 
Similarly, Standards for Internal Control in the Federal Government emphasizes using performance measures and milestones to assess performance over time. We have also derived leading practices from the Government Performance and Results Act of 1993 (GPRA) and the GPRA Modernization Act of 2010, such as clearly defining performance measures and milestones and assessing program results against them. Additionally, Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving the defined objectives and should use and internally communicate the quality information necessary to meet those objectives.

DOD Has Identified Performance Measures and Milestones for Evaluating the Implementation of the 2014 Nuclear Enterprise Reviews' Recommendations, but Additional Guidance for Identifying and Documenting Risks Could Improve Oversight

CAPE is working with the military services and other DOD components to track and evaluate the implementation actions taken in response to the recommendations from the 2014 nuclear enterprise reviews; however, risks associated with these actions are not consistently identified and documented. In July 2016, we reported on CAPE's use of a centralized tracking tool that contains relevant information about the status of the actions taken in response to those recommendations. CAPE continues to use this tool, and it remains accessible to the services and other DOD entities on DOD's classified network. As shown in figure 3, it includes fields for the underlying problem statement, or root cause, for the recommendation; time frames with milestones for implementing the recommendations; and performance measures (referred to as metrics in the tracking tool) to assess the effectiveness of the actions taken. The tracking tool also contains a field for Key Risks and Issues, but we found that this field has not been used consistently.
According to CAPE officials, CAPE is using the tracking tool to track progress in meeting milestones and to record the metrics it has identified to assess both the progress (through "process metrics") and the effectiveness (through "outcome metrics") of the implementation actions. The outcome metrics are selected to aid CAPE in determining whether implemented recommendations have addressed the underlying problem that was the impetus for the original recommendation. CAPE used the outcome metrics to inform its assessment of each of the 77 sub-recommendations that the NDERG then closed. According to CAPE officials, CAPE's approach to measuring effectiveness is to gather supporting data from the services and measure the effectiveness of each recommendation separately. However, these officials noted that until a recommendation has been implemented, CAPE cannot fully assess the effectiveness of the implementation actions. Some recommendations, such as those aimed at changing a service's culture or morale, will take time to evaluate. According to CAPE officials, the tracking tool currently contains 389 unique metrics and 370 unique milestones to aid in the assessment of the implementation actions. For each of these metrics and milestones, the tracking tool includes expected completion dates and indicates which have been met and which are behind schedule. Additional milestones, particularly for actions more than 18 months out, and additional metrics to aid in measuring the effectiveness of actions taken are still being identified, according to CAPE officials. In December 2016, the Deputy Secretary of Defense issued a memorandum that directed the transition of the tracking and analysis responsibilities related to implementing the 2014 nuclear enterprise reviews' recommendations from CAPE to the military departments and other DOD entities.
However, CAPE remains responsible for providing guidance to inform the analyses conducted by other DOD entities, overseeing the analyses, and assessing recommendations for closure. The aim of these changes was to enhance ownership and embed the principles of robust analysis, continuous monitoring, and responsibility throughout the department. As part of this transition, CAPE provided the military departments and other DOD entities with guidance to aid in their tracking and analysis of the recommendations from the 2014 nuclear enterprise reviews, but this guidance does not require the military services and other DOD components to identify and document risks prior to bringing a recommendation for closure. This guidance emphasizes using performance measures and milestones to track and measure the progress of implementation actions. It includes sections tailored to specific groups of recommendations from the 2014 nuclear enterprise reviews. It also calls for consideration, at the time a recommendation is brought for closure, of the risk that implementation actions could have unintended consequences, but it does not call for risks to be identified, assessed, or documented before that point. According to officials from CAPE and the military services, the department considers risks in a number of ways and does capture information about some risks. For example, CAPE has supplemented its review of the military services' proposed budgets by conducting a review of funding risks related to the nuclear enterprise in areas such as modernization, investment, and personnel. CAPE briefs the results of this review to senior leadership within the NDERG to inform them about whether the services are including funds to address these items in their yearly budget requests. Additionally, CAPE personnel have identified key risks regarding some of the recommendations and have entered this information into the centralized tracking tool.
According to CAPE officials, 63 of the 247 sub-recommendations include information in the Key Risks and Issues field in the tracking tool. However, these officials told us that none of the remaining 184 sub-recommendations include information in this field, because either no key risks or issues were identified or the risks that were identified were not formally documented within the tool. Additionally, risks that are introduced as a result of actions taken to implement a recommendation are not consistently included in the centralized tracking tool or otherwise documented by CAPE. For example, for a recommendation to increase the number of skilled shipyard workers to keep up with the maintenance demands of ballistic missile nuclear submarines, the centralized tracking tool documents the risk as the need to complete hiring and training of new shipyard personnel, according to Navy and CAPE officials. However, according to Navy officials, the risks resulting from prioritizing maintenance of ballistic missile nuclear submarines over vessels not associated with the nuclear deterrent mission, such as fast attack submarines and nuclear aircraft carriers, were discussed and accepted by the Navy but not documented in the centralized tracking tool. Similarly, the risks associated with recommendations that the Air Force provide additional incentive pay for personnel serving in nuclear positions were identified but not documented in the centralized tracking tool prior to implementation and closure. According to a CAPE official, the Nuclear Deterrent Working Group determined that implementing incentive pay could negatively affect morale, because some Air Force personnel in nuclear positions are not eligible to receive this additional pay. The official stated that the Nuclear Deterrent Senior Oversight Group was briefed on this risk and responded by requesting updates from the Air Force's annual review on the effectiveness of this incentive pay.
The department is not consistently identifying and documenting risks associated with the recommendations, because CAPE’s guidance does not direct the military services and DOD components to document and update information on risk in the centralized tracking tool. According to CAPE officials, since the release of the December 2016 memorandum directing the transition of the tracking and analysis responsibilities for the 2014 nuclear enterprise reviews’ recommendations from CAPE to the military departments and other DOD components, the military services have not, to date, formally identified any key risks for inclusion in the centralized tracking tool. According to one Air Force official, the Air Force identifies and responds to risks through its day-to-day operations; however, this information is not captured by the tracking tool or otherwise documented. According to a CAPE official, additional guidance on documenting risk could encourage the military services and DOD components to capture risks that they have identified in the tracking tool. In a November 2014 memo announcing the department’s response to the nuclear enterprise reviews, the Secretary of Defense stated that the nuclear deterrent plays a critical role in assuring U.S. national security and that it is DOD’s highest priority mission. The Independent Review of the Department of Defense Nuclear Enterprise found that the avoidance of managing risks by many leaders within the enterprise resulted in adverse impacts to the mission. The review noted that avoiding risk by avoiding the problem until it becomes a major issue is a near inevitable outcome of risk-averse cultures and that, too often, it takes a significant event for the leadership to recognize major problems within the force. Similarly, the Internal Assessment of the Department of Defense Nuclear Enterprise stated that many of the senior leaders within DOD and the military services were not cognizant of the problems faced by the enterprise. 
According to that review, many issues were already being reported through internal self-assessments, but many senior leaders within DOD and the military services were not aware of the conclusions of these self-assessments and so were unable to take action to address them. Given the critical role the nuclear enterprise plays in national security, and given the challenges the Independent Review of the Department of Defense Nuclear Enterprise identified with respect to managing risks and communicating them across the defense nuclear enterprise, it is essential that risks be consistently identified and documented. By documenting information on risks in its centralized tracking tool, DOD could enhance its ability to provide oversight of the recommendations throughout its review processes in the military services, the Nuclear Deterrent Working Group, the Nuclear Deterrent Senior Oversight Group, and the NDERG. By developing additional guidance for identifying and documenting information about these risks, CAPE can also aid the components of the defense nuclear enterprise in their efforts to communicate and formulate responses to the risks, either by deliberately determining to accept the risk or by taking steps to avoid, reduce, or share the risk across the enterprise.

Identifying Performance Measures, Milestones, and Associated Risks Could Improve DOD CIO's Efforts to Evaluate the Actions Taken in Response to the 2015 NC3 Report

DOD CIO uses an internal spreadsheet to track the implementation of the 13 recommendations from the 2015 NC3 report, but it has not identified performance measures, milestones, or associated risks to evaluate these actions. This spreadsheet includes fields for indicating whether an execution plan exists, the operational impact from implementing the recommendation, forecast closeout (which lists the responsible DOD component or designates the status of the recommendation), and follow-up actions to be taken after a recommendation is closed.
Figure 4 shows the layout of this spreadsheet. According to DOD CIO officials that we met with, DOD CIO shares information about the status of the 2015 NC3 report recommendations through meetings with the DOD entities with primary responsibility for implementing the recommendations. However, there is currently no centralized collection of metrics, milestones, and other information with the same level of detail that CAPE had developed and is using for the 2014 nuclear enterprise reviews’ recommendations. According to DOD CIO officials, they are working with the offices of primary responsibility to expand on the current content of the internal tracking spreadsheet. These officials stated that while they had drafted a template to contain the expanded content, it has not yet been approved by the Oversight Council. This draft template contains fields similar to those CAPE developed and the department uses for tracking the department’s progress in implementing the recommendations from the 2014 nuclear enterprise reviews. When approved and implemented, this template will provide a form that could be used for documenting performance measures, milestones, and risks for these 2015 recommendations, once this information is identified. Identifying and sharing performance measures, milestones, and risks could aid DOD CIO in tracking and evaluating DOD’s efforts to implement the 2015 NC3 report recommendations. DOD CIO could improve its efforts to track DOD’s progress in addressing the recommendations by identifying performance measures and milestones as part of the effort it has initiated to expand on the content of its tracking spreadsheet. DOD CIO could also use performance measures to evaluate the actions DOD has taken and determine whether the actions have fully addressed the root cause of the recommendation. 
DOD officials leading some of the recommendation implementation efforts told us that a number of the issues identified in the 2015 NC3 report stem from enduring problems. These officials noted that an overemphasis on identifying easily attainable performance measures and closing recommendations quickly may improve the overall percentage of recommendations implemented but could also result in underlying root causes continuing to go unaddressed. Our prior work on performance measurement has identified several important attributes, such as the inclusion of baseline and trend data, that performance measures must have if they are to be effective in monitoring progress and determining how well programs are achieving their goals. Additionally, by identifying and communicating risks to NC3 stakeholders, DOD leadership may be in a better position to formulate responses to these risks, including deliberately determining to accept the risk or taking steps to avoid, reduce, or share the risk across the defense nuclear enterprise. Promoting the sharing of quality information on the status of the 2015 NC3 report's recommendations and their potential risks among the services and other DOD components with a role in NC3 could help DOD to integrate its nuclear deterrent efforts and help decision makers to formulate responses to any potential risks. The DOD CIO officials that we met with said that it will be important to incorporate performance measures and milestones into their tracking and evaluation process and to consider operational risk and its management when discussing effects on the nuclear enterprise and its NC3 systems. The draft template that DOD CIO is developing, once it is finalized and implemented, could aid the department in identifying performance measures and milestones for these 2015 recommendations in the same way that the centralized tracking tool CAPE developed has been used to collect performance measures and milestones for the 2014 recommendations.
In addition, assessing the risks associated with implementing the recommendations from the 2015 NC3 report, similar to the follow-up on the recommendations of the 2014 nuclear enterprise reviews, could enhance DOD's ability to provide oversight of the recommendations and make informed responses to any identified risks throughout its review processes, all the way to their closure by the Oversight Council.

DOD and the Military Services Have Implemented Recommended Changes to Their Personnel Reliability Assurance Programs to Reduce Administrative Burdens

DOD and the military services have implemented changes to their personnel reliability assurance programs in response to 17 recommendations from the 2014 nuclear enterprise reviews. DOD has identified nine essential elements of reliability and released updated guidance to refocus personnel reliability on these elements. Additionally, the Air Force has incorporated these nine essential elements into its Arming and Use of Force program, allowing the Air Force to use this program to ensure that its security forces meet nuclear surety requirements. The Air Force has also created a new office within the Air Force Personnel Center, the Personnel Reliability Program Administrative Qualification Cell, to assist with the administrative review process for personnel newly assigned to Personnel Reliability Program positions or returning to Personnel Reliability Program positions after working elsewhere. In response to both the personnel recommendations and the inspections-related recommendations of the 2014 nuclear enterprise reviews, the Joint Staff, the Navy, and the Air Force have made changes to the procedures they use to conduct nuclear personnel reliability inspections at nuclear facilities.
DOD and the Military Services Have Altered Personnel Reliability Standards to Focus on Nine Essential Elements of Reliability

In response to recommendations from the 2014 nuclear enterprise reviews, the Joint Staff led a review of the department's guidance on the personnel reliability assurance program. The Joint Staff, with the assistance of the military services, identified nine elements of DOD's personnel reliability assurance requirements that it considered essential to ensure that personnel working with nuclear weapons fully meet nuclear surety standards of reliability and trustworthiness. These nine essential elements are that an individual must

1. be a U.S. citizen;
2. have a security clearance and be reinvestigated every five years;
3. be fully qualified for the position in which he or she will serve;
4. have reliability verified by the commander before being assigned to a Personnel Reliability Assurance Program position;
5. be continuously monitored by peers, supervisors, and the commander for issues that could affect reliability;
6. have his or her personnel file checked for issues that could affect reliability;
7. undergo a medical evaluation to identify any conditions that could affect reliability;
8. have a personal interview with the commander who will be assessing his or her reliability; and
9. exhibit the character and competence to do the job, including allegiance to the United States and a positive attitude toward nuclear weapons.

In response to the Joint Staff review, the Office of the Assistant Secretary of Defense for Nuclear Matters, through the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, issued a new version of the Personnel Reliability Program manual in January 2015, followed in April 2016 by a reissue and renaming of the overarching DOD instruction as DOD Nuclear Weapons Personnel Reliability Assurance.
This guidance requires that all DOD personnel occupying positions subject to nuclear personnel reliability assurance program standards meet the nine essential elements of reliability. Additionally, the revised guidance removed a procedure for temporary decertification, which under the previous guidance was to occur immediately on receipt of information that was, or appeared to be, a reason for decertification. The manual also makes it clear that personnel reliability assurance programs are the commanders' programs and that the commander is exclusively accountable for determining the fitness for duty of individuals subject to the program. The updated manual also provides some clarity regarding requests for reinstatement by personnel who had previously been decertified from the Personnel Reliability Program. The military services have responded to DOD's changes by updating their own guidance. The Navy has released a new version of its department-specific Personnel Reliability Program manual, applicable to the Navy and Marine Corps, and Army officials told us that the Army plans to release a new version of its manual in early 2018. The Air Force has released a new version of its Personnel Reliability Program manual, in addition to making other guidance changes. Specifically, in response to a provision in DOD's updated personnel reliability guidance that authorizes the military departments to develop reliability guidance specific to their security force personnel guarding nuclear weapons, the Air Force has made changes to its Arming and Use of Force program. Air Force Arming and Use of Force standards include qualification requirements under which all Air Force security forces, whether assigned to a nuclear facility or a non-nuclear facility, are authorized to carry a weapon as part of their official duties. In addition, Air Force nuclear security forces no longer require separate Personnel Reliability Program certification, as they previously did.
The 2014 nuclear enterprise reviews determined that requiring nuclear security forces to meet the standards of two reliability programs at the same time was redundant. Air Force officials told us that maintaining the two reliability programs caused manning problems for the Air Force, because the availability of security force personnel qualified under both programs was limited. As a result of the changes to DOD's guidance, the Air Force rewrote its Arming and Use of Force guidance to incorporate a new chapter that outlines procedures for assessing security forces against each of the nine essential elements of reliability. This change has allowed the Air Force to use its Arming and Use of Force program as its sole method of establishing personnel reliability assurance for Air Force security force personnel. The Air Force continues to use its Personnel Reliability Program to certify nuclear operators and maintainers. Prior to implementing its new version of the Arming and Use of Force standards, the Air Force assessed the new Arming and Use of Force reliability standards as the sole standard for security forces at six Air Force installations (four nuclear installations and two non-nuclear installations) to identify any gaps or areas for improvement in the new guidance prior to its Air Force-wide implementation. The assessment found that the new Arming and Use of Force standard adequately addressed the nine essential elements required of a personnel reliability assurance program, streamlined commanders' monitoring of security forces by merging the Arming and Use of Force standards with the Air Force Personnel Reliability Program standards, and held security force personnel to a higher standard for performing armed duty. The Air Force fully implemented its new version of the Arming and Use of Force standards across the service in February 2016.
As a result of the Air Force’s changes to its Arming and Use of Force guidance, Air Force security forces are now qualified to serve at nuclear facilities and do not need to certify under the Personnel Reliability Program (see fig. 5). Air Force officials told us that requiring security forces to qualify under Arming and Use of Force standards had helped to address manning challenges among nuclear security forces, as well as allowing the Air Force to move experienced security forces personnel from non-nuclear facilities to nuclear assignments. According to several Air Force officials in command of security forces at non-nuclear installations, the changes to the Arming and Use of Force guidance have led to a slight increase in administrative work but have been an overall positive development, in part due to improvements in communication with medical personnel about factors that may affect a determination that an airman should not be armed. All Air Force security force personnel are required to meet the standards of Arming and Use of Force to carry a firearm and perform many of their duties. The Air Force implemented the new version of the Arming and Use of Force standards in 2016. According to Air Force officials, during the implementation, the Air Force decided that security force personnel who were, at that time, disqualified or permanently decertified under the Personnel Reliability Program would not be allowed to certify under the new version of Arming and Use of Force until they had been restored to eligibility for the Personnel Reliability Program. In early 2016, the Air Force conducted a review of 3,167 security force personnel who had previously been decertified or disqualified from the Personnel Reliability Program. The Air Force determined that 2,628 of these personnel were able to attain Personnel Reliability Program eligibility during this review, while 539 were not. 
Because qualifying under the new version of Arming and Use of Force is now a positional requirement, Air Force officials noted that those who do not qualify must retrain for a different job or separate from the Air Force. Air Force officials told us that the security forces career field received a greater number of new security forces personnel than they had been allocated in previous years to account for the loss of personnel who were unable to qualify under the new Arming and Use of Force standards. The Air Force tracks metrics from the Personnel Reliability Program and from the Arming and Use of Force program on an annual basis. Air Force officials told us that they have not yet reviewed the extent to which the changes to Arming and Use of Force made in February 2016 have been effective. Air Force and DOD officials told us that they are waiting until sufficient data are available before making additional changes to the guidance for their personnel reliability assurance program. The Air Force is currently developing a nuclear enterprise health assessment, which will include further assessment of the effects of the changes the Air Force has made to its Personnel Reliability Program and Arming and Use of Force guidance. Air Force officials told us that data collection for this assessment began in the spring of 2017 and that the first summary report will be released in September-October 2017. Once implemented, this Air Force nuclear health assessment will provide an overarching assessment on a periodic basis, similar to a biennial assessment that the Navy conducts of the Navy nuclear enterprise. Unlike the Air Force, the Navy and the Army have opted not to develop separate guidance on nuclear personnel reliability assurance for their security forces personnel. 
Navy and Army officials told us that there was no reason to create separate guidance for their security forces personnel because, unlike the Air Force, they have not faced manning challenges or administrative burdens related to these positions. The Air Force has a much larger nuclear security force, and personnel transfer between nuclear and non-nuclear facilities more frequently within the Air Force than in the other services. The Navy fills security forces positions at the two Navy nuclear facilities with Navy and Marine Corps personnel who report directly from training. According to a Marine Corps official, once these personnel move on to non-nuclear assignments, they generally do not return to nuclear security positions. Army officials told us that their nuclear security forces are highly specialized, very few in number, and serve at only one facility.

The Air Force Has Created a Personnel Reliability Program Administrative Qualification Cell to Facilitate the Assignment Process for Personnel New to Personnel Reliability Program Positions

The Air Force has taken additional steps to improve the Personnel Reliability Program by creating the Air Force Personnel Reliability Program Administrative Qualification Cell to aid with the review of non-security force personnel (e.g., operations personnel, maintenance personnel) as they transition into Personnel Reliability Program positions. Personnel transferring into these positions are subject to an administrative qualification process, which includes a review of their personnel file, medical information, and security clearance information, as well as an interview by the gaining commander to assess them for factors that affect their reliability.
Prior to October 2015, the commander for the unit that the individual was leaving reviewed the individual’s administrative paperwork and then provided an assessment of the individual’s reliability under the Personnel Reliability Program standards to the commander of the gaining unit. Because this initial review was often conducted by commanders outside of the nuclear field, they had less experience than nuclear commanders in conducting such an assessment. According to Air Force officials, this lack of experience often resulted in the standards being applied either too stringently or too loosely and the initial reviews often being completed late. Additionally, although Air Force guidance indicated that personnel transferring directly from one Personnel Reliability Program position to another were not required to undergo administrative qualification, one of the 2014 nuclear enterprise reviews found that some administrative file reviews were occurring. As of November 2016, the Air Force Personnel Reliability Program Administrative Qualification Cell has been staffed by personnel experienced with the standards, and they assist in conducting reviews of many of the Air Force personnel moving to nuclear assignments. The cell performs the administrative review formerly conducted by the commander of the individual’s losing unit and provides a recommendation to the commander of the gaining unit before that commander makes an assessment (see fig. 6). As a result, according to Air Force officials, the qualification process is now completed more quickly, and the administrative burden on commanders has been lessened. Officials from the Air Force Personnel Center told us that the Personnel Reliability Program Administrative Qualification Cell was currently assisting all Air Force Major Commands but had not yet begun working with all Personnel Reliability Program units. 
In addition, in response to a recommendation from the 2014 nuclear enterprise reviews, the Air Force has eliminated administrative reviews that some commands were conducting of personnel transferring directly from one Personnel Reliability Program position to another, which were not required by the Air Force's guidance. These personnel remain subject to continuous monitoring, so they do not require new administrative qualification reviews.

DOD, the Air Force, and the Navy Have Made Changes to the Inspections Processes for Their Personnel Reliability Programs

DOD, the Air Force, and the Navy also made changes to their nuclear inspections processes in response to the 2014 nuclear enterprise reviews. Nuclear units are subject to a number of different inspections. For example, Joint Staff guidance requires that each of the services conduct Nuclear Weapons Technical Inspections biennially at each of their nuclear units. These inspections are intended to examine every aspect of the nuclear mission at that unit, including the processes of the personnel reliability assurance program. Because of the importance of maintaining nuclear surety by keeping nuclear weapons safe and secure, units that receive an unsatisfactory rating on an inspection may be decertified from conducting operations or may have a portion of their nuclear capabilities withdrawn, retaining only a limited nuclear capability in mission areas that would not jeopardize the safety, security, or reliability of the nuclear weapons. The 2014 nuclear enterprise reviews found that inspections of nuclear forces occurred too frequently and that the procedures for inspections of personnel reliability assurance programs had become overly burdensome because of their focus on records review. The reviews found that, as a result, these personnel reliability assurance programs had become dominated by processes intended to prepare for inspections rather than to ensure personnel reliability.
Before the 2014 nuclear enterprise reviews, DOD personnel working with nuclear weapons were subject to frequent inspections by multiple organizations. According to DOD officials, Air Force major commands and Navy commands were performing inspections at nuclear units under their control every 18 months. One such inspection was conducted as a combined military service and Defense Threat Reduction Agency inspection, with each service inspecting additional specific areas. For Navy units, the Navy would accept the Defense Threat Reduction Agency inspection report, and Navy inspectors would review additional, service-specific items. For Air Force units, the combined inspection was performed concurrently, with the Air Force inspecting the same items as the Defense Threat Reduction Agency inspectors as well as reviewing additional, service-specific items; this resulted in two separate inspection teams and a larger number of inspectors present. The 2014 nuclear enterprise reviews found that a mistake by a single individual could result in an entire submarine or wing receiving an unsatisfactory rating—even in cases not involving a clear, critical error—potentially leading to the withdrawal of their nuclear weapons capabilities. The Independent Review of the Department of Defense Nuclear Enterprise found that the high frequency of inspections resulted in nuclear units spending significant time preparing for inspections rather than focusing on performing their mission. The Independent Review also stated that the portions of these inspections concerned with the personnel reliability assurance program were heavily focused on records review, especially at Air Force nuclear units. During each inspection, inspectors would review hundreds of personnel files and medical records to assess whether the commander and medical staff had made the correct decision in determining an individual to be reliable.
Air Force officials told us that commanders and their medical staffs could be found deficient for improperly certifying individuals as reliable even if these individuals had been able to perform their duties without any issues—for example, after routine medical procedures such as a regular check-up with an eye doctor. As a result, commanders and medical staff at these units implemented additional procedures beyond those outlined in DOD guidance, such as temporarily suspending personnel from Personnel Reliability Program duties for every off-base medical appointment, regardless of whether it could affect their reliability. Additionally, according to the Internal Assessment of the Department of Defense Nuclear Enterprise, inspectors also cited minor administrative deficiencies that were unrelated to personnel reliability, such as using the wrong color of ink to fill out a form. To address the findings of the 2014 nuclear enterprise reviews, DOD has updated its inspection procedures. The Joint Staff has updated the Nuclear Weapons Technical Inspections guidance to reduce the frequency of inspections at nuclear units from every 18 months to every 24 months. DOD's Defense Threat Reduction Agency no longer conducts joint inspections with the services but is instead responsible for providing oversight of the services' inspectors on behalf of the Chairman of the Joint Chiefs of Staff. For the portion of the inspection concerned with personnel reliability assurance, the updated guidance de-emphasizes records reviews in favor of examining processes and procedures through observation, interviews, and scenario-based discussions. The Navy and the Air Force have also updated their inspection procedures to implement these changes in DOD's guidance. For example, Air Force inspectors do not conduct records checks unless the interviews and scenario-based discussions reveal a lack of procedural knowledge.
Similarly, Navy officials stated that Navy inspectors review additional records as needed if a lack of procedural knowledge is revealed. To aid in assessing the overall effectiveness of the updated inspection procedures, the Navy has opted to also review a sample of the health records of personnel recently certified or reinstated into the Personnel Reliability Program. According to Air Force officials at one nuclear wing that had recently undergone a Nuclear Weapons Technical Inspection, the changes that DOD and the Air Force have made to inspection procedures for personnel reliability assurance programs have had a positive effect. These officials stated that the increased use of scenario-based discussions and knowledge checks, combined with inspectors taking a less adversarial and more conversational approach to their inspection inquiries, has created an environment in which personnel feel more comfortable self-disclosing problems or mistakes and in which the focus of the inspection is on process improvement rather than on identifying administrative errors, regardless of whether those errors reflect substantive deficiencies.

Conclusions

DOD has taken steps to improve the defense nuclear enterprise in response to the 2014 nuclear enterprise reviews and the 2015 NC3 report. The processes CAPE has developed to track and evaluate continuing progress to improve the defense nuclear enterprise—including changes in DOD's and the military services' approaches to administering their personnel reliability assurance programs—provide a good framework for continually monitoring the department's efforts. This framework also offers a model for how similar efforts to implement and oversee department-wide improvements on a wide range of subjects could be carried out effectively.
By developing additional guidance to identify and document risks associated with implementing the recommendations from the 2014 nuclear enterprise reviews, and by identifying and communicating performance measures, milestones, and risks for the 2015 NC3 report recommendations, the department—particularly through the NDERG and the Oversight Council for NC3—would be better positioned to ensure that progress continues to be made, underlying problems are addressed, and risks are mitigated or accepted after considering the predictable and undesirable results.

Recommendations for Executive Action

We are making the following two recommendations to DOD:

The Director of CAPE, in coordination with the military departments and other DOD entities serving as offices of primary responsibility for implementing the recommendations, should develop additional guidance for these offices to identify associated risks and document information about these risks in the centralized tracking tool. (Recommendation 1)

DOD CIO—in coordination with CAPE, the military departments, the Joint Staff, and U.S. Strategic Command—should, as the draft template and any other additional tools to aid in their approach are finalized, identify and communicate to NC3 stakeholders (1) performance measures and milestones to assist in tracking the progress of implementation of the recommendations from the 2015 NC3 report and evaluating the outcomes of implementation actions, and (2) risks associated with implementing those recommendations. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of the classified report to DOD for comment. In its comments, reproduced in appendix I, DOD concurred with both of our recommendations.
In response to our first recommendation, DOD indicated that the Director, CAPE, will issue supplementary guidance for the relevant DOD components to identify and document key risks related to implementation of recommendations from the 2014 reviews, risks related to implementation of alternate approaches, and potential unintended consequences. In response to our second recommendation, DOD stated that DOD CIO will work with the stakeholders of the Council on Oversight of the National Leadership Command, Control, and Communications System to identify and document performance measures and milestones associated with progress toward the recommendations from the 2015 NC3 report, as well as the risks related to implementation of these recommendations. We are encouraged that DOD is planning to take these actions and believe that, once they have been completed, the department will be better positioned to ensure that progress in implementing the recommendations from both the 2014 nuclear enterprise reviews and the 2015 NC3 report continues to be made, underlying problems within the defense nuclear enterprise are addressed, and risks are mitigated or accepted after deliberate consideration. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, and to the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, of the Navy, and of the Air Force; the Commandant of the Marine Corps; the Commander, U.S. Strategic Command; the Department of Defense Chief Information Officer; and the Director of the Office of Cost Assessment and Program Evaluation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, key contributors to this report were Penney Harwell Caramia, Assistant Director; Chris Cronin; R. Scott Fletcher; Jonathan Gill; Brent Helt; Douglas Hunker; Joanne Landesman; Marc Molino; Amie Lesser; Pamela Davidson; and Michael Shaughnessy.

Related GAO Products

Nuclear Weapons Sustainment: Budget Estimates Report Contains More Information than in Prior Fiscal Years, but Transparency Can Be Improved. GAO-17-557. Washington, D.C.: July 20, 2017.

Nuclear Weapons: DOD Assessed the Need for Each Leg of the Strategic Triad and Considered Other Reductions to Nuclear Force. GAO-16-740. Washington, D.C.: September 22, 2016.

Defense Nuclear Enterprise: DOD Has Established Processes for Implementing and Tracking Recommendations to Improve Leadership, Morale, and Operations. GAO-16-597R. Washington, D.C.: July 14, 2016.

Nuclear Weapons Sustainment: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency. GAO-16-23. Washington, D.C.: December 10, 2015.
Why GAO Did This Study

In 2014, the Secretary of Defense directed two reviews of DOD's nuclear enterprise. These reviews identified problems with leadership, organization, investment, morale, policy, and procedures, as well as other shortcomings that adversely affected the nuclear deterrence mission. The reviews also made recommendations to address these problems. In 2015, DOD conducted a review focused on NC3 systems, which resulted in additional recommendations. The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to review DOD's processes for addressing these recommendations, and House Report 114-537 includes a provision for GAO to review changes to DOD's nuclear personnel reliability assurance programs. This report addresses the extent to which DOD and the military services have (1) made progress in implementing recommendations to improve the nuclear enterprise and (2) made changes to their personnel reliability assurance programs. GAO reviewed relevant documents and interviewed agency officials from DOD and the military services. This is a public version of a classified report GAO issued in August 2017. It omits information DOD deemed classified.

What GAO Found

The Department of Defense (DOD) has made progress in implementing the recommendations from the 2014 nuclear enterprise reviews and the 2015 nuclear command, control, and communications (NC3) systems report. In December 2016, the Office of Cost Assessment and Program Evaluation (CAPE) provided the military services with guidance that emphasizes using performance measures and milestones to evaluate progress, to aid them in tracking and analyzing their implementation of the recommendations from the 2014 nuclear enterprise reviews. However, CAPE's guidance does not require the military services and other DOD components to identify and document risks as part of the recommendation tracking process.
As a result, DOD does not consistently identify and document risks, and it may not be identifying and communicating potential risks related to the nuclear enterprise. One of the 2014 nuclear enterprise reviews found that many leaders within the enterprise avoided managing risks, which adversely affected the mission. Developing additional guidance on identifying and documenting risks could enhance DOD's ability to oversee its efforts, monitor progress, and make informed responses to address any identified risks. For recommendations made in the 2015 NC3 report, DOD's Office of the Chief Information Officer (DOD CIO) uses an internal spreadsheet to track implementation but has not yet identified performance measures, milestones, or risks. DOD CIO has drafted a template that, once approved and implemented, will provide a form that could be used for documenting performance measures, milestones, and risks. By identifying and communicating this information, DOD CIO could improve its efforts to track the progress of DOD's actions, evaluate their effects, and formulate responses to risks. DOD and the military services have implemented changes to their personnel reliability assurance programs in response to recommendations from the 2014 nuclear enterprise reviews. These programs are intended to ensure that DOD personnel who work with nuclear weapons and nuclear weapons systems, NC3 systems and equipment, and special nuclear material are trustworthy, reliable, and capable of performing their assigned nuclear weapons-related mission. The 2014 nuclear enterprise reviews found that these programs were overly complex and administratively burdensome and that frequent, intrusive inspections left nuclear units more focused on preparing for and responding to inspections than on ensuring personnel reliability.
DOD and the services have updated their guidance for personnel reliability assurance programs, including focusing on nine essential elements of reliability. For example, the Air Force has incorporated these elements into the standards it uses for its security forces. Additionally, the Air Force has centralized some of its administrative processes, and the Joint Staff has updated inspection procedures in a way that may ease the burden on personnel being inspected.

What GAO Recommends

GAO recommends that DOD develop additional guidance on identifying and documenting risks for the recommendations from the 2014 reviews, and identify and communicate performance measures, milestones, and risks for the recommendations from the 2015 NC3 report. DOD concurred with the recommendations and provided information about planned actions to implement them.