The White House Office of Management and Budget (OMB) evaluates research at the U.S. Environmental Protection Agency and other federal agencies using the Program Assessment Rating Tool (PART), a questionnaire that asks agencies about many aspects of their programs, including whether they can measure and demonstrate annual improvements in efficiency.

Based on the answers, OMB rates research programs as effective, ineffective, or somewhere in between. An "ineffective" rating can have serious adverse consequences for a program or agency. After experiencing difficulty meeting OMB's requirements to demonstrate the efficiency of its research programs, EPA asked a National Research Council committee for guidance on how to measure efficiency.

The committee's new report recommends four changes in how the federal government assesses the efficiency of research at EPA and other agencies.

First, the report emphasizes that assessing efficiency should be considered only one part of evaluating a program's quality, relevance, and effectiveness.

Second, the report introduces a novel distinction between "investment efficiency" and "process efficiency," and recommends that these aspects be evaluated in different ways. Assessments of investment efficiency should examine whether an agency's R&D portfolio, including its budget, is relevant and of high quality, matches the agency's strategic plan, and is adjusted as new knowledge and priorities emerge.

These evaluations require panels of experts, which should include both scientists and other stakeholders. In contrast, evaluations of process efficiency should focus on "inputs" (the people, funds, and facilities dedicated to research) and "outputs" (the services, grants, publications, monitoring, and new techniques produced by research), as well as their timelines. Of the nine measures currently used by R&D agencies, common examples are the number of grants awarded or publications produced annually, which could be assessed against appropriate benchmarks.
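To make the idea concrete, the sketch below shows how such an output measure might be compared against a benchmark. The metric (publications per research full-time equivalent), the figures, and the benchmark value are all hypothetical illustrations; they are not measures prescribed by the report or used by EPA.

```python
# Hypothetical illustration of a process-efficiency measure: annual outputs
# (here, publications) per unit of input (here, research full-time equivalents),
# compared against an agreed benchmark. All numbers are invented for illustration.

def outputs_per_fte(outputs: int, research_ftes: float) -> float:
    """Return outputs produced per full-time-equivalent researcher."""
    return outputs / research_ftes

publications = 180      # annual publication count (hypothetical)
research_ftes = 75.0    # people dedicated to the research program (hypothetical)
benchmark = 2.0         # agreed publications-per-FTE benchmark (hypothetical)

ratio = outputs_per_fte(publications, research_ftes)
print(f"Publications per FTE: {ratio:.2f} (benchmark: {benchmark:.2f})")
print("Meets benchmark" if ratio >= benchmark else "Below benchmark")
```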

Third, the committee clarified the roles of outputs and outcomes. EPA's difficulties in answering PART's questions about efficiency have grown out of OMB's insistence that the agency find ways to measure the efficiency of its research based on outcomes, rather than outputs.

Measuring research efficiency based on what the committee describes as "ultimate outcomes" -- for example, whether a program eventually results in cleaner air or fewer deaths -- is neither achievable nor valid, because such outcomes occur far in the future and are highly dependent upon actions taken by many other people who may or may not use the research findings. The committee's review of practices across government R&D agencies revealed that no agency has found a way to demonstrate efficiency based on ultimate outcomes.

The report does endorse evaluating efficiency based on "intermediate outcomes" -- for example, assessing whether a program has improved the body of knowledge for decision-making, or disseminated newly developed tools and models. These assessments would be conducted most effectively by expert review panels rather than by formulas alone.

Fourth, the committee recommended that the efficiency of EPA's research programs be evaluated according to the same standards used at other agencies. OMB has rejected some methods for measuring research efficiency when proposed by EPA, but accepted them when proposed by other agencies. OMB should train and oversee its budget examiners to make sure they implement the PART questionnaire consistently and equitably across agencies.

The study was sponsored by the U.S. Environmental Protection Agency. The National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council make up the National Academies.

Copies of EVALUATING RESEARCH EFFICIENCY IN THE U.S. ENVIRONMENTAL PROTECTION AGENCY are available at http://www.nap.edu.