Global Environment Facility

GEF/ME/C.25/2
May 6, 2005
GEF Council
June 3-8, 2005







MANAGEMENT RESPONSE TO THE
GEF ANNUAL PERFORMANCE REPORT (2004)




(Prepared by the GEF Secretariat and the
GEF Implementing Agencies)





INTRODUCTION

1. We welcome the presentation of the 2004 APR. Its preparation reflects very considerable effort by the Office of Monitoring and Evaluation, building upon the M&E systems of the Implementing Agencies. The 2004 APR provides a series of useful insights through the three dissimilar building blocks it uses: (i) a one-time study on elapsed times in the preparation of GEF projects, (ii) an assessment of the quality of terminal evaluation reports, and (iii) an assessment of the quality of project M&E systems. As such, it represents an improvement over previous versions and could be the first in a series of increasingly useful APRs.

2. An important consideration in the 2004 APR is the analysis of time lags. A lag exists between the publication of M&E findings and the time when the results of adjusted practices become visible in the portfolio. For example, many findings that apply to projects at entry cannot be seen in the portfolio immediately; whether such findings have been incorporated in project design can only be tested in cohorts of new project entries, since results cannot influence project design retroactively. Analysis by cohort should therefore be used whenever possible, as has been done here.

ELAPSED TIME IN THE PREPARATION OF GEF PROJECTS

3. This is a useful and well-designed one-time study that provides important and balanced findings regarding the causes of delays in GEF project preparation. It does, however, downplay some important sources of delay, such as the time it takes to obtain endorsement letters from focal points, the additional time required by GEF-specific processes, and the innovative character of many GEF projects, which can require additional design time.

4. We agree with the recommendation for a better delineation of roles, including focusing Council priorities on policy and program matters rather than project reviews. Increased technical scrutiny by the Council often duplicates the technical review functions of the IA safeguard teams and of the GEF Secretariat.

5. We also agree with the need for increased transparency in the approval process, including the exploration of alternatives such as internet-accessible databases, as well as an active management approach to project approval. Some IAs, however, have pointed out that the client-oriented nature of project preparation already makes the process quite transparent.

THE QUALITY OF TERMINAL EVALUATION REPORTS

6. This important section develops a robust methodology for assessing the quality of the terminal evaluations conducted by the Implementing Agencies, although we question the validity of applying the methodology retroactively. The methodology is useful for tracking the quality of terminal evaluations over time, and if it is to be used in the future, this needs to be communicated explicitly to the IAs. In addition, we note that the small sample size limits the validity of statistical analyses of these results. We agree with the OME that the observed decrease in UNDP ratings, for example, cannot necessarily be considered a trend, because it is based on only six terminal evaluations.

7. The results of the terminal evaluations can be summarized by analyzing the data in Annex 3 (page 65) of the APR. We present such a summary here to facilitate the reader's review:

Rating              Achievement of Objectives    Sustainability
Good or Better                     19                  21
Less than Good                      3                   2
Not Ranked                          3                   2
Total                              25                  25

8. As the table shows, 86% of rated projects (19 of 22) received a rating of "good or better" for achievement of objectives, and 91% (21 of 23) received a rating of "good or better" for sustainability. In the future, it would be important for the APR to concentrate on analyzing and discussing such substantive issues.

THE QUALITY OF PROJECT M&E SYSTEMS

9. This is another useful section, summarizing and discussing the quality of the M&E systems used by the IAs at the project level. We agree that there has been a marked improvement in both the number of projects with adequate M&E systems and the quality of those systems. Although the report calls for further improvements, it is important to point out that many remaining weaknesses are inherent to some of the focal areas and cannot be attributed to the GEF alone. For example, measuring biodiversity impacts directly is not feasible given current levels of scientific uncertainty; instead, it is widely accepted that certain outcomes, such as the presence of effectively managed protected areas or the maintenance of habitat integrity, can serve as strong proxies for impacts.

CONCLUSIONS

10. The 2004 APR is a useful and welcome step in the direction of better characterizing the GEF portfolio.

11. In the future, the APR needs to be complemented by a serious effort at portfolio-level monitoring of outcomes and, whenever possible, impacts. The establishment of the independent Office of Monitoring and Evaluation provides an opportunity for the GEF Secretariat to exercise greater leadership in portfolio-level monitoring. Under an ideal division of labor among GEF entities, the IAs would be responsible for project-level quality and monitoring, while the GEF Secretariat would concentrate on portfolio-level strategic issues and monitoring. Such a division of labor, repeatedly called for by various M&E studies, would also help streamline the project cycle by avoiding overlapping review functions at project entry.






12. The GEF Secretariat wishes to advance such thinking and, working through the Focal Area Task Forces, will apply portfolio-level monitoring on a pilot basis in 2005, possibly starting with the biodiversity focal area.