Evaluation is often misunderstood in the United Nations. This is, of course, a personal observation, but it comes from years of undertaking evaluations and working on monitoring and evaluation in learning and training programmes. When done right, an evaluation is like an old friend who tells you the truth, even if the message is not what you want to hear. Within a bureaucracy, the stakes can be high when unfavourable evaluation findings are read as evidence of poor management. To benefit from what evaluation has to offer, the UN has to undergo a culture change.

Currently, the way evaluations are carried out can be problematic. In an organization such as the UN, where monitoring and evaluation of activities are often mandatory, evaluations are rarely done systematically or with rigour. Real monitoring is seldom performed, and evaluations are more often than not ‘tick-the-box’ exercises rather than tools to verify or reject the assumptions behind a policy or a project. In a nutshell, evaluation findings rarely find their way into the meeting rooms where policies and strategic plans are discussed, and they are not systematically used to inform decision-making. We often treat evaluation as a “must-do” rather than a “need-to-do” or “want-to-do” activity.

Independent evaluations are often based on qualitative measures. Consultants recruited to undertake them base their findings on a combination of desk reviews, interviews with staff and UN counterparts, and, at times, interviews with direct beneficiaries. While useful, qualitative evaluations have limitations. Chief among them, the findings rest on assumptions that can be influenced by individual biases; although certain methods can be used to verify those assumptions, the findings cannot be fully trusted.

Undertaking quantitative evaluations in the UN is not a simple task. Despite ample sets of available data on operational activities (e.g., finance, human resources), we are not well equipped with measurable indicators when it comes to field-level activities (i.e., projects). For example, to assess the effectiveness and the impact of a project, the two most common evaluation methods used in the social sciences are correlational and quasi-experimental designs. To apply these methods, one must have pre- and post-intervention datasets. Rarely do we have a pre-intervention dataset in a country setting that an evaluator can use for before-and-after comparisons, which makes it impossible to attribute results to the project.
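To make the pre- and post-comparison logic concrete, here is a minimal sketch in Python of a difference-in-differences estimate, the workhorse of quasi-experimental designs. The outcome, the groups and all the numbers are invented purely for illustration, not drawn from any UN project:

```python
# Minimal sketch of a difference-in-differences (quasi-experimental)
# comparison. All figures are invented for illustration; a real
# evaluation would use baseline (pre) and endline (post) survey data.

def difference_in_differences(treated_pre, treated_post,
                              control_pre, control_post):
    """Estimate the project's effect as the change in the treated
    group minus the change in the comparison group."""
    treated_change = treated_post - treated_pre
    control_change = control_post - control_pre
    return treated_change - control_change

# Hypothetical outcome: average household income (USD/month).
effect = difference_in_differences(
    treated_pre=210.0,   # project districts, before the project
    treated_post=260.0,  # project districts, after the project
    control_pre=215.0,   # comparison districts, before
    control_post=240.0,  # comparison districts, after
)
print(f"Estimated project effect: {effect:+.1f} USD/month")  # +25.0
```

Without the two ‘pre’ figures, the calculation collapses into a simple post-project comparison, and any change could as easily reflect background trends as the project itself.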

Time is never on the side of the evaluator. The average time dedicated to an evaluation process is usually no more than four weeks. This is not long enough for a systematic evaluation, when the collection of data, analysis and reporting normally take much longer. For a rigorous evaluation, a survey has to be designed, tested, refined and implemented. Implementation is itself a lengthy process: a sample large and diverse enough to represent the population has to be randomly selected and surveyed. This is perhaps the most extensive and time-consuming stage of an evaluation exercise. To make the findings meaningful, an evaluation must devote adequate time and money to the collection of data.
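As a rough illustration of why data collection dominates the timeline, here is the textbook sample-size formula for estimating a proportion, sketched in Python. The confidence level and margin of error are common defaults chosen for illustration, not UN-prescribed values:

```python
import math

# Standard sample-size formula for estimating a proportion:
#     n = z^2 * p * (1 - p) / e^2
# Illustrative inputs: 95% confidence, maximum variability,
# and a margin of error of +/- 5 percentage points.
z = 1.96   # z-score for 95% confidence
p = 0.5    # assumed proportion (0.5 yields the largest n)
e = 0.05   # margin of error

n = math.ceil(z**2 * p * (1 - p) / e**2)
print(f"Respondents needed: {n}")  # 385
```

Recruiting, training and fielding enumerators to reach several hundred randomly selected respondents is rarely feasible inside a four-week window.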

Funding for evaluation is often insufficient. This creates a problem for the consultant when she is responsible for all stages of the process, including the analysis and the reporting. With a combination of inadequate time and money, the evaluator must take shortcuts to meet the deadline.

While there are professional evaluators, and others within the UN with considerable expertise in monitoring and evaluation, many staff responsible for managing evaluations lack the skills to assess the quality and accuracy of evaluation designs and their findings. In this context, the managers commissioning evaluations often do not know evaluation methodologies well enough, and are thus ill-equipped to oversee them.

Evaluators and those responsible for evaluation within the UN system are not regularly present when corporate policies or programme objectives are discussed and formulated. When evaluation is missing in action, evidence-based decision-making suffers. The current push to put data at the forefront when setting new objectives or designing new projects can change the practice of viewing evaluation as a tick-the-box exercise. Evaluation is an essential task that should not only look at the past but also inform future planning.

After offering courses on monitoring and evaluation for several years, the Staff College has been asked by different entities within the past year to support them in designing and delivering courses on evaluation for a large number of staff in various positions. This is a welcome change, as it shows that these entities increasingly appreciate the benefits of rigorous evaluation. A better understanding of evaluation marks the beginning of a culture change in the UN. The shift to viewing evaluation as a friend may be slow, but it is coming.