Thursday, January 25, 2018

When "Strong Evidence" Is Not Sufficient

Here's a recent blog post of mine on InterAction's website.

Despite the promotion of experimental and quasi-experimental designs as providing strong evidence of "what works," this post argues that such evidence alone is not sufficient for learning about the actual effectiveness of interventions in the real-world context of international development.

Wednesday, August 31, 2016

Network Weaving for Regional Development

I just completed a study, with Lasha Bokuchava, using social network analysis to evaluate two regional alliances.

The report can be downloaded at:

My best,

Thursday, January 2, 2014

Evaluation Calendar

Lars Balzer, who created and manages Evaluation Portal, has recently created Evaluation Calendar. The Evaluation Calendar provides a detailed listing of evaluation-related training, courses, and presentations being conducted worldwide.

The first event listed, for 2014, is a 5-day course on Technical Approaches for Conducting Impact Evaluations, being held from 7-11 January 2014 at the Center for Learning on Evaluation and Results for Anglophone Africa.

If you know of any evaluation-related training, courses, or presentations that will be occurring, please add them to Lars's Evaluation Calendar.

Monday, December 23, 2013

Proxy Means Testing in a Conditional Cash Transfer Program

Based on a US$100 million conditional cash transfer (CCT) project in Kazakhstan, I have written, with some colleagues, a handbook on the various approaches to targeting beneficiaries in a CCT program. It highlights the reasons, benefits, challenges, and costs of using a Proxy Means Test for beneficiary selection.

Take a look:  PMT Handbook

Saturday, September 15, 2012

Statistics Without Borders

How many times have you wished you could get professional assistance, at little or no cost, in designing a quantitative study or evaluation, or in statistically analyzing data you already have?

If you are a non-profit humanitarian and/or development organization, then Statistics Without Borders can help.

Statistics Without Borders is an all-volunteer group of 300 statisticians who are willing to provide pro bono consultancy services. The three main services are 1) research, 2) statistical analysis, and 3) survey design. Projects Statistics Without Borders has worked on to date include "Child Mortality in Afghan Refugee Camp in Pakistan," "CARE Girl's Workload Study" and "Digital Data Collection in Haiti."

So, if you need help with research, statistical analysis, and/or survey design, send a brief outline of your idea or project to 

Thursday, October 27, 2011

Making Evaluations Matter - A Practical Guide for Evaluators

Recently, the Centre for Development Innovation, Wageningen University & Research Centre, Wageningen, The Netherlands, published Making Evaluations Matter: A Practical Guide for Evaluators written by Cecile Kusters with Simone van Vugt, Seerp Wigboldus, Bob Williams and Jim Woodhill.

This guide emphasizes participatory evaluation and draws heavily upon the work of Michael Quinn Patton, especially from Utilization-Focused Evaluation.

I think this is a very handy guide not only for evaluators, but also for country directors, project managers, and project directors to read prior to implementing a project, as well as toward the end of a project when planning an evaluation.

The contents are the following:

1. Core Principles for guiding evaluations that matter.
2. Suggested steps for designing and facilitating evaluations that matter.
3. Getting stakeholders to contribute successfully.
4. Turning evaluation into a learning process.
5. Thinking through the possible influences and consequences of evaluation on change processes.
6. Conclusion

Annex A: Examples of (Learning) Purposes, Assessment Questions, Users, and Uses of an Evaluation for a Food Security Initiative.

Annex B: Contrasts between traditional evaluation and complexity-sensitive developmental evaluation.

Wednesday, October 19, 2011

Self-administered questionnaires

In survey research, especially when questions on sensitive topics are being asked, there is debate over which form of questionnaire administration is best: a) interviewer administration or b) self-administration. More often than not, questionnaires are administered by a trained interviewer; however, there are times when it is felt best for the respondent to complete the questionnaire without the assistance of an interviewer (self-administered).

Currently, I'm dealing with survey data from a youth study that used a self-administered questionnaire, and the data contain many "missing" cases, nonsensical responses, and numerous Errors of Commission and Errors of Omission. An Error of Commission occurs when a respondent answers a question they should have skipped; an Error of Omission occurs when they fail to answer a question they should have answered.
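Screening a dataset for these two error types can be automated once the questionnaire's skip rules are written down. Below is a minimal sketch; the field names (b1, b2) and the skip rule (answering "No" to B1 routes the respondent past B2) are hypothetical, chosen to mirror the B1/B2 navigation example discussed later in this post.

```python
# Sketch: flagging Errors of Commission and Omission against one skip rule.
# Assumed rule: B2 should be answered only when B1 == "Yes".

def audit_skip_logic(records):
    """Return (record index, error type) pairs for each skip-logic violation.

    - "commission": B2 answered although B1 == "No" (should have skipped)
    - "omission":   B2 blank although B1 == "Yes" (should have answered)
    """
    errors = []
    for i, rec in enumerate(records):
        b1, b2 = rec.get("b1"), rec.get("b2")
        if b1 == "No" and b2 not in (None, ""):
            errors.append((i, "commission"))
        elif b1 == "Yes" and b2 in (None, ""):
            errors.append((i, "omission"))
    return errors

sample = [
    {"b1": "Yes", "b2": "Teacher"},  # valid: answered B2 as required
    {"b1": "No",  "b2": "Teacher"},  # commission: answered B2 after "No"
    {"b1": "Yes", "b2": ""},         # omission: skipped B2 after "Yes"
    {"b1": "No",  "b2": None},       # valid skip
]
print(audit_skip_logic(sample))  # → [(1, 'commission'), (2, 'omission')]
```

In a real study there would be one such rule per branch point in the questionnaire, and the flagged cases would be reviewed before deciding whether to clean, impute, or drop them.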

A questionnaire designed for an interviewer-administered survey cannot simply be reused for a self-administered survey! Interviewers are trained to understand the questions and to navigate the questionnaire; a questionnaire that a respondent who has never seen it before must both understand and navigate requires special attention to many factors. Using p. 6 of the 2008 National Survey of College Graduates, conducted by the US Census Bureau, to illustrate, some critical factors to consider for a self-administered questionnaire are:

  • Language - the instructions and questions need to be written in a vocabulary slightly below the lowest expected education level of any respondent.
  • Section headings - every section/topic needs a heading that is short, in bold font, and in a slightly different color than the rest of the questionnaire, such as Part B - Past Employment.
  • Question numbering - question numbers should carry the section lettering/numbering as well as the question number, and should be in a slightly larger, bold font than the question text, such as B1.
  • Verbal navigation - instructions next to certain responses that clearly tell the respondent where to go next. In the example above, if the respondent answers "No" to question B1, a verbal instruction in bold font tells them both 1) the page and 2) the question number to go to.
  • Symbol navigation - generally arrows showing a respondent where to go next if they choose a certain response. Above, if a respondent answers "Yes" to question B1, an arrow directs them to question B2.
  • Adequate spacing - all too often a questionnaire is cluttered to save printing costs; a trained interviewer can generally cope with this, but a self-administered questionnaire should have adequate spacing between questions to reduce eye fatigue and confusion.
  • Coloring - if possible, use slightly different grays or colors to highlight different sections and responses, as in the example above.