Thursday, December 3, 2009

Pre-test of Project Group with Post-test of Project and Comparison Groups, Design #5

The next set of project evaluation designs that I will be presenting are considered weaker than the previous designs; weaker meaning that they cannot provide good evidence that measured outcomes are directly attributable to a project's interventions, or show to what degree the project contributed to those outcomes (net impact).

The pre-test with a project group combined with a post-test of both project and comparison groups design is shown below. As the name implies, the pre-test (baseline study) is conducted only with the people involved in the project. There could be a number of reasons why this might occur: to save money, the project team not initially liking the idea of a comparison group, or some technical reason. Later, however, there may be the budget, interest or feasibility to include a comparison group.

The advantages of this design are that it can assess reasonably well how a project is being implemented and whether intended outputs from activities were produced. IF the comparison group studied at the end of the project is quite similar to the project group in characteristics, and adequate mixed methods can demonstrate that the comparison group was similar to the project group at baseline on the outcomes being measured, then this design MIGHT demonstrate project effects.

The disadvantages are that it is difficult to conclusively determine whether the differences between the project and comparison groups at the end-line study are due to the project or to other factors. Another weakness of this design is that local context events can affect outcomes in the comparison group. For example, an agricultural project compares the agricultural output of its farmers to a comparison group of farmers without knowing that the comparison group received more irrigation water than in previous years, thus increasing their output. To monitor local context events, retrospective information can be obtained from the comparison group.

Truncated Pre-test and Post-test with Comparison Group, Design #4

How do you evaluate a project if no baseline was conducted and the project is being implemented? One possible approach, if a project has an adequate budget and has a reasonable amount of time left before it is completed, is the use of a truncated pre-test and post-test with a comparison group.

The truncated, or shortened, quasi-experimental project evaluation design uses a mid-term study as a proxy measure for the baseline even while recognizing that the project has been underway for some time. Again, as in all the other more rigorous designs, a comparison group is studied so as to estimate the net impact of the program interventions; however, with no baseline this design is weaker than the previous ones.

The post-test, or final study, is conducted at the end of the project. One advantage of this design is that smaller sample sizes can be used, since the shorter time frame reduces the possibility of respondent loss (attrition). One of the drawbacks is that without a baseline measure it is not possible to know the total amount of change over the life of the project; rather, inferences have to be made based on contextual analysis and mixed methods approaches.

Friday, November 27, 2009

Project Evaluation: General Purpose Quantitative Evaluation Design #3

The next RealWorld (Bamberger, Rugh, Mabry: 2006) project evaluation design (#3) is a simplified version of the previous two designs, but still considered one of the more robust project designs. It is simplified in that it has only two studies: baseline and end-line, which is basically a pre- and post-test. But, like the two previous rigorous designs, it includes both participants and a matched comparison group.

The cost, time, and data constraints are reduced with this design compared to the previous two, but are still relatively substantial; nonetheless, it provides more convincing evidence of project success than weaker designs.

With all three robust quantitative project designs discussed so far, it is essential to combine them with qualitative (mixed) methods that focus especially on overall project implementation (quality), the context (social, cultural, economic and political) in which the project occurs, and cases (case studies).

So, the three most rigorous project designs have been presented. Let me know what you think of them or if you have used them, what were the benefits, challenges or drawbacks. Just use the Comments section below.

Second Most Robust Quantitative Evaluation-Design #2

In this series on various types of quantitative project evaluation designs, let's look at another design that is considered very rigorous, using quasi-experimental methods. This design is quite similar to Design #1 presented in an earlier blog post, in that two groups (participants and a comparison group of non-participants) are studied over the life of the project; HOWEVER, this design does not include the 4th study that was in Design #1, the post-project follow-up, but rather has three: baseline, mid-point, and end-line.

As in the most rigorous project evaluation design, what makes this design rigorous are a) the use of a matched comparison group, which helps establish the counterfactual [i.e., what would have happened if the project had not occurred], and b) measurements taken at three points in time. What makes this design slightly less rigorous is that without the post-project study, the sustainability or trajectory of the results is not known. In other words, after a certain period of time with no interventions, were the results among the participants at the end of the project sustained, did they increase, or did they eventually decline?

Many of the limitations that applied to the first design also apply to this one, which partly explains why it is not often used among NGOs. First, it requires more time and cost to collect data at 3 points in time and among two groups. Second, the sample size must be relatively large to account for loss or attrition of members in both groups over this period of time. Third, data management and analysis can be a challenge.

In an effort to clearly demonstrate "What Works", this is a project evaluation design that should be considered more often than it currently is, especially for longer-term projects that span 3 or more years. BUT, such designs must be included at the proposal development phase; otherwise, trying to fund, arrange and organize such a project evaluation becomes difficult.

Wednesday, November 25, 2009

Where Every Project Succeeds and Every Intervention is Above Average!

On 20 November 2009, Nicholas Kristof published an article in the New York Times called "How Can We Help the World's Poor?" He discussed three views on this question, held by those who think: 1) that aid is crucial to helping the poor, 2) that aid, and aid organizations, don't help but actually hurt the poor, and 3) that there are both shortcomings and successes with aid, but that this should be demonstrated on a case-by-case basis.

The first view holds that more money is needed from the developed countries to assist underdeveloped or developing countries, and that one of the main reasons there are still so many poor in the world is that there is so little development aid. Kristof cites Jeffrey Sachs and his book "The End of Poverty" as one of the main proponents of this view.

The second view holds that over the years of foreign aid, there has been no correlation between aid given and development. In fact, its proponents say that aid systematically fails, undermines self-reliance and entrepreneurship, and can even harm people. The proponents of this view include William Easterly, in his book "The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good," and Dambisa Moyo, in her book "Dead Aid: Why Aid is Not Working & How There is a Better Way for Africa."

The third view acknowledges that aid has had and continues to have shortcomings, but that some aid programs and projects do make a difference in the lives of people. Proponents in this camp rely on quality evaluations to empirically, not theoretically, show if a project is successful and the interventions make a difference. These proponents are against evaluations showing that every project is successful, where project failures are buried, and every intervention is "above average," and as Kristof says, "these evaluations are often done by the organizations themselves."

Recently, I had a director of a project tell me, "my project is too unique to be evaluated!" It is this view that quickly provides fuel to the critics of aid.

I think all too often those who work in foreign aid take it for granted that whatever they do is good and appreciated, by both the world community and beneficiaries, but this is simply not the case. There are many who believe that those funds should be used in other ways to improve the lives of people.

Aid projects and interventions can be good or bad. However, to determine this, systematic methods of evaluating the merits of a project, as objectively as possible, are needed, with the findings (both good and bad) being made public. In other words, the aid community needs to willingly face "what works" but also "what does not work" and not view project evaluations as a way to report "All our projects succeed and all our interventions are above average."

Thursday, November 19, 2009

Project Evaluation: Most Robust Quantitative Evaluation Design #1

I will be starting a series on 7 different types of quantitative project evaluation designs, from the strongest (or most rigorous) to the weakest (or least rigorous), that are based on a quasi-experimental design (i.e., randomization is not used; rather, a matched comparison group is used). These seven quantitative project designs are discussed in more detail in the book RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints, by Michael Bamberger, Jim Rugh, and Linda Mabry (2006).

The most robust or strongest quantitative project design has also the longest name: Comprehensive longitudinal design with pre-, mid-term, post- and ex-post observations on the project and comparison groups. This design is one of the strongest quantitative project evaluation designs but also the most time consuming and expensive.

As shown in the diagram above, there are several characteristics of this design that make it one of the most rigorous, but also expensive and time consuming. First, data is collected at 4 points in time (1-Baseline, 2-Mid-Term, 3-End-line and 4-After Project). In addition, it involves data collection among two groups: the people/households involved in the project, as well as a matched comparison group who are as similar as possible to project participants BUT who are NOT involved in or affected by the project.

The reason for the matched comparison group is to establish what is called the "counterfactual", which attempts to answer the question: "What would have happened to these individuals/households IF the project had not occurred?" Thus, any differences between the project participants and the matched group at the end of the project are estimated to be the impact of the project. The reason the 4th data collection point (After Project Study) is included is to understand the "trajectory" or sustainability of any results or impact(s); that is, do the results tend to increase, level off or decrease over time?
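The "net impact" logic of comparing the change in the two groups can be sketched as a simple difference-in-differences calculation. The figures below are purely hypothetical, just to show the arithmetic:

```python
# Illustrative difference-in-differences calculation for a design with
# pre- and post-measures on a project group and a matched comparison
# group. All numbers are hypothetical.

def net_impact(project_pre, project_post, comparison_pre, comparison_post):
    """Estimate net impact as the project group's change minus the
    comparison group's change (the comparison group approximates the
    counterfactual)."""
    project_change = project_post - project_pre
    comparison_change = comparison_post - comparison_pre
    return project_change - comparison_change

# Example: average household income score (hypothetical figures)
impact = net_impact(project_pre=100, project_post=150,
                    comparison_pre=98, comparison_post=118)
print(impact)  # 30: of the project group's 50-point gain, 20 points
               # would likely have happened anyway (the counterfactual)
```

The point of the subtraction is exactly the counterfactual question above: the comparison group's change stands in for what would have happened without the project.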

There are limitations to being able to use this type of design, which is why it is not used very often. First, as mentioned earlier, it requires more time and cost to collect data at 4 points in time and among two groups. Second, the sample size must be relatively large to account for loss or attrition of group members over this period of time. Third, data management and analysis can be a challenge. Fourth, due to the longer period of time, there is potential for larger macro-level influences, such as policy changes, that can affect results.

Despite these limitations and challenges, this design should be considered for new projects, or projects that want to scale up regionally or nationally, to clearly demonstrate that project interventions produce the expected outcomes and results, as well as how sustainable those results are.

Sunday, November 15, 2009

Using WORDLE to Illustrate Reports

When submitting reports or papers, or even giving a PowerPoint presentation, it is nice to have an illustration of what you're presenting. There is an online tool which allows you to generate a "word cloud" that you can place at the beginning of any report, paper, or presentation. This tool is called Wordle. Wordle generates word clouds from any text that you provide. These word clouds give greater prominence to words that appear more frequently in the source text; that is, words used most often are large and words used less often are smaller. You can tweak your clouds with different fonts, layouts, color schemes and backgrounds. The images you create with Wordle can be copied and pasted into documents or presentations. To illustrate, I have made two word clouds.

This "word cloud" represents text I copied and pasted into Wordle from a semi-annual report submitted by a project for street-children in Georgia.

This "word cloud" represents text from a proposal written for supporting children in the marsh lands of Iraq.

So, this is a great tool to embellish a report or presentation, and it gives the reader a "snapshot" of what will be presented. In both word clouds, the most prominent theme is "children", along with other relevant themes and issues. It is a great way to check whether children are the most common theme in your report, proposal or presentation.
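For readers curious about what a word cloud tool is doing under the hood, here is a minimal sketch of the weighting idea in Python. The layout and styling Wordle provides are far more sophisticated; the function name, size range and sample sentence here are just illustrative:

```python
# A minimal sketch of the idea behind a word cloud: count word
# frequencies in a text and scale each word's display size by how
# often it appears.
from collections import Counter
import re

def word_weights(text, min_size=10, max_size=72):
    # Lowercase and split into word tokens
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    peak = counts.most_common(1)[0][1]
    # Scale font size linearly with frequency: the most frequent
    # word gets max_size, rarer words proportionally less.
    return {w: min_size + (max_size - min_size) * c // peak
            for w, c in counts.items()}

sizes = word_weights("children need schools and children need health care")
print(sizes["children"])  # 72 -- the most frequent word gets the largest size
```

In a real tool you would feed in the full report text and drop common "stop words" (and, the, of) before counting, so that substantive themes like "children" dominate the cloud.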

Friday, November 13, 2009

Measuring Advocacy and Policy Outcomes

One of the more challenging aspects of projects in Save the Children is measuring IR-4 performance (Enhanced Enabling Environment), especially if advocacy and policy outcomes are envisioned. This challenge was addressed by various authors and organizations in The Evaluation Exchange (Vol. XIII, No. 1, Spring 2007), sponsored by the Harvard Family Research Project.

In this issue of The Evaluation Exchange the authors attempted to define advocacy and policy change and how to evaluate such change. Excerpts from one article, which presents an illustrative menu of outcomes and strategies for various types of advocacy and policy objectives, are below.

Objective: Shifts in social norms. Social norms are the knowledge, attitudes, values, and behaviors that comprise the normative structure of culture and society. Advocacy and policy work has increasingly focused on this area because of the importance of aligning advocacy and policy goals with core and enduring social values and behaviors.

Examples of outcomes
+  Changes in awareness
+  Increased agreement about the definition of a problem
+  Changes in beliefs
+  Changes in attitudes
+  Changes in values
+  Changes in the salience of an issue
+  Increased alignment of campaign goal with core societal values
+  Changes in public behavior

Examples of strategies to achieve these outcomes
+  Framing issues
+  Media campaigns
+  Message development (e.g., defining the problem, framing)
+  Development of trusted messengers and champions

Objective: Strengthened organizational capacity. Organizational capacity is another name for the skill set, staffing and leadership, organizational structure and management systems, finances, and strategic planning of nonprofits and formal coalitions that do advocacy and policy work. Development of these core capacities is critical to advocacy and policy change efforts.

Examples of outcomes
+  Improved management of organizational capacity of organizations involved with advocacy and policy work
+  Improved strategic abilities of organizations involved with advocacy and policy work
+  Improved capacity to communicate and promote advocacy messages of organizations involved with advocacy and policy work
+  Improved stability of organizations involved with advocacy and policy work

Examples of strategies to achieve these outcomes
+  Leadership development
+  Organizational capacity building
+  Communication skill building
+  Strategic planning

Objective: Strengthened alliances. Alliances among advocacy partners vary in levels of coordination, collaboration, and mission alignment and can include nontraditional alliances such as bipartisan alliances or relationships between unlikely allies. Alliances bring about structural changes in community and institutional relationships and are essential to presenting common messages, pursuing common goals, enforcing policy changes, and protecting policy “wins.”

Examples of outcomes
+  Increased number of partners supporting an issue
+  Increased level of collaboration (e.g., coordination)
+  Improved alignment of partnership efforts (e.g., shared priorities, shared goals, common accountability system)
+  Strategic alliances with important partners (e.g., stronger or more powerful relationships and alliances)
+  Increased ability of coalitions working toward policy change to identify the policy change process (e.g., venue of policy change, steps of policy change based on strong understanding of the issue and barriers, jurisdiction of policy change)

Examples of strategies to achieve these outcomes
+  Partnership development
+  Coalition development
+  Cross-sector campaigns
+  Joint campaigns
+  Building alliances among unlikely allies

Objective: Strengthened base of support. Nonprofits draw on grassroots, leadership, and institutional support in working for policy changes. The breadth, depth, and influence of support among the general public, interest groups, and opinion leaders for particular issues are a major structural condition for supporting policy changes. This outcome category spans many layers of culture and societal engagement, including increases in civic participation and activism, "allied voices" among informal and formal groups, the coalescence of dissimilar interest groups, actions of opinion leader champions, and positive media attention.

Examples of outcomes
+  Increased public involvement in an issue
+  Increased level of actions taken by champions of an issue
+  Increased voter registration
+  Changes in voting behavior
+  Increased breadth of partners supporting an issue (e.g., number of "unlikely allies" supporting an issue)
+  Increased media coverage (e.g., quantity, prioritization, extent of coverage, variety of media "beats," message echoing)
+  Increased awareness of campaign principles and messages among selected groups (e.g., policymakers, general public, opinion leaders)
+  Increased visibility of the campaign message (e.g., engagement in debate, presence of campaign message in the media)
+  Changes in public will

Examples of strategies to achieve these outcomes
+  Community organizing
+  Media campaigns
+  Outreach
+  Public/grassroots engagement campaigns
+  Voter registration campaigns
+  Coalition development
+  Development of trusted messengers and champions
+  Policy analysis and debate
+  Policy impact statements

Objective: Improved policies. Change in the public policy arena occurs in stages, including policy development, policy proposals, demonstration of support (e.g., co-sponsorship), adoption, funding, and implementation. Advocacy and policy evaluation frequently focuses on this area as a measure of success. While an important focus, improved policies are rarely achieved without changes in the preconditions to policy change identified in the other outcome categories.

Examples of outcomes
+  Policy development
+  Policy adoption (e.g., ordinance, ballot measure, legislation, legally binding agreements)
+  Policy implementation (e.g., equity, adequate funding, other resources for implementing policy)
+  Policy enforcement (e.g., holding the line on bedrock legislation)

Examples of strategies to achieve these outcomes
+  Scientific research
+  Development of "white papers"
+  Development of policy proposals
+  Pilots/demonstration programs
+  Educational briefings of legislators
+  Watchdog function

Objective: Changes in impact. Changes in impact are the ultimate and long-term changes in social and physical lives and conditions (i.e., individuals, populations, and physical environments) that motivate policy change efforts. These changes are important to monitor and evaluate when grantmakers and advocacy organizations are partners in social change. Changes in impact are influenced by policy change but typically involve far more strategies, including direct interventions, community support, and personal and family behaviors.

Examples of outcomes
+  Improved social and physical conditions (e.g., poverty, habitat diversity, health, equality, democracy)

Friday, November 6, 2009

Outcome Mapping

Outcome Mapping: Building Learning and Reflection into Development Programs (2001) is a book by Sarah Earl, Fred Carden and Terry Smutylo, with a foreword by Michael Quinn Patton. Outcome Mapping focuses on intermediate results (outcomes): changes in the behavior, relationships, activities, or actions of people or groups; thus the focus is on people rather than on things such as cleaner water or an improved economy.

Outcome Mapping is most effective when used at the planning stage of a project or program. The parts of the Outcome Mapping exercise can then be adapted into a Results Framework or Logical Framework. And, importantly, successful Outcome Mapping requires commitment to knowing the strategic direction of the project, the type of monitoring and evaluation data needed, reporting, participatory learning, team consensus, and resources.

Outcome Mapping has 3 stages and 12 steps. The outline of the book is:
1. Outcome Mapping: The Theory
2. Outcome Mapping: The Workshop Approach
3. Stage 1: Intentional Design
4. Stage 2: Outcome & Performance Monitoring
5. Stage 3: Evaluation Planning

Appendix A: Sample Intentional Design Framework
Appendix B: Overview of Evaluation Methods
Appendix C: Glossary
Appendix D: Terms in French, English, Spanish

If this book sounds useful, you can download a PDF version of it under the DME Documents section to the right.

Thursday, November 5, 2009

Matching Results Framework and Logical Framework Terminology

Long before the need to monitor and evaluate a project/program comes the fundamental need to design it. The acronym DME, as in DME Advisor, refers to design, monitoring and evaluation. Yes, I like to get involved in the initial design of a project, which is important to how it will be monitored and evaluated.

There are two basic project/program designs being used in SC at this time. There is the Results Framework (RF), recommended by SC and generally used by US and Canadian-based donors. In addition, as more funding comes from non-US sources, there is the Logical Framework Approach (LFA), which is generally used by European donors.

Since SC primarily uses the RF, most project/program directors or managers are familiar with it; however, they are increasingly being asked to use the LFA.

There are basic differences between the two design approaches, the two main ones being format and terminology. The basic format of an LFA is a matrix, whereas the basic format of an RF is a graphic illustration.

The largest challenge though for staff is the terminology differences. Below I have tried to match the RF and LFA terminology as closely as possible. Of course, there are slightly different versions of the RF and LFA, so this table is for the generic versions of both approaches.

Results Framework                     Logical Framework
Goal                                              Long-term Objective/Goal
Strategic Objective (SO)                Purpose/Short-term Objective
Intermediate Results (IRs)              Outputs
Strategies                                       ----
Activities                                        Activities
----                                                Inputs
----                                                Risks/Assumptions
Benchmarks                                   Milestones
Targets                                          Targets

Let me know if there is any terminology I'm missing or you think I've mismatched.

Narrative Methods of Project/Program Evaluation

Have you been involved in a project/program in which a rigorous evaluation was not possible or simply not wanted? For such times there are other methods to conduct evaluations, especially for projects with few pre-determined indicators, difficulties in implementing rigorous studies, or the possibility of many unforeseen or unintended results/outcomes.

One broad approach to evaluating projects/programs is called the "narrative" method. The narrative method is described by Charles McClintock (Dean of the Fielding Graduate Institute's School of Human and Organization Development) in his article "Using narrative methods to link program evaluation and organization development," published in The Evaluation Exchange, Volume 9, Number 4, Winter 2003/2004, by the Harvard Family Research Project.

The narrative method is fundamentally storytelling, and it is related to participatory change processes because it relies on people themselves to make sense of their own experiences as they relate to the project/program. The participants' and beneficiaries' stories can be systematically gathered and their claims verified from independent sources or methods.

The narrative method can be divided into three basic types, depending on the purpose of the evaluation.
  1. Success stories
  2. Positive and negative outcomes
  3. Emerging themes

1. Success Stories: One of the most prominent narrative methods for capturing success stories related to intermediate outcomes and impact is Most Significant Change, or MSC (Davies and Dart 2003; see the MSC document list on the right tab). This method is highly structured and designed to engage stakeholders at all levels. Davies and Dart recommend MSC when a project or program:
  • is complex and produces diverse and emergent outcomes
  • is large with numerous organisational layers
  • is focused on social change
  • emphasizes participation
  • is designed with repeated contact between field staff and participants
  • is struggling with conventional monitoring systems
  • provides highly customised services to a small number of beneficiaries (such as family counselling).
2. Positive and negative outcomes: this narrative method is called the Success Case method (Brinkerhoff, 2003). The Success Case method has two phases: a) a short questionnaire is sent to all project/program participants to identify those for whom the project/program made a difference and those for whom it did not; b) next, a number of extreme cases are selected from the two ends of this success continuum (i.e., did and did not make a difference), and respondents are asked to tell stories about the features of the project/program that were or were not helpful, as well as other factors that facilitated or impeded success. Based on the logic of journalism and legal inquiry, independent evidence is sought during these storytelling interviews to corroborate the success claims.

These stories serve to document outcomes, but also to guide management about needed changes in project/program interventions that will accomplish higher-level outcomes and impacts.

3. Emerging Themes: This narrative method is basically qualitative case studies (Costantino & Greene, 2003). Here, stories are used to understand context, culture, and participants' experiences in relation to program activities and outcomes. As with most case studies, this method can require site visits, review of documents, participant observation, and personal and telephone interviews. Stories can include verbatim transcripts, some of which contain interwoven mini-stories. From a few cases studied in depth, it is possible to develop many more of the "themes" at work in a project/program and in the relationships among participants and staff.

Wednesday, October 28, 2009

Why NGOs Are Hesitant to Share Lessons Learned

Ricardo Wilson-Grau Consulting (17 April 2007, from a CSO survey) asked civil society organizations (CSOs) if they had lessons learned and, if so, how many came not just from "better practices" but, just as importantly, from "bad practices." Surprisingly, virtually none of the CSOs reported sharing "bad practices" as one aspect of lessons learned.
A follow-up question was asked: Why are so few CSOs willing to share their "bad practices" so that others can learn from them? The question had 5 closed-ended responses and 1 open-ended response. The results were:
  1. Organizations are reluctant to think about negative experiences - 36.6%
  2. They are uncomfortable sharing weaknesses with a donor - 64.1%
  3. They are uncomfortable sharing weaknesses with other organizations - 57.3%
  4. Organizations have little information and knowledge available to explain failures - 36.6%
  5. Organizations are interested in what does work and not in spending time on what does not - 35.9% 
  6. Other: 36.6% were by and large nuances of the five multiple choice options above.
In a blog post (The Change Agent), Daniel O'Neil cites the paper "Lessons Not Learned: Why don't NGO workers collaborate more?" by Wade Channell. In this paper, Channell highlights why development workers are lousy at learning from each other. Although we might be friends and socialize together, the world of international development work does not foster learning. He cites four problems of learning:
  • Incentives for Knowing, But Not for Learning
  • High Incentives for Repetition, Low Incentives for Innovation
  • High Incentives for Guarding Information
  • Disconnection between Performance and Awards
All too often, staff are hired because of their proficiency in an area, and due to the pace of project implementation, there is little time or incentive to read, reflect and interact with others in a learning process. Access to conferences, workshops, and journals for workers in the field is both a physical and a financial challenge. Thus, bad practices can "creep" into projects because staff are not able to keep up to date.

On repetition, Daniel states, "Donors only want to fund proven successes and NGOs write their proposals to satisfy what the donor wants to hear. This is especially critical when entering a competitive bid. The NGOs seek to divine what the donor wants to hear, rather than to come up with the best approach. The Gates Foundation has made significant waves because they are willing to fund projects that take risky approaches."

When it comes to guarding information, project staff and NGOs are occasionally unwilling to share project evaluations due to potentially unfavorable findings. Also, there are not many forums for sharing project evaluations, even within the same organization, so that "bad practices" can be avoided.

Finally, the fear is that exposing "bad practices" and the lessons learned from them will not be rewarded. Certainly, good planning should reduce bad practices, but no project can be completely flawless.

Thursday, October 22, 2009

How Many In-depth Interviews Are Enough?

Have you ever had a limited amount of time and budget and wondered what is the fewest number of in-depth interviews you could get by with and still get an adequate amount of data and information on a specific topic?

In the article, "How Many Interviews Are Enough? An Experiment with Data Saturation," (Field Methods, Vol. 18, No. 1, February 2006) the authors Greg Guest, Arwen Bunce and Laura Johnson investigate this question.

Specifically, these authors were interested in the minimum number of in-depth interviews it takes to get a reliable sense of themes, issues and variability. That is, does it take 6 interviews, 18 interviews, or 100 interviews to render a useful understanding of most of the issues? Put another way, when does adding more interviews stop rendering substantially more information? When is enough enough?

To answer these questions, they conducted a study among a group of women (sex workers) in two African countries. The in-depth interview guide consisted of six structured demographically oriented questions, sixteen open-ended main questions, and fourteen open-ended sub-questions. To determine the degree of data saturation (useful understanding of most themes/issues), the authors used the point in data collection and analysis when new information produced little or no change to the codebook.

After collecting and analyzing their data, the authors found that data saturation occurred once they had analyzed 12 interviews. That is, 92% of the total number of codes they developed for the entire study had been developed by the 12th interview.

Advantages of this approach:
1. Maximum data saturation obtained with a minimum number of interviews.
2. Time- and cost-effective.

Conditions for this approach:
1. Interviews must be done with a purposive sample: people specifically interviewed because of their knowledge or experience related to the specific topic.
2. The individuals should be relatively similar (homogeneous), for example female sex workers, street children, etc.
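The saturation logic the authors describe can be sketched in code: track how many new codes each additional interview contributes, and find the point where most of the codebook has already appeared. This is only an illustration; the per-interview code sets below are invented stand-ins, not the study's data.

```python
# Invented per-interview code sets (real studies would draw these from a codebook)
interview_codes = [
    {"stigma", "income", "clients"},        # interview 1
    {"stigma", "health", "family"},         # interview 2
    {"income", "police", "health"},         # interview 3
    {"family", "clients", "migration"},     # interview 4
    {"stigma", "condoms"},                  # interview 5
    {"police", "violence"},                 # interview 6
    {"income", "violence"},                 # interview 7
    {"stigma", "health"},                   # interview 8 (adds no new codes)
]

all_codes = set().union(*interview_codes)
seen, cumulative = set(), []
for codes in interview_codes:
    seen |= codes                            # codes discovered so far
    cumulative.append(len(seen) / len(all_codes))

# First interview by which at least 90% of all codes have appeared
saturation_point = next(i + 1 for i, pct in enumerate(cumulative) if pct >= 0.9)
print(f"Saturation (90% of codes) reached by interview {saturation_point}")
```

With these made-up data, the curve flattens quickly, mirroring the article's finding that most codes emerge early.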

Tuesday, October 20, 2009

Participatory Video: A Qualitative Method of Monitoring & Evaluation

The following have been taken from Insight Into Participatory Video: A handbook for the field, written by Nick and Chris Lunch, 2006. Published by Insight. (Find this publication under DME Documents.)
Participatory Video (PV) is a set of techniques to involve a group or community in shaping and creating their own film. The idea behind this is that making a video is easy and accessible, and is a great way of bringing people together to explore issues, voice concerns or simply to be creative and tell stories. This process can be very empowering, enabling a group or community to take action to solve their own problems and also to communicate their needs and ideas to decision-makers and/or other groups and communities. As such, PV can be a highly effective tool to engage and mobilize marginalized people and to help them implement their own forms of sustainable development based on local needs.
Insight has its own YouTube channel, where you can view PV from around the world.

Nick and Chris Lunch were recently interviewed as part of OneWorldTV's series focusing on pioneering individuals and organizations using video as a tool for social change.

How Does It Work
  • Participants (men, women and youth) rapidly learn how to use video equipment through games and exercises.
  • Facilitators help groups to identify and analyze important issues in their community by adapting a range of Participatory Rural Appraisal (PRA)- type tools with PV techniques (for example, social mapping, action search, prioritizing, etc. See ‘Chambers’ in Appendix 7, References).
  • Short videos and messages are directed and filmed by the participants.
  • Footage is shown to the wider community at daily screenings.
  • A dynamic process of community-led learning, sharing and exchange is set in motion.
  • Completed films can be used to promote awareness and exchange between different target groups. Insight has worked with pastoralists, farmers, marginalized communities and youth in rural and urban settings, street children, refugees and asylum seekers, people with mental health problems, learning difficulties and physical disabilities (see Part Five, Case Studies). PV films or video messages can be used to strengthen both horizontal communication (e.g. communicating with other communities) and vertical communication (e.g. communicating with decision-makers).
Tichezerane AIDS Support Group Participatory Video

What Does PV Offer
PV engages: Video is an attractive technological tool, which gives immediate results.
PV empowers: A rigorous but fun participatory process gives participants control over a project.
PV clarifies: Participants find their voices and focus on local issues of concern.
PV amplifies: Participants share their voices with other communities, including decision-makers.
PV catalyzes: Participants become a community, which takes further action.
PV is inclusive and flexible: Insight has worked with a wide range of groups in the UK and internationally.
PV is accessible: Findings, concerns and living stories are captured by communities themselves on video; projects can be documented and evaluated; policy information and decisions can also be transferred back to the community level through PV.
PV equips people with skills and positive attitudes: Skills developed include good group-working skills, listening skills, self-esteem building and motivation techniques; PV projects encourage better awareness of community, identity and place; PV develops an active role for participants in improving their quality of life.
PV disseminates good practice: A range of impressive initiatives and suggestions can be documented by those directly involved, cheaply and effectively, and shared across the country and even further abroad; policymakers can be deeply affected by powerful stories and images captured in this way at, and by, the grassroots.

Waiting for Water

Applications of PV
  • Marginalised social group to wider community: showing a PV film made by one group and using it as a tool to stimulate discussion and participation among other groups in society. Participants may want to conduct filmed interviews to gauge reactions among the audience and record feedback. Facilitators can use such screenings to identify and congregate new groups to work with using the same PV methods.
  • Community to community: produced films shown to other communities and used as a tool to inspire and initiate the same process of analysis and local action in the second community. This spreads the impacts of the work and raises awareness, but is also a chance to bring in new groups and to highlight differences as well as similarities. "PV strikes me as especially well suited to enabling rural people, after only a little training and at moderate cost, to create vivid accounts of their own experience. Very suitable for sharing with their counterparts elsewhere in the country or even abroad." (Claire Milne, ICT Telecoms Consultant)
  • Community to community PV exchange visits: introducing PV into this process as a tool for wider sharing, equitable exchange and team building (i.e. focusing on a shared task and having fun together!). Exchange visits can be costly and usually only benefit a handful of community members; with PV, the learning and exchange can be documented, enabling the wider community and other communities to benefit from the exchange.
  • Policy to community PV visits: as with the community to community PV exchange visits above, but getting policymakers to the field. This can be difficult to arrange, and maybe only one or two individuals can be prised out of their offices! A policymaker sharing a PV documentation task with the community members can be a good way to equalize relationships. They will have fun together and create something which the policymaker can show to his/her network of colleagues and superiors.
  • Facilitating multi-stakeholder workshops using PV: A means of getting different groups together on a more equal footing, empowering populations who feel uncomfortable in a workshop setting, or are illiterate. Community members present their films and these become the starting point for discussion and group work which is all documented using PV tools rather than written notes. This also allows the workshop outcomes to be shared widely among communities, personal and professional networks of the workshop participants and the general public (if relevant).
  • Campaigns: PV has tremendous potential to bring out personal stories to support campaigns and build understanding and consensus in potentially fraught situations. Decision-makers may respond better to the voices of people on the ground than to organizations, academics or activists campaigning on their behalf. Participatory videos are raw, direct and show a fuller picture of what is at stake.
  • Participatory Research: Generate knowledge, initiate local action, raise awareness, monitor and spread widely.
  • Community-led Research: Assist groups in the target communities to carry out their own research using the video as a tool for them to document local knowledge and ideas, as well as generate new knowledge and fresh solutions. Local people’s findings can be included in multimedia reports and publications, bringing their authorship into the process and developing a synthesis of local and scientific knowledge.
  • Participatory Monitoring & Evaluation: Using video rather than an attitudes survey to look at progress during the research can put the community in control. It is visual and accessible to all. It allows the community to highlight issues and areas of interest that we could not necessarily conceive of as outsiders. Things emerge from the films they produce that open up new lines of enquiry and can also help shape the kinds of quantifiable questions partners focus on.
  • Sharing Best Practices: The groups involved can document and communicate their achievements in their own words, using PV to collect and share best practices and lessons learned. Often, while collecting lessons learned, staff and experts obtain information from the implementing parties, analyze it, and then prepare manuals, adjusting the vision expressed by the local communities as they view the data from their own professional perspective. When receiving the project outcomes and developments, NGOs and local communities may have difficulty fully understanding their essence. PV can enable people to have a virtual interaction with their colleagues from other villages: while watching video material they obtain the information directly, without the "university" filter of the professionals.
Participatory Video made by semi-nomadic shepherds of Kazakhstan.

Psycho-Social Programming for Children in Crisis

In 2004, Save the Children produced a handbook on psycho-social programming for children (Children in Crisis: Good Practices in Evaluating Psycho-social Programming, by Joan Duncan and Laura Arntson). You can download this handbook under the DME Documents section to the right. Country offices doing, or considering, psycho-social programming should read this handbook if they have not already. It provides a good overview of psycho-social programming, covering theory, definitions, issues, types of interventions, and the measurement of program/project outputs and outcomes. There is also a very helpful chapter that discusses the differences between outcome and impact measurement.

Some short summaries from the handbook are:
What does psycho-social refer to? “The term “psychosocial” implies a very close relationship between psychological and social factors. When applied to child development, the term underlines the close, ongoing connections between a child’s feelings, thoughts, perceptions and understanding, and the development of that child as a social being in interaction with his or her social environment.”

What are the levels of severity children face in crisis?
1) Severely Affected Group: children whose psychological and social functioning abilities may be severely compromised. While generally a small percentage of the overall population, this group requires intensive psychological attention because they are unable to manage on their own. Children forced to view and/or commit violent acts, such as child soldiers, are likely to fall into this group. More time-intensive, individualized approaches are likely to be the most appropriate responses, where social and cultural resources permit. This group needs one-on-one attention in order to address the more severe traumatic and/or depressive disorders, for example.
2) At-Risk Group: a second segment of the community consists of those who have experienced severe losses and disruption, are significantly distressed, and may be experiencing despair and hopelessness, but whose social and psychological capacity to function has not yet been overwhelmed. Children in this category may be suffering from acute stress disorder (the most extreme, or exaggerated, normal reaction to violence and trauma). They may have lost family members in the violence, they may have witnessed deaths, or they may be victims of violence. This group is at particular risk for psychological and social deterioration if their psychological, social, cognitive, and developmental needs are not addressed through timely community and social support mechanisms.
3) Generally Affected Group: the third and broadest segment of the population consists of individuals who may not have been directly affected by crisis events and whose families may be largely intact. Children in this group may be suffering from physical and mental exhaustion, for example, but are not experiencing the level of distress felt by those in the severely affected or at-risk groups.
Community-based interventions that include not only normalization activities but also theme- and body-based activities can preserve and augment positive coping strategies among this population in a shorter time-frame and contribute effectively and more immediately to children’s and youths’ social, cognitive, and emotional development.

What are psycho-social programs/projects? Child-focused psychosocial projects are those that promote the psychological and social well-being and development of children. The orientation here is that child development is promoted most effectively in the context of the family, community, and culture. At its most fundamental level, psychosocial programming consists of activities designed to advance children's psychological and social development, to strengthen protective and preventive factors that can limit the negative consequences of complex emergencies, and to promote peace-building processes and reduce tensions between groups.

What are the primary issues psycho-social program attempt to address?
+ Secure attachments with caregivers - Child feels safe and cared for by supportive adult caregivers.
+ Meaningful peer relations or social competence - Child has the capacity to create and maintain relationships with peers and adults. Feels he/she is able to effectively navigate his or her social world.
+ Sense of Belonging - Child is socially connected to a community and feels he/she is part of a larger social whole. Child adopts the values, norms and traditions of his/her community.
+ Sense of self-worth and value, self-esteem, well-being - Child thinks of him/herself as worthy and capable of achieving desired goals. Child has a sense of empowerment and a sense of being valued. Child participates in larger community and feels in harmony with norms of his/her society. Child has the capacity and/or possibility to participate in decisions affecting his/her own life and to form independent opinions.
+ Trust in others – Child has a belief that he/she can rely on others for nurturance, help, and advice. Child feels that he/she will not be hurt by others.
+ Access to opportunities – Child has a sense of being in a supportive environment. Child has access to opportunities for cognitive, emotional, and spiritual development and economic security.
+ Physical and economic security – Child’s physical health, livelihood/economic security and environment are supportive and do not pose threats to the child’s emotional or physical wellbeing.
+ Hopefulness or optimism about the future – Child feels confident that the world offers positive outcomes and a hopeful future.

Should psycho-social programs/projects be similar to each other? Some elements of psychosocial development are specific to a particular culture, meaning that there is not a “one size fits all” approach to psychosocial programming. A key challenge facing project designers is understanding how cultural factors minimize or increase risk, and promote or impede resiliency. However, child development theory and research does point to a set of concepts that are useful building blocks for psychosocial projects regardless of where they are established. These include understanding what makes children resilient and the role that protective factors play throughout development. Identifying the ways these concepts are expressed within a particular culture should guide psychosocial project development and implementation. Through the study of children who have grown up under difficult circumstances, we have learned that some have certain characteristics and social supports that have enabled them to overcome adversity. Similarly, features of the social world have been identified that buffer the consequences of negative experiences on children. These features are often referred to as protective factors.

What are the content areas for interventions? Since children and adults experience and react to complex emergencies in unique ways, the types of projects designed to address their needs will also differ. Projects range and include those that are curative, preventive, and those that promote psychosocial well-being. Curative projects address the diagnosed psychological effects of complex emergencies on children and families, such as treatment of trauma. Preventive projects seek to prevent further psychosocial deterioration and may focus on a particular group or social environment. Lastly, projects may seek to promote healthy psychosocial development through, for example, opportunities to engage in educational, social, and spiritual activities that support the development of children.

What are the basic intervention approaches? There are different approaches to psychosocial programming, depending on the population being targeted and the project to be implemented. It is possible to identify three major groupings:
1. Psychological: Some projects focus more on psychological factors than on social factors. For example, some projects may provide individual counseling to children who have had traumatic experiences or provide training to key community members to identify, refer, or counsel children. These projects will most likely target children and caregivers who have been most severely impacted by crisis events and require a higher level of individualized attention than community-based interventions can provide.
2. Predominately Psychosocial: Some psychosocial projects are predominately or exclusively psychosocial in focus. The project is self-contained and not integrated into other projects with different foci (health, food security, shelter) that may co-exist and are co-located. Examples include stand-alone recreation projects, art therapy, or various community-based interventions that promote positive cognitive, emotional, and educational development and functioning. Staff working in these psychosocial projects may have only minimal contact with staff working on other projects. Predominately psychosocial projects are likely to target their activities toward generally affected and at-risk populations, and provide screening and referral (to individualized mental health services or counseling programs) for those more severely affected by conflict or violence.
3. Integrated/Holistic: In some cases psychosocial interventions are integrated into a holistic and total response to the needs of a community. In this case, the “psychosocial” elements may not be as visible. For example, income generation or vocational training projects are not typically thought to be psychosocial. Yet, addressing the economic livelihood of families is fundamental to psychosocial health both in terms of reducing the daily stress of how a family will feed itself, and in terms of providing a pathway to stability and hope for the future. Similarly, such an intervention may have an educational component that supports cognitive development and at the same time fosters good peer relationships and social skills. An income generation project or vocational training project may be a conduit for improved self-esteem and self-worth and the establishment of peer friendships. The position here is that projects based on such a holistic approach are to be preferred, since they maximize a mutually reinforcing effect when responding to different aspects of child development simultaneously. These projects are most likely to focus on those in the at-risk or generally affected group.
It is useful to organize projects into six broad areas that encompass the diverse social and psychological needs of children during and after a crisis: The Primacy of Family, Education, Engaging Activities, Economic Security, Community Connections, and Reconciliation and Restoration of Justice.

20 Essential Program/Project Evaluation Books

If you have an interest in having several books on program/project evaluation as a reference, you may want to include those books that are most often purchased together.
Using an online book store, I began my search to determine which books might be best to have in an evaluation library by starting with the best-known evaluation book, Evaluation: A Systematic Approach, by Peter Rossi. The store also provides a list of other books purchased by people who purchased a given book. Starting with Rossi's book, which is generally recognized as an evaluation primer, I made a list of all the evaluation books people purchased together with it. Then I looked up each one of those books (as of October 2009) and made the same list of books purchased with each of them, until I had reached almost 550 citations for 110 books on evaluation. (Of course, there is a degree of selection bias in this method, since the list reflects primarily American readership and only those who purchase books online.)
The map below shows the purchasing patterns for the 110 books on evaluation purchased in the US. The cluster in the middle of the map is those evaluation books most often purchased together. Taking the books that received the most citations of being purchased together, I created the top 10 list and the next 10 list, which together are the 20 most essential books on program/project evaluation.
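The counting method described above can be sketched roughly in code: each book's "purchased together" list contributes one citation to every book it names, and books are then ranked by citation count. The book names and lists below are placeholders, not actual store data.

```python
from collections import Counter

# Placeholder "also purchased" lists keyed by a short author label
also_bought = {
    "Rossi":    ["Patton", "Creswell", "Wholey"],
    "Patton":   ["Rossi", "Creswell", "Weiss"],
    "Creswell": ["Rossi", "Patton", "Yin"],
    "Weiss":    ["Patton", "Rossi"],
}

citations = Counter()
for book, companions in also_bought.items():
    for other in companions:
        citations[other] += 1   # one "purchased together" citation

# Books most often purchased together, most-cited first
print(citations.most_common())
```

Applied to the real lists collected in October 2009, this kind of tally is what produces the top 10 and next 10 rankings below.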

Top 10 Program/Project Evaluation Books Purchased Together
  1. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, by John W. Creswell.
  2. Qualitative Research & Evaluation Methods, by Michael Quinn Patton (Editor).
  3. Qualitative Inquiry and Research Design: Choosing among Five Approaches, by John W. Creswell.
  4. Evaluation: A Systematic Approach, Peter H. Rossi, Mark W. Lipsey, and Howard E. Freeman.
  5. Handbook of Practical Program Evaluation, Joseph S. Wholey, Harry P. Hatry, Kathryn E. Newcomer.
  6. Program Evaluation: Alternative Approaches and Practical Guidelines, Jody L Fitzpatrick, James R Sanders, Blaine R Worthen.
  7. Utilization-Focused Evaluation, Michael Quinn Patton.
  8. Logic Modeling Methods in Program Evaluation, Joy A. Frechtling.
  9. Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, E. Jane Davidson.
  10. Evaluation Theory, Models, and Applications, Daniel L. Stufflebeam, Anthony J. Shinkfield.
Next 10
11. Program Evaluation and Performance Measurement: An Introduction to Practice, James C. McDavid and Laura R. L. Hawthorn.
12. RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints, Michael J. Bamberger, Jim Rugh, and Linda Mabry.
13. The Program Evaluation Standards: How to Assess Evaluations of Educational Programs, James R. Sanders.
14. Evaluation, Carol H. Weiss.
15. Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness, Huey Tsyh Chen
16. Case Study Research: Design and Methods, Robert K. Yin
17. Designing and Conducting Mixed Methods Research, John W. Creswell and Dr. Vicki L. Plano Clark.
18. Experimental and Quasi-Experimental Designs for Generalized Causal Inference, William R. Shadish, Thomas D. Cook, and Donald T. Campbell.
19. The Research Methods Knowledge Base, William Trochim and James P Donnelly.
20. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, Juliet Corbin and Anselm C. Strauss.

Other books I found of interest:
  • What Counts as Credible Evidence in Applied Research and Evaluation Practice?, Stewart I. Donaldson, Christina A. Christie, and Dr. Melvin (Mel) M. Mark.
  • Program Theory-Driven Evaluation Science: Strategies and Applications, Stewart I. Donaldson
  • Counterfactuals and Causal Inference: Methods and Principles for Social Research, Stephen L. Morgan and Christopher Winship.
  • How to Measure Anything: Finding the Value of "Intangibles" in Business, Douglas W. Hubbard
  • Quasi-Experimentation: Design and Analysis Issues for Field Settings, Thomas D. Cook and Donald T. Campbell.
  • Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists, Joel Best.

Saturday, October 17, 2009

Use of Mobile Phone SMS in Program Delivery and Monitoring

Increasingly, mobile phones are becoming cheaper and thus more widespread around the world, including among the poor. SMS is likely the most widely used electronic means of communication in the world, with an estimated 2.5 billion active users.

Percentage of people covered by mobile services, by country in the MEE Region:

Country              % population covered    Mobile phone service
                     by mobile service       per 100 people
Armenia                      88%                    63
Azerbaijan                   94%                    53
Egypt                        98%                    40
Georgia                      96%                    59
Iraq                         72%                    n/a
Jordan                       99%                    23
Kazakhstan                   94%                    80
Kyrgyzstan                   90%                    41
Tajikistan                    4%                     1
West Bank/Gaza               95%                    n/a
Yemen                        68%                     3

Why use mobile phone SMS in development?
1. Very cost effective for communication (data and information).
2. Fast because SMS is sent in real time and immediately.
3. High exposure in that, generally, 90% of all SMS messages are opened.
4. Potential for large reach depending on how many use mobile phones.
5. Personal, in that the SMS can be personalized.
6. Interactive, in that receivers can respond.
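To make the monitoring use concrete, here is a small sketch that parses structured SMS reports from field monitors into records. The message format and field names are invented for illustration; tools such as FrontlineSMS would handle the actual message transport.

```python
import re

# Hypothetical report format: "REPORT <site> <children_reached> <sessions_held>"
PATTERN = re.compile(r"^REPORT\s+(\w+)\s+(\d+)\s+(\d+)$")

def parse_report(sms_text):
    """Return a dict for a well-formed report, or None if the SMS is malformed."""
    match = PATTERN.match(sms_text.strip())
    if not match:
        return None
    site, children, sessions = match.groups()
    return {"site": site,
            "children_reached": int(children),
            "sessions_held": int(sessions)}

inbox = ["REPORT Dushanbe 120 4", "hello?", "REPORT Khujand 85 3"]
reports = [r for r in (parse_report(sms) for sms in inbox) if r]
print(f"{len(reports)} valid reports; "
      f"total children reached: {sum(r['children_reached'] for r in reports)}")
```

Because reports arrive in real time, a pipeline like this lets monitoring data be aggregated daily rather than waiting for paper forms.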

If you would like to read more, below are links to sites that provide more information about mobile phones in development work.

Community mobilization (FrontlineSMS)
Cash transfer (

Friday, October 16, 2009

Why does SC use a Results Framework?

Often, country offices ask me why Save the Children recommends a Results Framework rather than some other type of program/project design tool, such as the Logical Framework. Several employees of Save the Children have written on this question (see the attached article on the right titled, “A Results Framework Services Both Program Design & Delivery Science,” under the Documents section).
Some of the reasons these authors cite include:
1. The entire program/project logic and “theory of change” can be visually grasped without extensive reading.
2. Different disciplines or technical specialists (health, food security, livelihoods, education) can use the same basic model.
3. The ability to clarify assumptions as well as state hypotheses.
4. Facilitates the design of programs and projects.
5. Helps in designing evaluations.
6. Informs action research.

The Results Framework has the following components:
Goal- a) States the long-term end status that is to be achieved, b) Usually expensive to measure since it requires large population-based surveys.
Strategic Objective (SO) – a) Is the most ambitious result that programs can reasonably effect and for which implementing agencies are willing to be held accountable.
Intermediate Results (IRs) – These are essential steps toward achieving the SO. Save the Children recommends the use of the following 4 IRs, since SC’s programming is based on behavior change:
   IR-1: Availability & Access (as service must be available as well as spatially and economically accessible)
   IR-2: Quality (services meet technical as well as client perceived standards)
   IR-3: Demand (knowledge, skills, attitudes, or beliefs that hinder or promote service usage)
   IR-4: Enabling Environment (facilitates both the supply and demand side of services)
IR Strategies – specific steps to achieve the Intermediate Results
IR Activities – specific program/project activities related to each IR strategy.
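To make the hierarchy concrete, the components above can be represented as nested data, for example when generating diagrams or monitoring plans from a framework. This is only a sketch: the labels paraphrase the components, and the strategy and activity entries are hypothetical.

```python
# Nested representation of a Results Framework: Goal -> SO -> IRs -> strategies -> activities
results_framework = {
    "goal": "Long-term end status to be achieved",
    "strategic_objective": "Most ambitious result the program can reasonably effect",
    "intermediate_results": {
        "IR-1 Availability & Access": {
            "strategies": [
                {"strategy": "Example strategy (hypothetical)",
                 "activities": ["Example activity (hypothetical)"]},
            ],
        },
        "IR-2 Quality": {"strategies": []},
        "IR-3 Demand": {"strategies": []},
        "IR-4 Enabling Environment": {"strategies": []},
    },
}

def count_activities(framework):
    """Walk the hierarchy from the IRs down to activities and count them."""
    return sum(len(s["activities"])
               for ir in framework["intermediate_results"].values()
               for s in ir["strategies"])

print(count_activities(results_framework))
```

Keeping the framework in a structure like this makes it straightforward to check, for instance, that every IR has at least one strategy before implementation planning begins.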

Results Framework (Health Example)

What are some of the limitations of the Results Framework?
1. IR2, Quality, has many dimensions, such as technical (i.e., meeting national or international standards) and perceived (client’s perception of quality services). However, this model combines both types into one box even though these are separate dimensions.
2. The Enabling Environment (IR4) is often highly related to achieving IR3, a change in demand via more informed clientele.
3. The Results Framework, unlike the Logical Framework, omits external environmental factors (apart from those in IR4) that can ease or constrain achieving the results and that are beyond the reach of programmers.
4. Finally, the framework lacks the operational details (such as those found in Logical Frameworks) that managers and some donors need; however, standard detailed implementation and monitoring plans that are based on the framework provide these.

Overall, the Results Framework is a simple way to illustrate the relationships from higher-level Goals all the way down to activities, for small as well as large, complex programs/projects, regardless of sector. But, as the authors conclude, that simplicity has its disadvantages.