Remarks
U.S. Department of State Third Annual Conference on Program Evaluation - Democracy and Governance Track
Washington, DC
June 8, 2010


MR. KULCHINSKY: Good afternoon, everybody, and welcome back. My name is Yarro Kulchinsky (phonetic) and I am with the Office of Strategic and Performance Planning. I will be your content manager this afternoon for the next two sessions. I welcome you to the first session after lunch, entitled Improving Democracy and Governance Programs Through Better Evaluation: The Impact Evaluation of USAID’s Initiative in Central America.

We have with us three presenters. First will be Mark Billera, who is from the strategic planning and research division in the Office of Democracy and Governance at the U.S. Agency for International Development, who holds an undergraduate degree from Duke and is currently working on his Ph.D. at UCLA. Second, we have Eric Kite, who is USAID’s democracy team leader for Latin America and the Caribbean. He started as a PMF, Presidential Management Fellow, in 1998, serving first as the anti-corruption advisor and then as the LAC strategies coordinator in the democracy and governance office. In 2006 he led USAID’s democracy office in Afghanistan. He holds degrees in political science and German from the University of Missouri, was a Fulbrighter at Bonn University, and holds a Master of Arts with an emphasis in democratic transition from Georgetown. And lastly, we have Dr. Abby Cordova, who is a post-doctoral fellow with the Latin American Public Opinion Project and the political science department at Vanderbilt University. She received her Ph.D. in political science from Vanderbilt University, and Dr. Cordova is currently directing Vanderbilt’s impact evaluation study of USAID’s Merida Initiative community-based crime prevention program in Central America.

Welcome.

[View slide presentation]

MR. BILLERA: Let’s put that appropriate to my stature. Good afternoon. First I want to thank the organizers here at the State Department. I think that they have done a tremendous job, especially with me. I required a lot of patience, as I wasn’t always on time or concise, but they were always nice, very patient, and they have done a great job of putting this together.

So as Yarro mentioned, I work in the strategic planning and research division in the office of democracy and governance at USAID, and I am here to talk about USAID’s evaluation initiative in democracy and governance, just briefly, to put in context what I think is the really interesting stuff that Eric and Abby are going to talk about.

So democracy and governance, as a field of development, is relatively young. It really didn’t exist as a defined field of development until about 20 years ago. There are a lot of implications for that. One of them is that we’re way behind the other sectors of development in terms of figuring out what works and what doesn’t work.

That hasn’t stopped us from trying, though. We have had 10 to 15 years of really knocking our heads against the wall trying to figure out what works and what doesn’t work in democracy promotion. We have figured out a lot of ways that don’t work to answer that question, but we’re making progress now on being able to do a better job. The highlight of our long and circuitous route to figuring out how to answer these questions: about seven years ago, we went to the Social Science Research Council and said, “We’ve been trying to answer these questions, and we’re not doing a very good job. Can you help us?” And they said, “Sure.” And what they told us was one thing: don’t try to answer all your questions with one method. So the key thing that came out of the SSRC recommendations was to use mixed methods -- you will learn different things with different approaches, so put them together; don’t try to put all your eggs in one basket. And that was great, and we followed that and did a few things.

But a few years later we had learned some general things, but we were still struggling with how to answer questions in a particular country in a particular time. Did these dollars make any difference? And so we went to the National Academy of Sciences and we asked them for just a little bit of advice and they gave us a book.

(Laughter.)

MR. BILLERA: And it’s down there in my bag, and if I open my bag -- it’s got Velcro -- it makes a lot of noise. So they gave us a book, and there were four key recommendations. First was, do a pilot program of impact evaluations. And when I say impact evaluations, I know that this is sort of a contested term, and I am probably using it in one of the contested ways, but I do mean designs where you can set up the counterfactual and compare, often and where appropriate using experimental and quasi-experimental designs. That was the first thing: do a pilot program of impact evaluations on democracy and governance programs.

These methods have been used a lot in other sectors, not so much in democracy and governance, and so in that way this is kind of new ground for people in our field.

Okay. Second was, they recognized that USAID’s capacity for institutional learning had been depleted over recent years, and so we have to rebuild that. In order to do all of this, you’ve got to be able to measure what you’re working on, and it became very clear that indicators for democracy and governance are spotty -- some of them are good, some of them aren’t so good -- both in terms of quality and coverage. So we have to work on indicators not at the Freedom House level, not democracy in general, but pieces of democracy.

And then finally, agreeing with the SSRC on mixed methods, they gave us recommendations on how to do case studies and how to put those together.

So what have we done with these recommendations? We have created what we call the EDGE Initiative. I advocated for Edgy --

(Laughter.)

MR. BILLERA: -- but we ended up with EDGE, which stands for -- I’ve got to look -- Evaluating DG Effectiveness. To do this within the Office of Democracy and Governance, we’ve got about 10 people who are working on this, but to be honest, eight or nine of us have other jobs too. Usually at AID when you say something is cross-sectoral, that’s good, but here it means that we’re sort of borrowing people’s time, and that’s a challenge. We have a couple of people who are working on this full time.

And our plan, over the next five years, is to do 50 impact evaluations. We have budgeted central funds, which we will then use to plus-up the program budgets of our USAID missions, because we recognize that better evaluation costs more money; since we’re going to ask our missions to do this, we’re going to help them out.

And there are a few other things we’re doing, and I’ll cover them later if it comes up. I just want to point out, though, that what Eric and Abby are going to talk about came out of all of this. They have their own motivations and their own independent thinking, but we helped them do one of the things that they were thinking about doing anyway, because it counts as one of our pilots.

We, the Office of Democracy and Governance, contributed money to this project to plus-up the evaluation, and now Eric can tell you why this is a very interesting project, and Abby can tell you exactly what we’re doing.

MR. KITE: Great. Thank you very much. And I’d also like to thank the organizers of the conference for putting together such a professional event and for keeping us on task in terms of getting you everything you needed. It really is a very well organized effort.

And maybe I’ll continue with what Mark said -- I won’t agree that his part was the least interesting, but if there is a most interesting part, it’s what’s coming after me. So I fall somewhere in the middle here.

I want to talk a little bit about the USAID programs that are under the Central America Regional Security Initiative and our rationale for undertaking evaluation of the program, and then I’m going to turn it over to Dr. Abby Cordova, who is our partner at Vanderbilt University, to talk about the specifics of the research design. And then it will come back to me to talk a little bit about some of the lessons learned and some of the challenges we have faced in undertaking this work.

Go to the next slide.

Just in terms of context, let me say first that what we’re dealing with in Central America, for those of you who aren’t familiar with the region, is truly horrendous in terms of crime rates. UNDP just this year has come out with a study finding that for the first time Central America is the most violent region in the world, more violent than sub-Saharan Africa.

And in terms of crime rates, El Salvador, Guatemala and Honduras are the worst. We’re dealing with homicide rates that are at least 10 times the U.S. average, and in cases like San Pedro Sula or other hot spots, we’re talking about rates that are 20 times the U.S. average. In the best case, which is Costa Rica, you’re dealing with five times the U.S. average. So this is a violent region with a tremendous spike in crime over the last five to ten years.

It also impacts a large number of people. That’s the homicide rate, but in terms of victimization by violent crime, a very high percentage of the population is affected -- up to 35 percent of people in some countries are victimized by crime in some way or another over a 12-month period. So it’s a very broad spectrum of society that we’re talking about, and it has implications for these countries’ budgets in terms of the resources that they devote to security and to justice sector institutions.

Now why do we care about this as a democracy and governance problem? We care about it because we know, through the survey research that we have been doing over a number of years, including with the Americas Barometer at Vanderbilt University, that there are two issues that are most closely correlated with citizen support for democracy in Latin America. Across the entire region -- all the countries that we survey -- there really is not a single exception that I can think of. That is, citizens are less supportive of democracy when they are victimized in some way, either by the state or as victims of crime.

So it is crime and corruption that are the two issues that are most closely correlated with support for democratic institutions. So it really is a democracy issue for USAID, and I think, for the U.S. government.

Go to the next slide.

So the U.S. government’s response to this spike in crime and violence in Central America has been the Central America Regional Security Initiative, otherwise known as CARSI. It has the same origin as the Merida initiative in Mexico, but there are some important distinctions between the work being done in Mexico and the work being done in Central America. And we’ll get into those a little bit.

Actually, we’ll get into it right now. The first point is that of the U.S. government resources devoted to Central America, about $250 million, a substantial portion is focused on crime prevention initiatives. This is in response to congressional interest, but it is one way that the work being undertaken in Central America is distinct from what is being done in Mexico under Merida.

So we have a substantial component that is devoted to community-based crime prevention. This is a relatively new area not only for USAID but for other donors, and it involves a mix of programming, all at a very localized neighborhood level, that includes working with at-risk youth, providing economic opportunities, providing safe spaces, and working with both local government leaders and local community leaders on infrastructure improvements such as better lighting. It’s a full, comprehensive package of educational, social and economic opportunities and programs in a very localized geographic area.

On the next slide is just a quick summary of what our goals are for these initiatives at the community level. I’ll let you read it and then we can quickly go to the next slide. Actually, let’s go to the next slide, and one of the main points I want to make is really the second bullet here.

USAID does a lot of evaluating. We try to learn, in terms of doing program evaluations, what works, what doesn’t work, and why, so we can program better the next time. But in this case, it’s even more important that we do a serious, rigorous evaluation, because the entire field that we’re working in, crime prevention, is a new one. And we know from research done in certain U.S. cities, for instance, or other large metropolitan areas in other, more developed countries, that crime prevention programs are effective over the long term. We don’t really have solid evidence that that is true in a developing country context.

And then we also have the issue that I think is probably most fundamental: when you’re faced with a spike in crime, as a host government policymaker, the most compelling thing to do is almost always going to be a law-enforcement-related initiative -- equipping the police better, training them better, working on justice sector institutions. It always seems like prevention is something you can work on tomorrow.

So our feeling was we really had to collect solid data to demonstrate that this was an effective approach, not only to influence our own programs going forward and to convince Capitol Hill that this was a good investment of resources, but because, with all the resources we have, we can impact 50 communities in Central America.

If you think about how many communities there are in Central America -- off the top of our heads we did a rough calculation once that there are probably 1,000 municipalities in Central America -- what USAID can do is not going to be effective or sustainable on a national level or on a regional level. So we really have to draw lessons from these projects that are then understood widely and used by policymakers in the region to devote greater resources to prevention-related programming.

So that was our impetus for why we felt that doing a really serious evaluation was important. The second reason -- and no offense to any of my State F colleagues here -- is that we were frustrated because the F indicators that we normally measure these programs with really aren’t impact indicators; they are output indicators.

So when we asked ourselves analytically, what does success look like -- if we succeed in these programs, what will we have achieved -- it was very simple. Hopefully we’ll see reductions in crime in these communities and an increased sense of citizen security in these communities. And we really didn’t have indicators that would allow us to measure those outcomes without doing a much deeper evaluation.

So then we turned to the professionals, and I will turn it over to Abby who can say a little bit about the program design.

MS. CORDOVA: Okay. Well, first of all, thank you very much for the invitation and for giving me the opportunity to share our study, which we are very, very excited to carry out. What I’m going to do is briefly explain to you the characteristics of the study.

And I would like to start by saying that the methodology that we are employing is called a multisite cluster randomized experiment. This is a very fancy name for the following: basically, the clusters in our study are at-risk neighborhoods across three Central American countries -- specifically, El Salvador, Guatemala and Panama.

And when I say “at risk,” I mean neighborhoods that are very vulnerable to crime and violence. Those neighborhoods probably haven’t reached the level of a hot spot -- in those places there is not much left to prevent -- but they certainly have some features that make them vulnerable to crime.

And generally speaking, our impact evaluation has two main features, which are actually the very same features of any scientifically rigorous impact evaluation that can be put in place. The first one is the collection of data before, during and after the implementation of the programs.

Right now, we are carrying out the baseline study in El Salvador and are about to finish, so I can say that this is possible and that we are making a lot of progress on this front. And the second feature is that the units of analysis, the at-risk neighborhoods, are randomly assigned to either a treatment or a control group. The treatment group, of course, is the set of neighborhoods that are actually recipients of the programs that USAID is going to implement, and the control group is the set of neighborhoods that are eligible to participate in those programs but do not receive the treatment -- no programs are implemented there.

And the reason why we want to have this control versus treatment comparison and track them over time is that, as we have been discussing this morning, we want to know not only what communities look like after the implementation of USAID programs but also what those very same communities would look like without the implementation.

And just to give you another view of what this field experiment, as we call it, looks like: we have three sites, which correspond to the three countries. Within each country, USAID has chosen some municipalities where they want to implement their local interventions. And within each municipality, Vanderbilt University has randomly picked a set of neighborhoods that are assigned to either a treatment or a control group.

At the very lowest level we have the individuals, who are going to be surveyed and interviewed using mixed methods, as Mark mentioned. The plan right now is to include a total of 100 neighborhoods across these three countries, and using specialized software called Optimal Design, which was developed by researchers at the University of Michigan, we have estimated that 100 neighborhoods across the three countries, with surveys of a sample size of 150 individuals in each community, will be sufficient to assess whether there has been an impact or not.
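
[Editor’s note: to make the nested assignment described above concrete, here is a minimal, hypothetical Python sketch of randomly splitting eligible neighborhoods into treatment and control within each municipality. The country, municipality and neighborhood names are placeholders, and this is not the study team’s actual code; the power calculation mentioned (100 neighborhoods, 150 respondents each) was done separately with the Optimal Design software.]

```python
# Hypothetical sketch: cluster random assignment of neighborhoods
# (the clusters) to treatment or control within each municipality.
import random

# Placeholder eligible neighborhoods, keyed by (country, municipality).
eligible = {
    ("El Salvador", "Municipality A"): ["N1", "N2", "N3", "N4"],
    ("El Salvador", "Municipality B"): ["N5", "N6", "N7", "N8"],
    ("Guatemala", "Municipality C"): ["N9", "N10", "N11", "N12"],
    ("Panama", "Municipality D"): ["N13", "N14", "N15", "N16"],
}

def assign_clusters(eligible, seed=2010):
    """Randomly split each municipality's eligible neighborhoods into
    a treatment half and a control half."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    assignment = {}
    for (country, municipality), neighborhoods in eligible.items():
        shuffled = list(neighborhoods)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for name in shuffled[:half]:
            assignment[name] = "treatment"
        for name in shuffled[half:]:
            assignment[name] = "control"
    return assignment

if __name__ == "__main__":
    for neighborhood, group in assign_clusters(eligible).items():
        print(neighborhood, group)
```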

So as I said, this is a field experiment that relies on mixed methods, and our study basically has two components. One is the quantitative component, which focuses on the collection of data at two different levels. We are going to collect census data in each of those 100 neighborhoods and also do an assessment of what those neighborhoods look like -- we are going to carry out what is called a systematic observation of each neighborhood. That basically means identifying the risk factors associated with crime and violence in each neighborhood, things like lighting and other socioeconomic characteristics that have been linked directly to crime and violence.

And then we are also conducting an individual-level survey, which the Latin American Public Opinion Project has traditionally carried out across countries, but this time we are doing it at the neighborhood level. Just to give you an idea, the questionnaire has more than 100 questions, and we are tapping into different outcome indicators that I will talk about later.

And then the second component of the study is qualitative research, and this component involves two things. One is focus groups and in-depth interviews with local stakeholders, including police officers, school teachers, civil society in general and also the local government. Given the nature of USAID’s crime prevention programs, as Eric was saying, we expect that the direct outcome will be a reduction in crime and in fear of crime. At the same time, because these programs really encourage citizen participation at the local level and the building of safer communities, we also expect that an indirect effect of these programs is going to be the promotion of good governance.

And for that reason, we have developed a series of outcome indicators. Some of them have to do with crime and violence, but we also have a set of indicators that tap into good governance, such as increased social cohesion -- how people are getting along with each other in these communities -- increased participation in civic organizations that do work in crime prevention, perceptions of the police, trust in local government, and support for democracy as well.

And I have to say -- this is just a sample, by the way, of the outcome indicators -- that for putting this together, it’s not that we sat in an office and, you know, made a list. The way we went about identifying the outcome indicators was to go out to the field and visit the communities. We started that process last year, and jointly with some colleagues from USAID, we visited each mission and had thousands of interviews with the stakeholders. And based on that, on what we observed and our experience, we determined what the outcome indicators would look like.

So now I will turn to Eric, who is going to --

MR. KITE: I think we can probably do this one jointly and maybe segue right over into a discussion period, because I think it raises some of the larger questions that folks might have.

First, it’s relatively resource intensive to do this right. AID used to have a guideline -- I don’t know if it’s still a guideline -- but when I started years ago we used to say that, in an ideal world, you should be spending 10 percent of your program budget on monitoring and evaluation. We’re short of that even in this project, but it is much more resource intensive than what we normally do in terms of evaluation.

I should, at this point, also thank those that have contributed to allow us to do this effort. It has included funding from the DCHA DG office at USAID, and it has included really significant funding from the WHA bureau here at State. I’m not sure that we would have been able to do a study quite so robust without that level of support.

Maybe Abby can say something about just how significant the human resource side of this has been. I can say that for us it was unique to have an opportunity to begin a program and, alongside the design of the program itself, be able to design the evaluation of that program. I think that was really critical, but it has been extremely resource intensive across multiple offices, and for USAID it’s a challenge because we are a very decentralized agency.

So really you had three operating units on the ground, three different missions: Guatemala, El Salvador and Honduras -- excuse me -- yes, Honduras.

MS. CORDOVA: Panama.

MR. KITE: Panama was the other. That coup got in the way in Honduras.

To convince a USAID mission that they should turn control over where they implement over to us and Vanderbilt University, in terms of randomizing which communities they work in and which they don’t, is a bit of a sea change. It’s a lot of work, but we have been able to do it, and I think Abby has really been critical to that.

So do you want to say a little bit about sort of the level of human effort involved in this?

MS. CORDOVA: Well, I have to say that this has been a fascinating process. The interaction among different actors has been really, very intense. So this is not straightforward. It takes a lot of effort and a lot of coordination.

What has been really great about the process, at least for me, is how this interaction with the different actors, including local stakeholders, the police and so on, has shaped not only the impact evaluation but also the crime prevention programs that are going to be in place, because it has been, you know, a coming together of different actors that are working on the very same topic, and that has made it possible to have a very robust impact evaluation in terms of the design and the outcome indicators as well.

We drew those from those meetings. So for me, it has been a really neat process. That’s one of the lessons we have learned, and we are looking forward to doing the same in the remaining countries, namely Guatemala and Panama. We are about to start deploying there.

But also, there is another point about the challenges that we have faced on the ground. One is, of course, that you have to do your research right, because this is not only about saying, you know, I picked these communities. There are things that can occur if you make a wrong decision, like picking a hot spot, right? Crime levels might be too high to send people to work there. So how do you go about neighborhood selection?

And for that, one of the first steps is to pick those neighborhoods based on some data that will give you some indication of what those neighborhoods look like. And usually in developing countries there are no data at the national level at that scale -- you know, some towns are 50 households.

So you don’t get that data, and you have to think creatively about how to build your data set and how, from that data set, you are going to randomly draw those neighborhoods. One of the methodologies that we implemented was actually to combine some of the hard data that we found with some of the data that we drew from the interviews with the police and with people who know not only their community but the municipality. So we integrated that data set, and that was one of the first challenges. But I think there are ways to overcome that.
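
[Editor’s note: the sampling-frame step described above can be illustrated with a short, hypothetical Python sketch -- combining administrative (“hard”) data with key-informant risk ratings from interviews, screening out hot spots, and randomly drawing eligible at-risk neighborhoods. The names, fields and thresholds are invented for illustration and are not the study’s actual criteria.]

```python
# Hypothetical sketch: build a sampling frame from two sources, then
# randomly draw at-risk (but not hot-spot) neighborhoods from it.
import random

admin_data = {  # per-neighborhood administrative data (placeholder values)
    "Barrio 1": {"households": 180, "homicides_per_100k": 45},
    "Barrio 2": {"households": 60, "homicides_per_100k": 20},
    "Barrio 3": {"households": 250, "homicides_per_100k": 95},
    "Barrio 4": {"households": 120, "homicides_per_100k": 30},
}

informant_risk = {  # 1-5 risk ratings gathered from police/community interviews
    "Barrio 1": 4, "Barrio 2": 3, "Barrio 3": 5, "Barrio 4": 4,
}

def is_eligible(name):
    """At risk (informant rating of 3 or more) but not a hot spot
    (assumed here to mean a homicide rate under 80 per 100,000)."""
    return informant_risk[name] >= 3 and admin_data[name]["homicides_per_100k"] < 80

def draw_neighborhoods(k, seed=2010):
    frame = [name for name in admin_data if is_eligible(name)]
    return random.Random(seed).sample(frame, k)

print(draw_neighborhoods(k=2))  # e.g. two eligible neighborhoods drawn at random
```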

MR. KITE: Yeah. The final point is that maybe if we had known all the obstacles we would face at the outset, we wouldn’t have done it. But actually it has worked out remarkably well. It has been a huge amount of work, and I think the result is going to be to tell us whether or not these programs work and how effectively, and I think that’s worth every bit of the investment that we have made.

So I think, with that, we are open to any questions that might come from the floor.

MODERATOR: Great. Thank you very much. Sir. If you could, introduce yourself.

QUESTION: Yeah. My name is Richard Blue and I was with USAID and did a lot of work in democracy and still do. In 1967 -- I’ll keep this very quick -- a USAID officer told me a story about a randomized control trial experiment in Ghana on the introduction of health delivery systems. I don’t know whether any of you know this story. Ian Kebemay is the author of it. There were four delivery systems and a control group. And the first step was to randomly select districts in Ghana in which to locate these different delivery systems. The systems ranged from a full physical clinic to barefoot doctors, which were popular at the time.

The first thing that happened is the politicians said, “Where are these located?” and, “That’s too far away -- move them closer to the capital because we want to visit.”

The second thing that happened was that Ghanaians have a tendency to walk long distances. And it turned out that the UCLA medical school or public health school, which was running the experiment, found that all kinds of people were showing up at the physical clinic who were actually supposed to be in the control group or in one of the other treatment groups. And this was before computers, and they were sending boatloads of data home, and finally AID just gave up.

So I appreciate your story very much, and I’m sure you have got thousands of war stories to tell with respect to keeping the discipline of a randomized control trial or even a quasi-experimental design in place, because it is not only terribly resource intensive but also management intensive, and it requires discipline.

How long is this one going to go on -- three years?

MS. CORDOVA: No --

MR. KITE: Longer. It’s actually going to be longer than three years, but --

QUESTION: Okay. Well, the other point that I wanted to make is that I was head of evaluation for AID, following on Peter Davis’s creation of the office of evaluation. And one of the things I learned very quickly was that people who should be interested at the policy and strategic design level have a short attention span, and that if you’re going to tell them we’ll get the results in five years, they’ll say, well, I belong to the Obama administration, I may not be here longer than two years; as a consequence, can’t you tell me something quicker? And this has plagued so many efforts to try to introduce long-term, highly disciplined controls.

And the last point is that if you’re going to do this, one of the things I would like to see more people spend some time on is saying where it makes the most sense to invest this amount of time and energy and resources, because I hear a lot of talk from CGD and the Poverty Action Lab -- whatever the action lab is called -- “Oh, we should be doing RCTs,” and nobody is sitting down and saying, “Where does it make the most sense to do an RCT, and where does it not?” And you have come up with a justification, I think. This is a new program. This is a serious issue, blah, blah, blah.

But for a lot of the discussion that goes on in this, it’s just, oh, we want rigorous impact evaluations. And nobody really says, “Does that make sense for this particular activity or not?” Anyway, that’s kind of a lecture more than a question, but if you have a comment on it.

MR. KITE: I guess, to address one of the questions, we are planning to do more than just a beginning and an end; there is a mid-point evaluation 18 months in, at which point there is going to be an interim report, so that we do have some data that we can present.

MR. BILLERA: Let me just quickly respond to your main point about where it makes the most sense. We agree entirely. Part of our initiative, our five-year plan, is that we are currently in the process of thinking this through. We can’t do it everywhere. We shouldn’t do it everywhere. We need to pick our spots, and so we have a series of criteria that we are trying to apply to ourselves so that at the end of five years we have a body of work that makes sense and not just a scattershot of evaluations here and there that don’t add up to much.

MR. KULCHINSKY: Thank you, Mark.

Raymond.

A PARTICIPANT: Typically the decisions about evaluation have been made at the mission level. You know, they decided on the mid-term and the summative, but when you’re investing this kind of money, it seems to me this ought to go up a level or two, to what the agency’s big questions are. And once you define that list, really in conjunction with the mission directors, you also want to ask, what do we already know?

There are now two or three books out on conditional cash transfers, and if you really wanted to know what the answers were, you design a program following those guidelines and it’s done; you’re going to have impact, you know that. So it’s more than answering, you know, what are the questions that are most important and who should be deciding what those questions are. It’s also, where do you have the comparative advantage, such that 3IE or somebody else isn’t already doing that for you? So I mean it’s a terrific opportunity given what is going on at the agency for (inaudible).

QUESTION: Can I ask a question? With government programs, there is an expectation that you’re going to have a change within the three-year funding cycle that you have got. When you were busy talking to all your stakeholders, did you discuss with them at all their expectations of how quickly things can change, and do you have a sense, for your outcome indicators, of what you can expect by when? I mean, I know we all do short, intermediate, long term, but beyond that -- like a real sense of what you might expect to be able to see?

MS. CORDOVA: Yes, I think previous studies have shown that what changes more quickly are perceptions, so things like, for example, fear of crime. I think if you have a group of individuals working in those neighborhoods and really planning activities and implementing projects that deal with crime prevention, you change people’s perceptions relatively quickly. However, for things like crime victimization, at this point it’s just a big question mark whether we are going to be able to find something after the midterm evaluation, especially because it takes a lot of effort to really measure crime right. So, for example, in our own survey we have embedded different experiments on how to measure crime, so this is a learning-by-doing process, and we will see what happens.

MR. KULCHINSKY: Thank you. We have time for one more question.

QUESTION: Jerome Gallagher, USAID. Two methods questions. One is, since you randomize at the neighborhood level, are you worried about spillover? Are you looking at neighborhoods right next to each other, is that a concern, and how are you addressing that?

And secondly, one of the criticisms of randomized control trials is that they’ll give you the average treatment effects of your intervention but not necessarily why that’s happening or why you’re getting different effects -- a large effect in one area and a small effect in another area, why you might have a positive outlier -- and is that part of what you’re going to be looking at as well?

MS. CORDOVA: Yes, definitely. Your concerns have been our concerns since the very beginning. What we have done to minimize those risks is, first of all, in terms of the location of treatment and control groups, to basically map each municipality and randomly select where in that municipality we are going to place our control groups and where the treatments are going to be, and make sure that those are far away from each other.

That’s what we have done just for that specific issue. Of course, we have municipalities that vary in size. We have one municipality -- I’m sure you know Zaragoza (phonetic) -- that is really tiny, so it’s difficult to keep the control groups far away, but we hope that since this is a multi-country study, those things that might not go 100 percent right will cancel out in the end, because we have 100 neighborhoods, so that is one of the advantages.
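
[Editor’s note: the spatial-separation point above -- keeping treatment and control neighborhoods far enough apart to limit spillover -- can be illustrated with a small, hypothetical buffer check. The coordinates and the 2-kilometer buffer are placeholders, not the study’s actual parameters.]

```python
# Hypothetical sketch: flag treatment/control pairs whose centroids sit
# closer together than a chosen buffer distance.
from math import asin, cos, radians, sin, sqrt

def km_between(a, b):
    """Great-circle (haversine) distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def too_close(treatment, control, buffer_km=2.0):
    """Return (treatment, control, distance) triples within the buffer."""
    return [
        (t, c, round(km_between(t_xy, c_xy), 2))
        for t, t_xy in treatment.items()
        for c, c_xy in control.items()
        if km_between(t_xy, c_xy) < buffer_km
    ]

# Placeholder centroids (latitude, longitude) for one small municipality.
treatment = {"T1": (13.70, -89.20), "T2": (13.74, -89.17)}
control = {"C1": (13.71, -89.21), "C2": (13.80, -89.10)}
print(too_close(treatment, control))  # pairs that may need to be reassigned
```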

And the second thing: are we just looking at average treatment effects, or can we say something else? I will say, yes, that’s precisely our goal. Because of the sample size that we are using -- we have lots of individuals who are going to be interviewed -- we are going to be able to tease out for whom the projects are working better: for women, for men, for the young, et cetera. So that is one of the things that we are going to be able to answer.

The second thing, the why question -- I think here is where the qualitative component really plays a role. Those questions we definitely won’t be able to answer just with survey data, and that’s why we are using a mixed-methods strategy.
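
[Editor’s note: for readers unfamiliar with the terms in this exchange, here is a minimal, hypothetical sketch of an average treatment effect (a simple treatment-minus-control difference in means) and of the “for whom does it work better” subgroup comparisons mentioned above. The records and variable names are fabricated for illustration; the real study draws on roughly 150 survey respondents per neighborhood.]

```python
# Hypothetical sketch: average treatment effect and subgroup effects
# computed as simple differences in means from survey records.

def mean(values):
    return sum(values) / len(values)

def average_effect(records, outcome="fear_of_crime"):
    """Difference in mean outcome between treatment and control respondents."""
    treated = [r[outcome] for r in records if r["group"] == "treatment"]
    control = [r[outcome] for r in records if r["group"] == "control"]
    return mean(treated) - mean(control)

def subgroup_effects(records, by="sex", outcome="fear_of_crime"):
    """Average effect computed separately within each subgroup (e.g. women, men)."""
    levels = sorted({r[by] for r in records})
    return {level: average_effect([r for r in records if r[by] == level], outcome)
            for level in levels}

# Tiny fabricated example; lower scores mean less fear of crime.
records = [
    {"group": "treatment", "sex": "female", "fear_of_crime": 2},
    {"group": "treatment", "sex": "male", "fear_of_crime": 3},
    {"group": "control", "sex": "female", "fear_of_crime": 4},
    {"group": "control", "sex": "male", "fear_of_crime": 3},
]
print(average_effect(records))              # overall difference in means
print(subgroup_effects(records, by="sex"))  # separate effects for women and men
```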

MR. BILLERA: Ma’am, I really liked your question, and Jerome, you touched on it too -- the why question and the three-year funding cycle -- because it highlights one of the weaknesses we have discovered. One of our weaknesses at times within USAID is that our theories of change can be pretty muddy. If your theory of change is really solid, then you can do a good job of saying, well, I know you’re only interested in the three-year cycle, or you’re most interested in the three-year cycle.

Well, our theory says it’s a 10-year process, but what you should be seeing by three years is this, and we can show you that, even though we’re not going to be able to show you the end. But we have been surprised, in working with our colleagues, at how much of a challenge it has been to get really good, tight theories of change: what do we expect to see, so that we can then measure whether or not we’re getting it? So it is a very good point.

MR. KULCHINSKY: No, thank you very much, Mark. Now with that said, I would like to really thank Mark, Eric and Abby for their presentation.

(Applause.)

MR. KULCHINSKY: And that concludes the third workshop on Democracy and Governance. There is a break for about, I think, 15 or 20 minutes and then we’ll reconvene here once again. Thank you.