Remarks
U.S. Department of State Third Annual Conference on Program Evaluation - Programs Track
Washington, DC
June 9, 2010


DR. SIGMON: Good morning, everybody. I'm going to call this session to order. I understand that these 45-minute sessions fly by, especially when we have numerous speakers. I'm Jane Sigmon, and I'm the senior coordinator for programs in the Office to Monitor and Combat Trafficking in Persons. And I want to welcome you to this workshop presentation on evaluability assessments.

Our office is the office that's responsible for issuing the annual Trafficking in Persons Report, which, as everyone may or may not know, is due in June and will be released this next Monday.

The programs section -- and we have several members of the programs section here today, from G/TIP; we're very pleased to have them with us -- the programs section manages a centrally managed grant program, about $22 million a year, and we support anti-trafficking programs around the world.

We currently have about 175 programs in 65 countries. And we have several programs officers who visit, and we work with our embassies to manage these.

We are very committed -- if you'd go to the next slide -- we're very committed to evaluating the effectiveness of programs. As you all know, trafficking in persons really emerged about 10 years ago as a topic for global interest and foreign assistance.

Hundreds of millions of dollars have been spent to combat human trafficking. Unfortunately, it's a relatively young field and there's not enough good data about what actually works. Our office has been very committed to measuring effectiveness, and we would like to have more empirical data that tells us what actually works, which elements of programs really produce the intended effects.

So as we look at programs, we have many programs that have been -- that we support that were fledgling programs. We listened this morning to Dr. Levine talk about starting up a program from scratch and building evaluation design in at the beginning, which we're paying a lot of attention to with our new programs.

But we have a lot of programs that have been in existence already, and we need to be able to measure effectiveness. So the way we wanted to go about doing that was to look at their evaluability, and that's what this presentation is all about.

Evaluability is a word in which we're -- what we're talking about is, does the program have enough trueness to its design? Does it have a good design, a trueness to its design and implementation, and enough data? Is it documented well enough that we could in fact evaluate its effectiveness? So this session is about evaluability assessment. If you would go to the next slide.

The purpose is just to give you an idea about what evaluability assessments are, how we've been implementing them, a little bit about how we selected our programs, and we're going to talk also some about lessons learned.

My part of the session is just some introduction because I really want to save the time for our presenters who have been implementing these programs for us. If you would go on to the next slide.

We published a solicitation that was solely devoted to evaluability assessments, and we found two excellent, outstanding organizations that we had not been funding in the past, WestEd and the Urban Institute. And we funded each of them to provide us with evaluability assessments of four different G/TIP-funded programs.

We selected the programs, and we were looking at programs -- you heard talk about what were the essential -- how to prioritize your evaluations and all. We had several different purposes, so we were a little mixed in our approach.

One, we were looking at the significance of the programs and the types of programs that we were funding. We wanted a geographic range, also, to be able to look at programs in different parts of the world. We wanted differences in types of programs, so we actually are looking at the evaluability of some prevention programs and some direct service programs.

But we wanted to make sure that we had programs, generally speaking, that had good designs, clear objectives, and indicators that were linked to those objectives. I will admit to you that there were a couple of programs that we also picked because we thought they needed more technical assistance. We thought they had promise, but they needed a fair amount of technical assistance.

Now, as we go to -- I'm going to switch to the next slide, if you would. I'm here to -- I'm very pleased to be here with our two implementers, representatives of WestEd and the Urban Institute. So you're going to be hearing in this workshop from Dr. Beth Rabinovich from WestEd and Frances Gregg (phonetic), who have worked on four of our projects, and three representatives of the Urban Institute, Bill Adams, Dr. Meredith Dank, and Colleen Owens.

These teams have been doing a terrific job for us in helping us to move along. And we've asked each of them to present different aspects of evaluability assessment in this workshop for you.

So I'll turn it over to Beth.

DR. RABINOVICH: Good morning, everybody. I would also like to introduce my colleague, Susanne Asama Bibi (phonetic), who's working on our project. She's here with us today.

This morning I'm going to focus a bit on methodology and technical assistance needs, whereas I understand that the Urban Institute is going to focus more on a particular project and the TA that you're giving them.

So I'm just going to give you a little bit of background on the four projects that we're conducting evaluability assessments of, talk about the research questions or methodology for doing the evaluability assessments, and then sort of wrap up with talking about a few of the technical assistance needs that we've identified.

The projects -- and you can follow along in your handout, if you want -- the projects that we are evaluating -- or not evaluating, excuse me; conducting evaluability assessments, EAs, for the rest of the presentation -- are in the Near East region and the Western Hemisphere, and two are in the South and Central Asia region.

I'm not going to go through every aspect of the projects. But I want to start out by saying that most of the projects, or all of them, take a multi-pronged approach in terms of their anti-trafficking activities. They focus on direct service, public awareness, working with the judicial system, with the police, border patrols. So they really, in combination, try to take a multi-pronged approach.

And the first project -- as I said, I'm not going to go through every detail of the project, but we wanted to highlight some of the interesting aspects of each one of them -- the first one Frances is going to talk a little bit about in terms of some of their interesting and unique activities.

MS. GREGG: The first one is with the Near East. And the centerpiece of the program is a safe house, a shelter. And so they're very involved in the rescue of domestic migrant workers, particularly female.

And they really provide a very strong array of services to those who they rescue and have in the shelter, as well as some others who are able to stay in their employer's home.

Perhaps one of the -- in addition to the medical services, food, shelter, they offer extraordinary legal services. These cases, when taken to trial, can last generally from three to four years. So this is a lot of support. And usually the worker has already been repatriated.

They've done training for the security staff. They have a 24/7 hotline, which the social workers handle. And one of the things that they do that was very interesting was simply that they're doing everything they can to make sure that the workers understand their rights as up-front as they can be.

They work with NGOs in the countries of origin as well as the diplomatic missions and the governments. But they also hold seminars. When the people arrive in this country, they hold seminars to help them understand what they should expect and make sure that they have their number if they've got questions.

DR. RABINOVICH: Yes. And I think what particularly struck me is they have the seminars in the airports, so people know their rights as workers.

Then I'll just briefly talk about the three other projects. One of them is very interesting. It's changing -- the effort is to change norms about young men using prostitutes, many of whom are victims of trafficking. They've been trafficked. And they do this through peer-to-peer counseling. And we know from previous research in other areas that that's a very effective tool for changing behavior.

There are also two other projects that involve rescuing migrant workers or workers who have been trafficked for domestic work or sometimes for prostitution. And those projects also take a multi-pronged approach in terms of working with the judicial system and providing services, rescuing the victims, providing services, and helping them become repatriated or go back into the community and following up with them there.

All right. Now, if you'd change the slide, we're going to talk a little bit about the evaluation questions, the methodology. Now, we've kind of taken what you'd call strict adherence to the areas of evaluability assessments in terms of looking at the design, the implementation of the projects, as well as what kinds of methods you're using to measure their activities, their inputs, and their outcomes.

And I'm not going to read every research question, but I want to give you a flavor of what they are. And these are just examples. We have, as Francis Imbibi (phonetic) will tell you, a long list of research questions.

But in terms of project design, to what extent are the project goals realistic, given the activities, inputs, and time frame? To what extent are project performance targets sufficiently ambitious yet realistic, given the environment, given the context in which these projects operate, which I have to say are quite different?

Project implementation: To what extent is a project implemented as planned? What adaptations are necessary? What have been the facilitators and the impediments to the project implementation? To what extent is planning for sustainability part of the project implementation? You can go to the next slide.

Okay, measurement. This is a very important area. And as Jane pointed out, an evaluability assessment is to determine whether a project is ready for evaluation. So we're not actually evaluating the project, per se, but we are looking at areas such as, what are the indicators to measure inputs and activities?

Did the project collect baseline data? To what extent and how are project participants tracked? What methods are used to collect, store, and analyze project data? And finally, to what extent are the data appropriate for measuring project impact? Next slide.

Okay. I'm going to briefly just tell you about the methods that we've used. And I think that the Urban Institute has used similar methods.

We sent a letter of introduction about the evaluability assessment, although Jane and Casey had informed the projects beforehand to expect this letter and they had a good understanding, before we even called them, about their participation in the evaluability assessment and what it meant.

We did a thorough review of project documents. This included proposals, interim reports, case studies. Many of the projects had developed documents that I would consider part of the evaluation process. For instance, one of the projects developed a two-pager on what the benchmarks are for a successful community vigilance committee. And they're very much involved in thinking about, what are the indicators that we can measure about these committees?

So that was another type of document that we read. And I won't go into detail about all the documents. But we can talk about some of the details after the session if you're interested in them.

We also developed logic models independent of the logic models that the projects develop. I like to develop a project logic model without looking at the project's logic model, and that's to ensure that we have an understanding of what the project's about, and also to compare. And you'll see later why this was important, to compare sort of the project's conception of what their logic model is. Can you go to the next slide, please?

We designed data collection tools. We started out in terms of actually dealing with the projects by interviewing people, staff, at the NGO level about their project, where they were in terms of their timeline. We asked them about their efforts in terms of evaluability -- I mean, evaluation efforts.

So many of the projects have been very committed to evaluation. For instance, one of the projects at the NGO level had held a brainstorming session with experts in the field in terms of how to evaluate their projects.

We're also doing site visits to the projects, working with the staff, looking at what they've done, observing activities -- go on. I think -- are we running out of time?

DR. SIGMON: No. You're fine.

DR. RABINOVICH: Am I getting to my 15 minutes?

And an important part of this project is to identify technical assistance needs and provide them. And I have to say that the people that we've talked to so far are just passionate about their problems in terms of their service delivery.

But they're also very interested in evaluation because they understand, if they can show success of their project, it helps in terms of seeking additional funding. And it also helps in terms of, well, if they find something that isn't working as they had anticipated, they can tweak their project design.

So they are very committed in terms of delivering services and evaluation. Okay. Next slide.

Okay. Just as an example, in the back of the handout is an example of a project logic model that we developed. Actually, Frances developed this. And we have this ongoing discussion. She calls this her simplified project logic model. But I took "simplified" off because I thought it was pretty thorough --

MS. GREGG: Abbreviated it. I had to get it on one page.

DR. RABINOVICH: So we'll be glad to talk about that after the presentation. And you can talk to Frances about her logic model. Okay. If you can go to the next.

We wanted to step back for a minute. We could go into more detail about the methodology, but we wanted to talk about just the process of delivering technical assistance, and some of the technical assistance needs that the projects asked for, and some that we identified.

And I thought it was very interesting, when we first talked to the staff at the headquarters NGOs and in the field, that they very much had an agenda of the kind of technical assistance that they wanted.

For instance, during Frances's site visit, she was asked for some technical assistance on how to measure cost/benefit. So we thought that that was on a pretty sophisticated level in terms of what their requests were.

Okay. In terms of some of the technical assistance needs that we identified, clarification of terminology. What's an input versus an output versus an outcome? We noticed in some of the logic models that the projects prepare that this is an area where they do need technical assistance.

And I noticed in the RFP that came out a couple months ago that the language in the RFP was very clear. It was almost technical assistance in and of itself in terms of defining what activity, input, output, and outcome are. So G/TIP's already started with that.
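
[Editor's note: purely as an illustrative aside, and not drawn from the presenters' materials, the input/activity/output/outcome distinction can be sketched as a small piece of structured data. The shelter example and every entry below are hypothetical; this is a minimal sketch of the terminology, not any G/TIP project's actual logic model.]

# Hypothetical logic model illustrating the terminology above (Python).
# None of these entries describe a real G/TIP-funded project.
logic_model = {
    "inputs": ["grant funding", "shelter facility", "two social workers"],       # resources the project starts with
    "activities": ["operate a 24/7 hotline", "hold know-your-rights seminars"],  # what the project does with those resources
    "outputs": ["hotline calls answered", "workers attending seminars"],         # direct, countable products of the activities
    "outcomes": ["workers able to state their legal rights",                     # the changes the project is meant to cause
                 "trafficking cases referred for prosecution"],
}

# A simple consistency check an evaluability assessment might start with:
# is every component of the model actually filled in?
for component, items in logic_model.items():
    assert items, f"logic model is missing entries for: {component}"
    print(f"{component}: {len(items)} item(s)")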

Also, in one of the projects, they had a really interesting project design, one that we think builds on the literature. But the project isn't that familiar with the literature, so we plan to show them like, hey, this is really an important approach that you've taken, and look what the literature says.

And Frances is going to talk a little bit about some of the TA needs that she identified.

MS. GREGG: Well, when I was onsite, I was looking at what data they do have available. And one of the things that we found -- and we actually knew this from some of the documentation, and Jane and Casey had come in on this before -- they do a lot of case studies. It's a lot of social workers. And that's great, and that gives you a lot of flavor for that individual.

And one of the things that's suggested is that they need to be able to take a step back. And just as they're looking at the holistic needs of the individual, they need to apply that same methodology to looking at the project in that same way.

They've got a database that has a lot of the information that is used to build the case studies. They need to look at that in a more systemic way to see, okay, now this may be an exceptional example and important because of the needs provided.

But if you look at all of the people that you've provided services to, what do they look like? What is their time? What is their time in the project? When did they come into the -- at what stage of the process did they come into the program?

I mean, they've got people who come into the program after eight months of not receiving a salary, and then they've got people who have lived there without a salary. Okay? Those are two very different -- and they need to look at the database in that way.

In fact, they were working on the new progress reports when I was there, and I made some recommendations which G/TIP may or may not want to see. But anyway, there I was. So that's been my approach.

DR. SIGMON: Okay. Thank you. We want to make sure that the other team working on this has time. Before we move on to the Urban Institute representatives, I just want to acknowledge Casey Branchini, who is -- there she is -- who's a grants assistant and research specialist in our office who's been working on this program.

And she's the infamous Casey that's being referred to throughout this. So I wanted to introduce Casey.

So Bill?

MR. ADAMS: Okay. Great. We appreciate the opportunity to present to you today on the Urban Institute's evaluability assessment project. UI received funding through a cooperative agreement from G/TIP to conduct evaluability assessments of four human trafficking programs that receive funding from the State Department. And we are assessing two programs in Africa and two programs in South Central Asia, which we'll get into a little bit more in a bit.

But first just let me introduce the research team. Along with myself, Bill Adams, we have research associates Colleen Owens and Meredith Dank. And we also have a fourth member of the team who couldn't be here today, and her name is Devon Brookins (phonetic).

Okay. On this slide, we're kind of presenting the Urban Institute's evaluability assessment process. We divided it up into six tasks, and currently we've -- thus far we've completed the first three tasks, and we are in the middle of working on tasks 4 and 5. So let me just quickly go through each of them.

Task 1, site selection and review of the program materials: The first thing we did was we reviewed program materials from 10 potential sites. And we then ranked these sites according to compatibility between the Urban Institute team expertise and experience and program focus.

We then provided this list of ranked sites in a memo to G/TIP. Shortly thereafter, we met with G/TIP and we agreed upon the four sites that we would do. G/TIP sent out invitation letters to the sites, and upon their acceptance, we were then granted access to the full program files, which we carefully reviewed. This included the project proposals and curriculum. And so we went through those carefully and learned as much as we could.

Task 2 was to conduct telephone interviews and develop protocols. The first thing we did was reach out to the sites via e-mail. And we kind of described the project, described the Urban Institute, who we are, what we do, and just described evaluability assessment in general, what the goal was.

We then set up conference calls with each of the sites, which we conducted over Skype. During the calls, we asked questions about the program, its goals, its objectives, as well as data collection strategies. We then decided upon a schedule of dates for the site visits.

Prior to the site visits, we asked programs to send us electronically any background materials they could -- reports, brochures, anything at all from which we could learn more about the program.

Task 3, conduct site visits. Each site visit was conducted over a two- to three-week period, and we interviewed all program staff in addition to program participants and program partners, where applicable.

All program participants signed consent forms, and then they received a small monetary incentive for the interview. While on the site visit, we also collected data forms, blank data forms used for data collection, and gathered documentation of databases, anything we could get, like schematic diagrams of variables and even skeletal versions of the database.

Task 4 then was to prepare the evaluability assessment reports.

Now, once all the site visits were completed, we began the process of typing up our notes for analysis. And we are currently in the midst of this, and we are beginning to write up the evaluability assessment report for each site. The reports will incorporate initial program analysis, site visit assessment, and overall findings.

Out of these reports comes task 5, and that's to present our recommendations for a technical assistance strategy for each program. Now, we already provided some technical assistance while onsite, and it's going to continue.

Task 6 is the final step, which we have not yet reached, which is to prepare the full, final report for the research project.

That's a basic overview of the process. I'll now turn it over to Colleen, who will talk more about the four sites, lessons learned, and what is needed for a full impact evaluation.

MS. OWENS: Thanks, Bill. So just to give you a brief overview of the four programs that we conducted the evaluability assessments of, so as Bill mentioned, two of those programs were in Africa and two in South and Central Asia.

The first project in Africa was really focused on providing individual and group counseling, psychosocial counseling, for victims of trafficking. And those victims were primarily identified through community sensitization that the program conducted in the target area of the program.

In addition, they do capacity-building with various stakeholders in the area, both working with victims of trafficking as well as other populations that may intersect with victims of trafficking. So street children would be an example, victims of other types of crimes.

The second project in Africa was really focused on building local, volunteer-based community groups in different regions throughout the country. So by the time that we visited the project, 36 groups were developed throughout the country, both in main cities and rural areas.

And those groups were responsible for creating a networking and referral system to really be the eyes and ears on the ground to identify trafficking in their local areas, to refer those potential cases to the local police department, to social service providers, and to the program.

And the program would come out and do a rapid assessment to identify whether or not those were in fact victims of trafficking. They would then coordinate the process, both the service provision process and the legal process, on both the local and national levels, where relevant.

In addition, the program would do significant reviews of anti-trafficking legislation that was passed in the country in 2005. And they provided their recommendations for changes to the law, and are currently in the process of trying to work that through the various ministries in the government.

In addition, they are involved in a lot of training of criminal justice stakeholders, including magistrates and local law enforcement. They also train victim service providers that work with trafficking victims as well as, as I mentioned, a host of other NGOs working in the country with various populations that may or may not come into contact with trafficking victims.

The third project that we visited was in South and Central Asia. That project was very holistic in its approach and had a lot of activities around the rescue, rehabilitation, and reintegration process. Since G/TIP's funding primarily fell in the rehabilitation side of their program, we really focused our efforts on conducting an evaluability assessment of that portion of their program, although we did learn a lot about the other aspects since it does relate to rehabilitation.

They have three residential centers where they provide health care, psychological counseling, vocational skills training, formal education, and informal education, depending on the age of the client. In addition, they have child care where needed and conduct parenting skills training.

And the last project that we visited was also in Asia and focused primarily on awareness-raising and training youth leaders to advocate and inform networks that they're involved in about trafficking in persons, and to refer potential victims where possible.

So -- thank you -- the next slide, what is needed for a full impact evaluation: Really, this has sort of served as the basis for the development of our interview protocols. So driving the evaluability assessment is, really, what do you need to be able to conduct an impact evaluation?

And so the questions and the design of our research protocols reflected a lot of these questions. So we want to be able to determine, do they have an appropriate sample size and frame? Is it consistent with the objectives of the program? Are we able to have a sufficient number of either program participants or partners that we could sample over time to be able to look at impact? Does the program operate according to a logic model?

Something that we should have mentioned earlier, that I should have mentioned, was that our logic model development is being done a little bit differently, I think, than WestEd's.

Part of what we do is do the logic model after the site visit, and we build in a lot of questions about the logic model into our interview protocol so that we can see, across all staff members, whether they are -- they may not know that they're thinking according to a logic model design, but we want to be able to pull that out from our interviews.

And then what we're doing is incorporating that information in our evaluability assessment reports currently that will be then shared with the sites for comment and review. Technical assistance will be built around that.

In addition, we're looking at whether they have comprehensive data collection tools and processes. As Bill mentioned, we are looking at their systems that they have in place. When we did look at the system, we didn't look at any records. It would have just been the schematics or variables that they're collecting.

If they've produced any reports on using their data collection tools, we were looking at those reports and provided some onsite technical assistance in terms of what variables could be added to their data collection systems.

And we were very impressed with the systems that are currently in place in many of the programs. They were a lot more comprehensive than I think we had imagined them being, and certainly, compared to a lot of the programs we look at in the United States, it was really great to see that.

In addition, we are looking at potential threats to validity if an impact evaluation were to occur, so trying to assess what other programming is going on in the area where the program operates. One of the programs is residential, so that sort of eliminates a lot of the potential bias. But for some of the programs that deliver activities and services to clients that may also intersect with other programming, we want to be able to make sure that we have that documented.

We also want to look at attrition of clients and staff members, whether they're actually documenting, over time, clients coming and going, how long they stay with the program. And it's also important to know this for the staff members, to see what the turnover is with the program, because that might have an impact later on.

We also wanted to get a sense of what the funding stream was looking like, what the plans were for sustainability in funding for the future, to make sure that we could identify, where possible, any potential threats to conducting the impact evaluation.

It's also important, in addition to financial buy-in, that there is staff buy-in and support of an evaluation. As Beth had mentioned, across all programs that we looked at, they had a very sophisticated understanding of why they supported evaluation.

It wasn't -- part of it is there's an understanding that this is the direction the field is moving, and we want to be able to show that our programs are working. But there was also a sense of, we're really committed to our program and we want to make sure that it's working according to the goals and objectives that we've created. And, where possible, if we need to tweak it, we would like to do that. So that was really great to hear.

And we also wanted to make sure that if programs were operating in various locations, as some of them were, that the activities that were being delivered were being delivered consistently across locations, which would be important later on if an evaluation were to occur.

So some quick lessons learned conducting evaluability assessments. Some of these are particular to the projects that we focused on, and some are generally what we've seen during evaluability assessments.

It's really important to establish trust and relationships early on. So as soon as programs had given their consent to G/TIP, we reached out to all of them fairly quickly through e-mail and conference call. And what we found was really helpful in establishing trust and further getting buy-in from the programs was to share consent forms, to share our staff confidentiality pledges, to explain the IRB process.

And so they knew that we were committed to human subjects protections, especially with such a sensitive topic as trafficking and the potential that we would be interviewing some program clients, which is very important.

And early on, we really went back and forth with a few of the sites around their concerns with the monetary incentive that we would -- that we have to provide, according to our IRB, and making sure that it was culturally appropriate. And so in many of the cases, we needed to adjust that to make sure that it fit with the program's need.

Okay. This is the last slide, so I'll hurry up. So it's also important to again define the research objectives and your role as researcher, and to explain that it's not a performance evaluation, because I think that is a misunderstanding we often see, not just with these programs but with others that we've done this with. You have to explain many, many times, both before going onsite and while onsite, that we are not the funder, that we are objective, and what our role is. In addition, you have to clarify expectations and the timeline so that they know what they can expect both when we're onsite and then, following our site visits, explain the technical assistance that we would be providing and ask them to come to us with their needs and recommendations as well, explaining that it is a two-way street.

Then there's interpretation and translation. What we found was helpful was to share any translated documents with the program before we went onsite, so that they had an opportunity to look at those documents and to make sure that they were translated and interpreted correctly; to meet with our interpreter prior to conducting the interviews -- we met various times over the phone but also in person prior to the interviews; and also to debrief on a regular basis to make sure that the information we need to be collecting is being collected, to answer any questions the interpreter may have, and to make sure that they're doing okay, because a lot of the interviews are very difficult. So it was really important to constantly keep communication open.

Also, to be adaptable while also ensuring human subjects protections -- oftentimes, interview locations are less than ideal, definitely not an air-conditioned office; many times they're outdoors. And you still need to be flexible and work with the programs, but also make sure that they understand the importance that the interviews need to be conducted one-on-one, and that we need to minimize the number of people in the area, and try to make those private as much as we could.

And also, to just anticipate additional costs and time, and to be flexible with the programs -- if there are changes that need to be made at the last second, just to go with the flow, basically.

So I'll turn it over to Meredith.

DR. DANK: Thanks, Colleen. So I'm going to talk specifically about one of the programs that we did an evaluability assessment for. And this was the program that looked -- that specifically dealt with rescue, rehabilitation, and reintegration.

So Colleen had described a little bit about the program. They have three holistic victim centers that provide such services as shelter; three meals a day; informal education; formal education; vocational training such as hairdressing, weaving, sewing, agriculture; and they also had a doctor on staff, and also a psychologist and therapist.

So in addition to that, they had very extensive follow-up activities. So the victims were allowed to stay -- usually stayed about one or two years, but if they needed to stay longer, especially if they were getting formal education, they were allowed to complete their formal education.

And then once they were ready to be reintegrated back into the community, the program would give them the tools needed to be able to set up their own business. So once that happened, they would follow that victim up to three years afterwards.

And this included going all the way out, traveling miles on the back of motorbikes and buses, to follow up with the victim and make sure that they have not fallen back into any kind of situation that would put them back into being a trafficking victim.

They also had a very comprehensive database. So each -- I guess we'll call them departments -- so the medical group, the therapy group, the educational group, the vocational group, they all had access to their own databases.

And then there was somebody in the main office who collected that information and was then able to analyze it. So the doctor was not able to access the information that was input by the education group or the therapist. This way they were able to keep it confidential.

And then, from the analysis that we've done so far, we've seen that the variables that they've collected through the database match their program logic model.

They also have very confidential and systemic forms. So the forms come in multiple languages -- first, multiple languages, so this way, if anybody were to happen upon the forms, they would not be able to read what was written in them.

So the staff were trained. And I'm not going to say what languages because it would identify the program. But in addition to that, they were all locked up, and the only people who had access to those forms were the center directors.

There were also regular program staff meetings. So the centers would meet regularly, usually on a weekly basis. But also, the main office would bring in all the major staff from all the centers and would brief them on what was going on within the project as a whole.

So if new funding came in, or if there were new program activities that needed to be communicated to them, they did so in a very systematic way. And then for whatever program staff were not able to attend those meetings, the staff that did would go back and inform them of what was going on.

There were also many staff trainings that the staff were able to participate in. So this included medical training on STI education and HIV education. There were also trainings on education, informal and formal education, and skills training.

They also had a very dedicated staff, and as Colleen had mentioned, staff buy-in is very important. So a lot of them had been working there since the beginning of the program, many of them starting as interns, so they'd been there for over 10 years.

They were -- when we asked them if they were supportive of evaluation, a lot of them understood what evaluation entailed, and they understood how it could benefit them. So they were excited about the opportunity.

And then as far as providing technical assistance to this program, one of the first things we did, preliminarily, was give them assistance on their logic model development. Once we're able to analyze all the information that we've collected onsite, we will then provide them with more assistance on that.

And then another thing is improving baseline and follow-up data collection. Although they are now collecting data through very systematically used forms for everything -- for intake, follow-up, education, medical, and all of that -- there were still some things we noticed that they could be a bit more systematic with, particularly the follow-up, because they have a three-year follow-up.

As the forms look now, a lot of it was almost like a case review as opposed to filling out specific fields that can be then inputted into the database. I have one minute. Okay.

So then staff training on impact evaluation: Although they had a clear understanding of what impact evaluation is, what would also be very important is to make sure that they knew what would be needed from them and what benefits they could get from an impact evaluation. Ensure consistent delivery of services. You can go to the next slide.

And then options for delivering training and technical assistance. So the first thing would be to deliver evaluability assessment and technical assistance status reports. What we discussed with them onsite was that they were going to be participating in this as well. It's not as though they'd be blindsided by this report that we're writing. We want their feedback and input as well.

So this is something that we would do. We would send them the reports to look over and give us their initial feedback, and then we would incorporate anything that they maybe had concerns or questions about.

Having consistent communication with the sites. Even though the site visits are over, consistently e-mailing them, asking if they have any questions, and also making sure that they understand any concerns or feedback that we have for them.

And then also building longer-term technical assistance into the final report so that they know that we didn't just go in there and give them a bunch of things that they need to do, that not only we but also G/TIP is there to help them through the process.

DR. SIGMON: I'm just going to wrap up very quickly. We've learned a tremendous amount through this process. We've learned about how to conduct evaluability assessments. I think it's been beneficial for the G/TIP programs team to have a clearer idea about what it is that we want to be building into our programs from the beginning. Clearly, our grantees have benefitted from the technical assistance, and we've already identified a handful that we believe are good candidates for impact evaluation.

So this evaluability assessment is a tool that we think is really, really useful to us. And I think -- this is our first try at it, and it's been quite a learning process. I would hope that we would have funds to be able to do additional impact evaluations, but also additional evaluability assessments and learn from this experience and apply it to other programs in the future.

It's just about time for people to get their break, so I'd be happy to take a question or two, but I really don't want to hold up the whole group more than about a couple more minutes. Yes?

MODERATOR: You could use the mike, actually, for the questions.

QUESTION: Marcel Habibim (phonetic) from VA. I'm very interested in how long this process takes. So from start to finish, how much time has your involvement been? And then once the selection was made that you'd be working with these two groups, how long does their portion take?

DR. SIGMON: It's a -- first of all, these are one-year grants, cooperative agreements. And from -- I think it was within about three months we selected the organizations, and then the 10 or 12 programs that you all selected from. And then we matched them up. And we'll be finished by September 30th, and it's going along pretty well.

The prep and onsite -- the prep through onsite, I'm sure, would probably take a minimum of three months for each one, and probably closer to six. There is quite a lot of preparation.

But again, this was our first time doing this. We had to learn about human subjects protection. We had to learn about how to approach these grantees. We asked them, would you like to participate in this? And, you know, if you're a direct service provider out there in the field, sometimes "We'd like to evaluate your program" is what people hear -- it's a little scary.

But everybody that we approached agreed. And you heard, we're really pleased with the outcome.

Yes?

QUESTION: Jane, I think this parallels many of the types of evaluability assessments I've seen out there in the evaluation community. One of the things we're doing now at the National Institute of Justice is looking at overseas programs, and rather than do evaluability assessments, we've kind of expanded a little bit to do transferability assessments, to look at the cultural and the support factors and the like, and determine whether programs could be moved from one part of the world to another, whether it had the appropriate legal structure and so on and so forth.

And I just suggest that one of your selection criteria eventually might be the net worth of the program in other environments.

DR. SIGMON: Thank you. That's -- I hadn't heard about that. Evaluability assessment itself is a term that doesn't get used very often, and in fact, it even slipped from the title of this presentation somehow. I think Spell Check does not accept evaluability assessment.

(Laughter.)

DR. SIGMON: And I think somebody over here had a question? Yes?

QUESTION: I just had (inaudible). So your logic model on the back talks about short-term and long-term outcomes, but it doesn't look like you have any measures to get answers to that. Is that a key ingredient to determining that it's ready to be evaluated, that you have data on outcomes?

DR. DANK: Sure.

QUESTION: I didn't see it in here, but �'�'

DR. DANK: Yes. And exactly what we would measure from that. And to some extent, it also --

QUESTION: Does it have to be quantitative data?

DR. DANK: No. It could be qualitative, certainly, particularly in terms of looking at legislation and policy and procedures. It could be things like how long -- if there's a -- how institutionalized a program is. Yes.

Okay. Are they constantly renegotiating with a new director -- the retention center, the migrant center, the (inaudible) center, and whether or not they can -- how vested they are in the MOU.

And so every time there's a new person, is it taking eight months to get up and running? Or is this institutionalized enough that a change in staff doesn't affect anything?

Other measures --

DR. SIGMON: I think to some extent she's limited by the paper. We said one page.

DR. DANK: Yes. I was on the -- the other thing I'm limited on is baseline. This program has been in operation, technically, since 2003. And so there are some measures I don't have, and we're looking for (inaudible).

QUESTION: I'm wondering, because if you don't have quantitative data, how do you show causation? Right?

DR. DANK: Right.

QUESTION: It's hard to even show correlation.

DR. SIGMON: Why don't we have -- I think these folks would be available at the end.

There's one other quick question -- did you have one? And then we're going to close.

QUESTION: I'm Bruce Kutner (phonetic) from the GAO. We did a couple reports a few years ago, and I really want to compliment you for where you are today from where you were a few years ago.

DR. SIGMON: Well, thank you.

(Applause)

DR. SIGMON: And we were very mindful of the fact that evaluability assessment was a process that was -- we were guided to the National Academy of Sciences and the GAO, and we truly have embraced it. So we appreciate it, and glad you're here.

I think everybody needs to go because there's a session at 12:00 noon that I know everybody wants to go hear, the foreign journalists. And thank you so much for coming.